The Reminger Report: Emerging Technologies
Neurotechnology, AI, and the Law
In this episode of The Reminger Report Podcast on Emerging Technologies, host Zach Pyers is joined by Sophia Kusner, a law clerk from Reminger’s Columbus office and a law student at Ohio State.
Together, they explore the rapidly evolving world of neurotechnology—devices that monitor and influence brain activity—and its intersection with artificial intelligence.
From wearable tech to brain-computer interfaces, they dive into the legal and ethical challenges surrounding data privacy, informed consent, and accountability in a future where thoughts may be digitally recorded and even acted upon.
ZBP Zachary B. Pyers, Esq.
SK Sophia Kusner
| ZBP | Welcome to another episode of the Reminger Report Podcast on Emerging Technologies. I’m Zach Pyers, and today I am joined by one of our law clerks here in the Columbus office, Sophia Kusner. Sophia, first, welcome and thank you for taking the time to join us, and if you would, introduce yourself just a little bit to our listeners.
| SK | Yes, thank you for having me. My name is Sophia, I’m a rising 3L at the Ohio State University, and I’m really excited to be working at Reminger this summer. It’s been a great experience so far.
| ZBP | Well, we have enjoyed having you, so thank you. Now today we’re not going to be talking about your law clerk experience, we’re going to be talking about something slightly different – a legal frontier in neurotechnology. Now, for those people who are listening who may not be super familiar with the concept or the term neurotechnology, can you just explain what it is and how it’s evolving into the wearable technology space?
| SK | Yeah, so neurotechnology is kind of a big, confusing word, but it’s actually fairly simple. It’s essentially just technology that can monitor, record, or influence your brain activity. So, this can be done by wearing a smart watch, a smart ring, things like that. A lot of it is still being tested and it’s still evolving today, but many leading technology companies – a big one is Neuralink, which is Elon Musk’s technology company – are currently working on testing more advanced products. So, it’s evolving, but it’s still in a fairly new stage.
| ZBP | Now, so, you already started talking about this – at least tangentially and I was like, oh, I wonder if she’s going to start – what are some of the current real-world applications to the neurotechnology? I know that I’ve read some of the articles about Neuralink and some of the testing that they’re doing and some of the first people that have kind of been using the products – but you know, tell us kind of what we’re seeing in the landscape currently.
| SK | So, real-world applications – the most popular one is just monitoring all aspects of the brain. So that can be stress, you know, chronic pain activity. Neuralink specifically, in their current program, is testing a product right now that can help paralyzed people control devices with their minds. So, pretty advanced stuff, which is far ahead of what we’re seeing right now in everyday use, but very interesting, nonetheless.
| ZBP | Yeah, honestly it sounds wild when you say it out loud and you start explaining – like oh my gosh, controlling things with their mind. I can’t help but think about the old Star Wars movies and the Jedi mind tricks as they are maneuvering things -
| SK | Yeah.
| ZBP | Without actually physically touching them. Now, I know that this industry, like almost any advancing, you know, industry is usually fraught with legal and ethical issues. So, tell us, at least as we are looking at this now in a current landscape, what are some of the biggest legal challenges we’re seeing with this wearable neurotechnology?
| SK | So, the big legal question surrounding this is going to be the idea of privacy – data privacy, data security – because the main issue is, how can you have somebody’s consent when you don’t even know the possibilities of what this kind of technology can do? You know, there’s endless potential in the data it is able to collect, which makes it really difficult for someone to express consent to that large of an area.
| ZBP | If I hear you right, it’s almost like we don’t know what the capabilities of all this technology could be -
| SK | Right.
| ZBP | So, it’s hard to consent to the unknown.
| SK | Exactly.
| ZBP | I think that’s a fair – and when we start to talk about this neuro data – which I’m guessing, right? Is a term that’s related to the data that’s collected by the neurotechnology? Why is the definition of neuro data so important from, like a legal perspective?
| SK | Yeah, so first of all, the definition – it’s kind of a fancy word for basically just digital information that has been generated by your brain activity, but it’s extremely important because it affects how this information is going to be regulated. So, if it’s classified, as it is in some states, as sensitive personal information, like other medical information you might have, that’s going to be under more stringent regulations in states that have those kinds of data privacy laws. Whereas if it’s just seen as, you know, anonymized information, and it’s not seen as sensitive or as healthcare records, it’s going to fall outside of those privacy laws, and those aren’t going to be applicable.
| ZBP | Now we talked a little bit about, you know, concerns around informed consent – and you talked about the classifications of the data as possibly being health related. You know what, and I’m just thinking through this, right? As you’re talking I’m literally thinking through this, and I’m thinking, you know, if you were to get the data related to my heart rate, right – my pulse – and you were to record that. I went to the dentist, I can’t remember – it might have been yesterday. Yes, it was yesterday. They took my blood pressure at the beginning, you know, the systolic and the pulse rate, and they recorded it in my health chart. That output by my heart, you know, is recorded and is considered private health data. If I’ve got another organ of my body, i.e. my brain, which is also generating output, i.e. data, would that also not be private health information? And honestly, it’s just something I’ve never thought about.
| SK | Yeah.
| ZBP | But as you talk about it now, I’m thinking you’re right, there actually would be a pretty strong argument that data generated by my brain would qualify as private health information.
| SK | Yeah, and that’s going to be a big issue I think, you know, policy makers will run into – in the future when we see how this develops.
| ZBP | Right. Now, all of a sudden, if I never actually verbalized my thoughts, maybe that’s not protected health information. So, it’s interesting how this all relates, and, as I think you hit on, how policy makers are going to have to address some of these concerns and some of these issues with this technology.
| SK | Definitely.
| ZBP | Now when we talk about like a broader societal impact and I know this might be a little bit early to predict – I know you don’t have a crystal ball but when we talk about the societal implications of not just the neurotechnology, but the other thing that everybody loves to talk about which is Artificial Intelligence. When we couple these two technologies together, do you see larger implications out of the two of them being combined?
| SK | Yeah, there are definitely going to be larger implications when you’re combining a human with a computerized program such as AI. A big one that comes to mind in the legal landscape is going to be accountability. For example, say you have a brain-computer interface, which is essentially just a system that allows the brain to communicate with an external device, and this commits a crime or does harm of some sort. The issue then is who are you going to hold accountable – is it going to be this person, or is it going to be this AI device? Because, you know, who caused this injury? Where is the injury coming from? And the question you have to ask yourself, and that courts will then be facing, is can someone commit a criminal act when they only have a guilty mind? You’re kind of missing the actus reus.
| ZBP | Yeah. You know, interesting, right? Because – and I don’t deal much in the space of criminal law because I just have never dabbled much in it, and I can tell you some stories about the minimal dealing that I’ve had, but that’s for another podcast episode. But when we think about it, right? At least in the civil context, one of the things that we’ve talked about on this podcast a lot from a tort law perspective – and my partner Kenton Steele has talked about it on numerous episodes when we talk about emerging technologies and the new business models that are used – one of the things we focus on a lot from the civil perspective is the issue of control, right? Who was driving the car, i.e., who had control when it was involved in the accident? And that applies whether it was an autonomous vehicle or a vehicle solely driven by, you know, a human – it’s the question of control. And throughout tort law, you know, we’ve always kind of looked at that – who controlled the train? Who had the barrel that fell from the second-story window? Who had control of it when it fell and landed on somebody? And out of all these questions, who owns and/or controls the premises where the person fell and was injured? So, control seems to be one of the really big questions in the civil world that we focus on a lot. You know, the hypothetical that you just gave, this kind of human brain interfacing with a machine, and the machine’s actually doing the actions, raises the question, who has the control, right?
| SK | Right.
| ZBP | Because I see a scenario, and I’m sure most of the listeners here see scenarios, where you can slice it depending upon how much control the individual has. I.e., if the individual has direct control over the device and there’s no sort of artificial interface, then, you know, I think the argument is the individual has total control over the device. Whereas if there is some sort of interface that is utilizing Artificial Intelligence – all of a sudden, if the human’s thoughts go to the Artificial Intelligence platform before that goes to control the actual device – then you’ve got a question: does the artificial platform have some form of liability in this? And I think your question, you know, is a valid one. Just thinking of a crime generally is not a crime. You usually have to take steps to do something, and so, if you’re just thinking about the crime, the thing that actually does the crime is outside of your immediate – I want to say control, but where are we from a criminal perspective? So that is a very interesting hypothetical, at least, that we’re going to probably see played out in the years to come.
| SK | Exactly, and you make a great point with the idea of control because that’s another slippery slope in the implications this is going to have – because then, how do you determine control is another issue. How can you divvy up the level of responsibility that a human has for those actions and that the AI has? You know, everyone says “it has a mind of its own” – how does that come into play, then, when it’s taking action like this?
| ZBP | Now, I am going to ask you to pull out your crystal ball and kind of make some predictions about the future with this. In order to – I don’t want to say control this kind of technology, but in order to get a grasp on it, what type of legal frameworks or safeguards do you think are needed, you know, before this neurotechnology actually becomes mainstream?
| SK | Yeah, I think something we really need to implement is some level of federal regulation, because there is so much uncertainty, there is so much confusion. I think if we have a little bit of guidance from our federal policy makers, that could direct the states on how to handle it, how to go forward. Uniformity, really – that’s what’s going to get rid of uncertainty. That’s going to help people embrace this change, or not embrace it, depending on which way they go. But that will help give the states a push to go where they need to go, if they have some level of guidance from the Federal Government.
| ZBP | Awesome. And as we kind of wrap this up, I know that there are a lot of stakeholders in this. I mean, lawyers, obviously; technologists – the people who utilize this technology, are creating it, and are advocating for its advancement; and then policy makers, right? Like you just talked about. Looking at those three groups of stakeholders – while obviously still needing to take into account, right, the individuals and society as a whole – how do you think we can best ensure the ethical development and use of this neurotechnology?
| SK | That’s going to be extremely difficult because it is so new. The only thing we’re really going to have to focus on is fully thinking through the widespread implications that implementing this level of technology would have. You know, what are the changes that are going to come with it? How is it going to affect our everyday life? Even from a legal standpoint, you know, how is it going to affect our constitutional rights? We’re giving the government, or whoever’s controlling these devices, a whole new level of access to our minds, our data. So that’s what they really have to focus on – to take a step back and look at the actual implications from a long-term standpoint and also just from everyday life. How is it going to affect the average human just living? Because now they’re collecting all this information. It’s crazy to think about, but –
| ZBP | It is. It really is. Sophia, I really appreciate you taking the time to join us today and educate us on neurotechnology and kind of some of the legal implications and ethical implications that our society will face. So, thank you for taking the time.
| SK | Of course, thank you for having me.