The Reminger Report: Emerging Technologies

Rethinking Education and Work with AI

Reminger Co., LPA Season 3 Episode 68

In this episode, Zachary Pyers and guest Evan Schwartz explore how AI is reshaping education, creativity, and professional development across industries. 

From the challenges of traditional rote learning to the evolving role of young lawyers and developers, they discuss how AI can eliminate repetitive tasks, empower creativity, and demand a new ethical framework for responsible innovation.

REMINGER REPORT PODCAST ON EMERGING TECHNOLOGIES

 

Evan Schwartz - Episode 2

 

ZBP      Zachary B. Pyers, Esq.

ES         Evan Schwartz

 

 | ES | You know, if we look back at education, both of us are teaching at the higher education level, and I've seen this with my son at the lower levels. He accelerated well. He got straight As. I financially incentivized him; that was a choice, and I was paying that guy to go to school for years. Bad idea, by the way: if you're offering money for straight As, they will get them. He was doing very well, say first through fourth grade, and then I noticed he was still doing well on the metric, but he was turning down opportunities to challenge himself, because at that point he could only lose his status. When I dug into this a little with his teachers, I saw that the curriculum was beating his imagination, his creativity, out of him. Not physically, just mentally, through status. Two plus two is four, memorize that. Here are your times tables, memorize those. I ask this, you give me this back. Here's a map, draw the rivers, memorize. Repeatable excellence, over and over and over again. Humans weren't built for that, but our current society, take AI out of it, needs humans to have repeatable excellence. A business doesn't just want to hire you; it wants to hire you to do a job, repeat that job over and over again, and be approximately as good at it as you were the first time, or get better. That is not what humans were built for. So just like the machines in manufacturing abstracted away the burden of physical labor, the big earth-moving machines took that burden from us because it used to be us and horses plowing land, AI is going to take the burden of that repeatable excellence from us. And my hope, and we have to choose this, is that we will probably live to see our grandchildren grow up in a world that doesn't beat that creativity out of them. That two-, three-, four-year-old who thinks anything is possible, who is not afraid of failure, who is willing to take that risk, will be rewarded for it, because AI has taken that burden from us. I cannot imagine how great that world would be.
| ZBP | You know, I was just having this conversation a couple of days ago with my wife and my mother-in-law. I don't even know how it came up, but we were talking about civics education and its importance, and how civics education has declined since my mother-in-law went to school. She's in her late 60s. My wife was talking about how she was forced to memorize the presidents in order, how she had to be able to recall them in order, and I instinctively said, what good did that do you? Not that we shouldn't have an appreciation for history, or shouldn't be studying history, but being able to recall all of the United States presidents in order, from George Washington to Donald Trump, does that really help you learn what the office of the presidency does, how it relates to the legislative branch or the judicial branch, or . . .
| ES | But it trains your mind to be able to memorize a series of steps in order. You’re building that muscle memory.
| ZBP | Sure. So then we were talking about this, and I started thinking: that's a repeatable task. But you're right, and I agree with you, as humans we still need the muscle memory to remember what we did wrong the last time so we don't do it again, and to have some parameters on the creativity, to understand the creative process. I do think it's interesting, and I agree that the creative aspect of this is important. When I was thinking of the example you gave of the photographer, one of the things I thought about is, how do you train that creativity? Because I'm sure if we talked to the photographer who used the AI to create that final image, how many hours of training did he have to have with a real camera, a digital camera?
| ES | Right.
| ZBP | Right, taking those shots, to be able to give the AI platform the appropriate prompt to generate that image. Because I'm sure he didn't start out as talented a photographer as
| ES | Right.
| ZBP | He is now. And so I'm sure that training came about through countless hours and weeks and decades of trial and error, until he found what created the artistic images that looked best, and then, based on all that experience, he's able to give the AI platform a prompt to turn out the . . .
| ES | So you're touching on an area that hits a nerve with me, because I have the same concerns. If you've read any periodicals, you've seen that Microsoft has laid off a ton of developers.
| ZBP | Yes.
| ES | Now, Microsoft is terrible at telling its own story, terrible, because what you don't hear is that over 75% of them got another job at Microsoft. They repurposed them; they didn't actually lay them off. I don't want to say it's fake news, they did get laid off as developers, but they found something else. My concern, which is similar, is this: they're keeping their architects, because the architects know how to talk to AI to get the code they want. But how do you build architects? So I posed it to the guys. I said, guys, if we kill our fundamentals, if we kill our entry-level positions, how do we get them? And one of them, and I'm going to go find his name and send it to you, because he talked me off the ledge, I was thinking we were making a terrible mistake, that once all of us gray out and disappear, we're hosed. The only difference, as we talked about, was the change in the camera for that photographer. So why would I train on a physical camera when I can train with AI on the same thing? We use terms all the time for things that don't exist anymore, but everyone knows what they are. If you go back to the wild west, we talk about all of their tack and bits and stuff; we use those terms regularly, but no one's actually out there saddling a horse. So the reality is, what I would expect this artistic photographer to do is build a training course for the next generation: let's talk about lens flare. You may never touch a real lens. You're going to use lens flaring within AI, and you're going to experiment with it, and with lighting and colors, and you're going to understand how colors affect lighting and how filters soften lighting. You're going to do the same learning, but with that tool. It changes the tool, but it doesn't change the way you learn. So from an IT perspective, I'm seeing that the next architects are going to grow up by us training the entry-level guys how to prompt AI to program, and then you go through it again, test it, see why it worked and didn't work, and build on this, and eventually that person becomes the architect. But it's going to take an entirely different curriculum than it did before.
| ZBP | You know, it's interesting that you say this. I recently read, I think it was a Twitter post, from Andrew Yang, the former presidential candidate out of New York, who said he had been talking to some lawyers at a big law firm in New York. They were suggesting that young associates, the first-, second-, third-year attorneys who do a lot of research, a lot of document review, a lot of those intensive tasks, will be replaced by AI, and he was cautioning people to no longer go to law school because we won't need lawyers anymore.
| ES | Right.
| ZBP | Which is a slightly different problem, but one of the things that I, as a practicing lawyer, have struggled with is how we get young lawyers the necessary experience now that AI is coming along. I think the example you just gave of the software engineers is a wonderful one, and it plays into exactly how we're going to have to think about young lawyers, associate attorneys and so on.
| ES | All of the industries are going to go through this. I was reading another article about a financial advisor, a markets-and-trading guy, saying this is all going to be automated through AI. I don't believe it. AI is a statistics model. What do you do in statistics when you have something sitting three-plus z-scores outside the model? You get rid of it. You get rid of the edge cases so that your model fits your normalized curve. Well, humans don't do that. Why is that thing out at the edge? Is it indicating a change in the market? Do I have a left or right skew in my data? Do I even have the right model against it? Those are the things that humans intuit; humans can ask questions, can see connections in data that didn't line up. Until this thing becomes omniscient, it's only going to be able to base decisions on the data it's fed, the sensors it has, what goes into it. It's not omniscient, and we don't even have plans to make it omniscient. We're giving it historical data, and anyone in the financial world will tell you that historical activity is no indicator of future gains. There are a lot of pressures on it. And I don't think people appreciate how creative the law is. Who's going to create new law? New law has to be created, and it has to be created creatively. There's a reason there's a judge overseeing it. If it were just down to the law, it could be: here's what you did, here's your sentence, and there would be no judges. But we put people in there because our world changes, our cultures change, our belief systems change. People will have to be part of that process.
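To make the statistics point concrete, here is a minimal Python sketch of the cleanup step Evan describes: a plus-or-minus-3 z-score filter discards exactly the tail observations a human analyst would want to interrogate. The data and threshold are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
# Mostly well-behaved "returns", plus a few points that might be the
# first sign of a change in the market rather than noise.
returns = np.concatenate([rng.normal(0.0, 1.0, 1000), [4.2, -5.1, 3.8]])

# The textbook cleanup: drop anything beyond +/-3 z-scores so the data
# fits the normalized curve.
z = (returns - returns.mean()) / returns.std()
kept, dropped = returns[np.abs(z) <= 3], returns[np.abs(z) > 3]

print(f"kept {kept.size} points, discarded {dropped.size}: {dropped.round(2)}")
# The discarded values are exactly the edge cases a human would question:
# are they noise, or a skew the model doesn't fit?
```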
| ZBP | Yeah. Now, as we talk about AI and its application, one of the things that comes up a lot is the ethical issues as we design these programs for the future. Do you have any thoughts on the role of empathy and ethics in tech development as we look toward the future in those areas?
| ES | Yeah, look, I would say if you're a company investigating AI right now, and agents of any kind, and you haven't sat down and built your morals and ethics policies around AI, I call it compassionate technology, you need to start there. The EU has certain rules around how AI can be used; I'm expecting that to come to America, and we're going to see the same thing. So imagine you needed to apply for credit. Let's make it real personal: you're trying to buy a house, your information goes into an AI, and you're declined. Why did it do that? You had better be able to show why that machine turned you down. What was the risk? What were the factors? That's going to have to be played out in the law, which is why it's going to be very important, as we go through this, to understand why the machine should and should not make a decision. Because unfortunately, once the data is encoded in the neural net, it's very difficult to unwind it and understand, unless the system is outputting every single step of its reasoning on the way to that answer. So I would strongly recommend anyone investigating AI to first and foremost sit down: what are the morals and ethics? A great example is at AMCS, where any AI we implement that isn't algorithmic comes with controls. If it's algorithmic, if I can always repeat it, same data in, same answer out, that's a different thing; that's just math. Figuring out the best optimized route, that's algorithmic. But if there's anything in there that's transformer-based, generative-style AI, or some sort of deep reasoning, we have to have a human on-ramp. We have to have full traceability. We have to have full auditability and monitoring, and we always review these decisions. There has to be a human in it to make sure that any decisions being made will, first and foremost, fit within the compliance rules that govern them, especially when they impact people's lives. That's a critical piece. But more importantly, how are you going to drive positive change? This is a battle I don't know that I'm winning, but I preach person plus AI. I can see the desire to reduce costs, because your fiduciary responsibility on a board is to make the shareholders money; they teach this in the MBA, it is what it is. But if all you're doing is reducing headcount, the lowest you can go is zero. You can't get fewer people than zero; there's a floor on that line. If instead you amplify the resources you have, the gain is unbounded. Your growth opportunity is unlimited, so now you have a responsibility to figure out how to grow your business and what markets you can get into. That is the responsible approach to AI: using it as a multiplier, an enhancement, for the people you have in whatever role you apply it to. Like we talked about with the CSRs, I may not need the other 24 people doing CSR work, because now that one does the work of 200, until my business grows. But what else could those 24 people, who now know how to command agents, do? How can I use them to grow my business into new markets, reduce complexity around contracts, whatever I can apply them to? I, as a business owner, now have an onus to grow this business with this technology. I'm not naïve in thinking there won't be any attrition. There will be. But you can't go to zero.
Don't go to zero, because if you do, then the AI can only learn from itself. You've probably seen this on social media; it's been passed around a lot. People gave an AI a picture and asked it to make something that looked just like it, and it was wrong. Then they took that picture, fed it back in, asked it to do it again, and it got worse, to the point where the last one didn't look anything at all like the original. That's the kind of fading you will get if you take humans out of the equation. At some point it will degrade. Entropy will kick into the system. Synthetic data will feed on top of synthetic data, and you'll get something that doesn't look like anything of your original vision. You might get a little bit of runway, a little bit of lift, but expect that crash to hit you hard.
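The controls Evan lists for anything generative at AMCS (a human on-ramp, traceability, auditability, ongoing review) amount to a wrapper around the model call. Here is a rough Python sketch of that shape; the names and the file-based log are hypothetical, since the conversation does not describe AMCS's actual implementation.

```python
import json, time, uuid
from dataclasses import dataclass, asdict

@dataclass
class Decision:
    decision_id: str
    inputs: dict
    output: str
    model_kind: str          # "algorithmic" (same data, same answer) or "generative"
    needs_human_review: bool

def decide(inputs: dict, model_kind: str, run_model) -> Decision:
    """Run a model with full traceability; generative outputs are held
    for human review before they can affect anyone."""
    decision = Decision(
        decision_id=str(uuid.uuid4()),
        inputs=inputs,
        output=run_model(inputs),
        model_kind=model_kind,
        # Repeatable "just math" can ship; anything generative or
        # deep-reasoning gets a human in the loop.
        needs_human_review=(model_kind != "algorithmic"),
    )
    # Audit trail: every input and output, timestamped and replayable,
    # so a declined credit application can be explained after the fact.
    with open("decision_audit.jsonl", "a") as log:
        log.write(json.dumps({"ts": time.time(), **asdict(decision)}) + "\n")
    return decision

# A generative credit decision is flagged for review before it is final.
d = decide({"applicant": "A-123"}, "generative", lambda _: "decline")
assert d.needs_human_review
```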
| ZBP | One of the things that has been talked about across industries as it relates to AI platforms is whether we can trust the platforms, whether we can trust the models, and how. We've seen this play out in the legal industry, where people have become over-reliant on certain aspects of AI without understanding its limitations, and then all of a sudden they realize the product it turned out was wrong, but it's too late, because the lawyers have already relied upon it; they've submitted it to court or what have you. So one of the things I'm curious about is how you see balancing innovation, which is of course important, with user trust, to find a happy middle ground.
| ES | Use case is important. Keep it narrow, because the problem with just opening up AI is that you have infinite test cases. Go back to software development: I'm developing it to do this one thing. Yes, there are a few edge cases, but there's a finite maximum latitude I could test, so I can put quality around that and make sure the machine is doing what I want. If I just leave it open-ended and I'm counting on the prompt to do what it's supposed to do, I have infinite test cases. Terrible use case. So at AMCS we're very closed in the way the prompt goes in; we're very narrow in the way it works. You wrap it around a very specific use case so you can test the possible outcomes. In other words, I don't allow injections into the system; I'm just using it as an amplification. The most important rule we have is that I'm not using it as a knowledge base. I'm not using the LLM as a data store to give me an answer. I'm using the transformer to work out a process, a series of steps; I like that logic part. Here's my RAG or whatever my context is: your answer has to come from here. Here's your tooling, so your tools have to come from here. An example: you need to get a list of customers with an overdue balance or something like that. You go to that tool, it calls the API, the API gives you the answer, and you spit that answer out. What I'm looking for, because it's AI, is: get me the customer's name, get me the customer's balance from that API; what is its culture, because I might want to talk to that customer in its language of choice; what is its preferred method of communication, SMS? Those all come from the API. The knowledge is coming from somewhere else. I'm using the LLM to work out the workflow, the steps, how to call the APIs to do things. I'm not asking the LLM to tell me what it may or may not have been trained on; I can't trust that. You'll see this in some of the reasoning models now, where it goes out to the internet, researches, and brings back the reference link where it got the data. That's beautiful. That tells me all the AI was for was learning how to do a Google search, getting a list of resources, looking through those, thinking it has a reference, and bringing it back to you. And now I can go check those and say, this isn't even a reputable source, I'm not taking this. So it comes down to dividing knowledge from the LLM. Let the LLM learn so that it's capable of doing things, like a person. If you think about it, humans have the same problem. The Mandela effect exists because we're all wrong in the way we remember things. They did an experiment where they showed people three versions of a penny and no one could pick the right one. We handle them all day long; no one can remember exactly where the date is, where everything sits. Humans are bad at this too, which is why we have reference material, which is why we go to knowledge bases to run real queries and get the data. So treat AI the same way. Don't depend on the LLM for the answer, because then all you're doing is asking it, not asking it to get the source from somewhere else, and you're getting what you asked for. It's compelled to answer you. You have to keep that in mind. There's a Stephen King story where they asked an AI for a hundred Stephen King quotes.
There are only 10 out in the world, but it gave 100, because it's compelled to answer, and even Stephen King said, I might have said that, I don't know, that seems pretty good. He couldn't tell which 10 were his. That's the problem with it. So don't use the LLM as a knowledge base; use it as a logic engine to figure out the workflow. Give it a source of answers somewhere else that you can easily validate, because you've narrowed the scope. You win on that every time, every time.
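The division Evan describes, where the LLM works out the steps while every fact comes from a system you control, is essentially the tool-calling pattern. Here is a minimal Python sketch with a stubbed-out model and a hypothetical billing API, since no specific vendor SDK comes up in the conversation.

```python
# The model only chooses which tools to run; every fact in the output
# comes from an API we control and can validate.

def get_overdue_customers() -> list[dict]:
    """Hypothetical billing API: the system of record, not the LLM."""
    return [{"name": "Acme Co", "balance": 1200.50,
             "language": "de", "preferred_channel": "SMS"}]

TOOLS = {"get_overdue_customers": get_overdue_customers}

def llm_plan(task: str) -> list[str]:
    """Stand-in for the LLM: it returns a series of steps, not data."""
    return ["get_overdue_customers"]

def run(task: str) -> list[dict]:
    results = []
    for step in llm_plan(task):
        if step not in TOOLS:  # no injections: unknown steps are refused
            raise ValueError(f"untrusted step: {step}")
        results.extend(TOOLS[step]())
    return results  # every answer is traceable to a source we can check

for c in run("contact customers with overdue balances"):
    print(f"{c['name']}: {c['balance']} via {c['preferred_channel']} ({c['language']})")
```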
| ZBP | You know, I was actually using AI just before recording this, because I was trying to find the time limit by which I have to file a certain document in litigation, and this specific document doesn't come up all the time. It comes up sometimes, but not a lot. I typed it into an AI search, it pulled up a result, and I thought, okay, well, where did it get this? Just like you said, there was a little link; you clicked on it, and it took you to the source document. Then I looked at the source document and said, this document doesn't even say what the AI told me it said. It was close, and it led me to the actual Civil Rules, which are what govern this, and which is where I probably should have started. So after I went from the AI search to the source document to, essentially, the original source document, I was able to find it. It may have sped the process up slightly, because it got me to the narrowed-down range of what I should be looking at, but it certainly didn't give me the correct answer with the correct source. It gave me the correct answer, but the source was not exactly right.
| ES | Yeah, and look, even now they're not necessarily perfect at consuming documents. I've uploaded six or seven PDFs and asked questions about them, and it was good, but then it would just freak out and give me a wrong answer. And I'm like, no, I know that's wrong. Where did you get that? When I went through the document, it boiled down to a very close, almost identical third definition it could have referenced. It could have been either one; there were vagaries in the document, and it couldn't figure it out. So we're going to get better at what I would call the context model: if you're going to load data, if you're going to make a model for lawyers to reference, you can't just upload all of the legal briefs and hope for the best. You're going to have to put some metadata next to them, and then the AI works on that. It's quintessentially the same solution for two different industries. We're now accepting that MCP, the Model Context Protocol, can sit on top of APIs and tell the AI how to use those APIs and the context around them. If you think about what an API is, not to get too techy, it's a list of functions. But if I'm saying, under the conditions of Customer A, I would do Function 1 but always have to follow with Function 2 because it's this type of customer, that's not in the API list; those are just normal functions. You have to know to do that. I can now put that in the MCP model, and now the AI knows how to do it and becomes more effective. You would do the same thing with your data. For all of your case law, you would have to add metadata: this is the right answer, this is the right answer. And because it doesn't have infinite memory either, the better approach is to distill those documents into metadata. This is the SharePoint problem from the '90s and early 2000s: how do I take all of these documents and extract all of the useful metadata into columns? Because if I'm going to sit here and do all that work manually, it's not saving me anything; I'm constantly receiving new data all the time. That's probably going to become the next challenge: how can I extract from a whole bunch of disjointed, disconnected data sets and get the right answer? Because you can almost deduce two or three right answers, and that's usually where it starts to fail. And you don't want it in the model's memory, because over time you're going to have a hundred different things that start with 1, 2, 3, 4, 5, so when you say, can you give me the fifth one we talked about, it goes into its memory, there are hundreds of fifth things, it picks a random one and throws it out, and you're like, what is this? We're not even talking about this. Those are some learnings you have to go through. But honestly, it's like anything else: until someone gets burned by it, they're just not going to do the homework like you did. They'll take that answer and run with it because they believe it, and that's the terrible side, but that's why education is so important. We had some struggles at JU because a lot of professors didn't want to use it; it felt like cheating. And I'm like, no. Use it, but hold the student accountable for the outcome. So when I look at it and it's wrong, I'm not going to beat you for using ChatGPT or whatever you used, but you gave me a wrong answer.
That's a big --laughs-- and now when you fail a test, you're going to have to go back and say, man, I'm going to have to go check these things, right?
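Evan's MCP point, that the knowledge of when and how to call an API lives in metadata sitting next to the functions, can be shown as plain data. The field names below are illustrative, not the actual Model Context Protocol schema, and the case-law example extends the same idea to documents; the file name and citation are hypothetical.

```python
# A bare API is just a list of functions; the context layer adds the
# operational knowledge of when and how to use them.
tool_context = {
    "tool": "apply_credit",
    "description": "Apply a credit to a customer account.",
    "usage_rules": [
        # The rule that lives in no function list:
        "For customers of type A, call apply_credit (Function 1) and "
        "then always call notify_customer (Function 2).",
    ],
    "answer_source": "billing API",  # facts come from here, not the model
}

# The same idea for legal documents: distill each brief into searchable
# metadata rather than hoping the model memorizes the full text.
case_metadata = {
    "document": "brief_2024_017.pdf",   # hypothetical filing
    "jurisdiction": "Ohio",
    "topic": "statute of limitations",
    "cites": ["Civ.R. 12(B)"],          # illustrative citation
}

print(tool_context["usage_rules"][0])
```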
| ZBP | Yeah.
| ES | There is no change without pain, I’m sorry to say.
| ZBP | No, there is not. Evan, I really appreciate you taking the time to speak with us today. I enjoyed the conversation, so thank you for coming on and talking with us.
| ES | Thank you, Zach. I appreciated it.