The Reminger Report: Emerging Technologies

Exploring the Future of Technology with Nathan Whittacre

Reminger Co., LPA

In this episode of The Reminger Report Podcast on Emerging Technologies, host Zachary Pyers welcomes Nathan Whittacre, a seasoned expert with nearly 30 years in the tech industry. They explore transformative trends shaping the future of technology, including the rapid evolution of AI, the promise of increased global connectivity, and the potential of quantum computing. 

Key Topics and Highlights:

1. The Rapid Evolution of Technology

2. Connectivity and Innovation

3. The Promise of Quantum Computing

4. AI Policies and Ethical Use

5. Preparing for the Future Workforce

6. Balancing Regulation and Innovation


ZBP

            Yeah.  Now I'm going to shift gears, because we're lucky to have somebody who's been in the tech industry for, as you indicated, almost 30 years, and it's not every day that we have that experience or are able to tap that wealth of knowledge.  So what I'd love to do is talk about the future of technology and where you see these general trends going.  I don't need to tell anybody who is listening, or to tell you, how rapidly various technologies have advanced, especially within the last 30 years.  I think about where I was 30 years ago.  I was 12 years old, and I still remember dialing up the internet and thinking it was the coolest thing in the world that I could get on and surf the World Wide Web through aol.com, and I still remember the sound the modem made dialing up.  We've obviously come a long way in 30 years, but I'd love to talk about what you see as a futurist in this area.  What kind of emerging technologies do you see having a significant impact in the next decade?  I'm sure there are some that probably come to mind for a lot of our listeners, but maybe there are some that we haven't thought of or are not as familiar with.

 

NW

            I think it's interesting you mentioned the dialup internet.  It took 18 years for the internet to reach 100 million active users from the time it came out.  It took ChatGPT three months to get 100 million active users.  The speed at which technology is innovating is just accelerating exponentially.  Now it's interesting, though, because the term AI was coined back in the 1950s, so it's not a new technology by any means.  One of my least favorite classes that I took in college was a programming language class where we had to learn a language called Lisp, which was used extensively in AI, one of the languages used to develop AI systems, so back in the 1990s and early 2000s I was learning AI technologies in computer science school.  So it's not a new technology, but it's become ubiquitous in what we're doing, and I think what AI has done in the last few years is go from a research technology, something that researchers at MIT or Stanford were using to develop automated cars or some type of automation, to a tool that we can use day to day in our businesses.  As soon as that happens, it accelerates the adoption rate and it accelerates the technology, because there's so much investment that goes into it and so many changes.  So I can't start this conversation without talking about AI, and there's a lot that we can use today in AI.  I think what's going to happen over the next 10 or 20 years is we're going to see it actually working better than it does today.  As the adoption rate increases, the use is going to improve, but the goal for businesses in general is to automate as much as possible.
We want to get rid of those tasks that we don't necessarily need to do to be productive but that have to get done, and AI is going to be the best assistant we can ever hire, especially for executives inside the organization, because it can do a lot of those things that we don't necessarily want to do.  And then it goes deeper into the organization, where a lot of our team members can use it to accelerate their work dramatically.  So AI, I can't not mention it; that's one area.  I also think as we increase the connectivity of the world, and there are still a lot of areas of the world that aren't connected through the internet, we're going to see the potential for new innovations to come out.  You mentioned the futurist ____.  I like to think there are people out there who are the Einsteins of the world, but maybe they're living in a village that isn't connected to the internet and don't have the ability to get their ideas out there today.  As we continue connecting the world with Elon Musk's Starlink, and there are other companies working on these internet technologies throughout the world, I think there's more opportunity for the world to get more and more connected, and I think it's going to have a lot of social impacts and government impacts, and it's going to break down walls and continue to break them down.  In some areas of the world those walls broke down fast, and in other areas it's going to take a long time, but connectivity is still a big issue.  There are lots of places even here in the U.S. that aren't fully connected to high speed internet.
It's crazy to think, if we're living in a large city, that people don't have access to high speed internet, but there are a lot of rural places in the country where there's no connectivity, and the federal government is putting a lot of money into that right now through some infrastructure grants.  That's an important part of it, too: continuing to connect the world will continue those innovations.  Finally, quantum computing is probably, over the next 50 to 100 years, the next transformative technology that will increase our computing power dramatically.  We'll be able to solve problems that we were never able to solve before, math problems and technology problems and things like that.  I don't claim to understand quantum computing; that's a field I've never really dived into, but what I've read about what quantum computers can do would blow our minds, because they can explore many possible answers simultaneously and come back with the right one.  It's a fuzzy technology to me.  I grew up in a world of ones and zeroes; that's what I learned.  It's either on or off, and when you say a value could be all of the answers at once, it doesn't make sense to me, but that's the one to watch over the next 50 years.  It's going to dramatically change, I think, how we solve big problems.  So those are the three I would say are transformative: in the near term, connectivity and AI, and in the long term, quantum computing.

 

ZBP

            Now, I will tell you that I've read at least a couple of articles on quantum computing and some of the advancements that have been made recently, because as I understand it, they're still trying to prove that it's possible at scale.  There are a lot of mechanical and physics-related issues in quantum computing that frankly are way above my pay grade as somebody who did not study hard sciences.  But I will second what you said: in the longer term, quantum computing may change a lot of things that we don't even necessarily realize we've needed solved at this point.  And to your connectivity point, I remember listening to the stories at the start of and in the middle of the pandemic about high school kids in some of the more rural areas trying to do online classes and having to drive to the local Walmart or McDonald's because it had free Wi-Fi.  They were taking their classes in the parking lot because at home they couldn't get high speed internet, and it was spotty at best, so if they were going to attend classes via Zoom or Microsoft Teams or some other platform, it was a real struggle for them.  Living in Columbus, a relatively major city, I take high speed internet for granted.  Heaven forbid something doesn't load automatically; I become frustrated and think, I don't understand why there's a delay.  I don't care if it's a 4K video that's three hours long, I don't understand why it doesn't download in a few seconds.  Then I have to remember back to the dialup days, when downloading a song was at times an overnight process.
One of the things a lot of people have talked about when we talk about the future, and we'll go back to the AI component, is that there have been a lot of people warning about what might happen.  These warnings have come from various people and to various extents, but one thing we've seen here in the United States, at least, is that we haven't seen a ton of regulation of artificial intelligence.  I know other countries are doing other things, and I'm not super familiar with everything that's been done, but how do you see the government's role, not just with AI but with a lot of these new and developing technologies?  What is the role for regulating them vs. leaving them unregulated, and what are the benefits or detriments of both?  How do you see government's role in this process?

 

NW

            I think the government is always a bit of a laggard in compliance.  They wait for things to go poorly and then solve them after the fact, so I think it's on us as businesses to do our best to self-regulate some of this work.  If you don't currently have one, I recommend developing an AI policy inside your business.  Define what is allowed and what is not allowed inside your organization.  It's the same as what we were talking about with bring-your-own-device or hybrid work: as long as you have a policy inside your organization for how you're going to use it and implement it, you can be ahead of the curve.  So as you're doing annual planning for next year, maybe you put together an AI policy inside your business.  I think that's a good strategy to start with.  If you don't want to allow any AI to be used for content generation or automation, just say it; just define that as your policy.  But the issue is, sometimes your employees are going to go around the policy and use it anyway because it's so easy, so I think it's important to say, okay, we're going to allow it in these scenarios, we're going to pay for the subscription so that our data is not getting out there and training other language models or systems, and we're going to use these specific products.  I like Microsoft Copilot a lot because it lives in the Microsoft ecosystem and it's protecting your company's data, so I like that model: I can buy this tool, and they're guaranteeing that my data won't leak out of my environment.  So I think developing those policies is really important.  I think a lot of the fear comes from two sides.  The first one is the fear that it's going to replace us or a lot of jobs, and you mentioned you've been using technology for a long time; I think that's what we've been saying for 50 years.
All of these computers are going to take over all our jobs.  Well, yes, it's going to eliminate a lot of jobs, but they're often jobs that we don't necessarily want to do.  Who loves data entry, or who loves summarizing a conversation or a PDF, or taking notes?  These are jobs that somebody's doing today that could be more easily done with automation, and then that person could do a higher level job.  So I think what it requires us as a society to do is find the jobs that people really want to do and train them to do higher level thinking jobs, and we've been doing that for hundreds of years through technology in general.  The other fear is that AI is going to do something to our society that causes a ripple effect we don't want, the Terminator thought, this apocalyptic idea that AI is going to become Skynet and destroy our world.  People have been preaching that about technology forever, that computers are going to take over society and cause major issues, and it can.  It certainly can be destructive and used harmfully, and I think that's the same argument for a lot of dangerous things.  Guns can be used to kill innocent people; they're harmful, and there should be regulations around that.  Same thing with technology: we should regulate it so that bad people can't use that technology to cause harm to other people.  But in the end it's really people who are going to be using the technology to cause the harm, and as long as we regulate the actions that people take around technology, I think we can keep it safe.  That's my perception of it, and I don't think we should be fearful of it.  I think we should be excited about it, but we certainly need to ensure that it's used correctly going forward, that it's not causing societal harm.

 

ZBP

            So, a couple of things I wanted to comment on, and then one thing I wanted to ask about to provide some clarification for our listeners.  One thing that I've noticed, at least in the legal field, is that a number of providers of legal research and document summarization have started introducing AI tools in various forms to do various things, and one of them is to summarize documents.  At least in my organization, we've been testing various ones.  We haven't necessarily adopted one specific tool yet, but I know it's coming, and we have the AI policy in place.  One of the things I think about is that we have law clerks who are still law students, and we have young associates who've been practicing a few years, who would have done some of these tasks: summarizing huge amounts of documents, or doing extensive research where they were really hammering away and spending hours to find a handful of cases that may have been difficult to find.  Now some of the AI-assisted research is not only finding those cases but summarizing them for me, with proper case citations, in a nice and neat memo.  So I think about what you're talking about: that's how we trained a lot of our young lawyers, on tasks that were incredibly time intensive, and AI can now free up those associates to do other more complicated tasks.  In fact, I had this scenario play out last week.  I had done some AI-assisted research that turned out an extensive 10-page memo on several legal issues, and I had given it to an associate, and the associate actually asked me, Zach, how did you have time to do all of this research?  She knew how long it would take her to do that research, and I said, well, actually it was AI-assisted.
It did take me some time, but not nearly the amount of time she had thought it would have taken her to do the same amount of research.  And it freed her up, because she didn't have to do that research, and now she was doing something more complicated, which was compiling the research and actually applying it to the facts of the particular case, which I thought was helpful.  But one of the things you mentioned, especially with AI, is the difference between using public models, which train on your inputs and will use the information you put into the AI for future users, meaning our information becomes more public, vs. the closed AI systems that keep our information private.  I'm probably not explaining this correctly, but honestly it's one of those concepts I'm not sure a lot of people understand when they start using some of the more open tools like ChatGPT: the risk that the information they're inputting is going to be used to train the language model for the next user.  Could you explain that for our listeners who may not be as familiar with the concept or the implications?

 

NW

            Sure.  What I like to explain to people is that anytime you get something for free, somebody is still making money off of your use of the tool, so when we use Facebook or LinkedIn or ChatGPT, for example, and we use the free version, the company is going to make money somehow, and we end up being the product.  With social media, our content is used to get eyeballs on the platform so that paid advertisers can put ads in there, mixed in with the content we're producing, and that feeds more viewers.  With AI models like ChatGPT, the free version uses our inputs to train future models.  If I uploaded my book to the free version of ChatGPT, for example, it could use my information to train itself to answer questions for somebody else.  I use some personal stories in the book, and suddenly it could share one of those stories, maybe not about Nathan, but maybe about a John who went hiking, and explain that scenario of being a guide and how it relates to technology, because that's part of the model now.  And those are the free models.  With the closed models, what they do is take a trained model; just think of it as a computer program that is a snapshot in time of what it knows.  It has learned from all the court cases, all the literature that's publicly available, all this data learned over time, and then it stops learning from public data.  Now anything that you load into it, whether it's your company's financial statements or a brief like you were talking about that you put in there to have a discussion about, remains your private data.
It's not going to use that interaction to train itself for somebody else.  I think we need to be really careful about understanding what the policies are for the software we're buying and using: how it's using our inputs and our interactions, whether it's closed and keeping that data in our environment, or whether it's using that data to train for other people.  In business, I think we need to use those closed systems almost exclusively if possible, because we want to secure our data and infrastructure.
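The free-versus-closed distinction Nathan describes can be sketched in a few lines of code.  This is a toy illustration only, not any vendor's actual API or data-handling implementation; the class and method names are made up for the example:

```python
# Toy sketch of the two data-handling models described above.
# FreeTierModel stands in for a free consumer service that retains user
# inputs as future training data; ClosedTierModel stands in for a paid,
# "closed" service whose model is a frozen snapshot and whose prompts
# are used only to answer, then discarded.  (Hypothetical names.)

class FreeTierModel:
    """Free service: user inputs are kept in a pool for future training."""
    def __init__(self):
        self.training_pool = []  # everything users type ends up here

    def ask(self, prompt: str) -> str:
        self.training_pool.append(prompt)  # your input becomes the product
        return f"answer to: {prompt}"


class ClosedTierModel:
    """Closed service: a snapshot-in-time model; prompts are not retained."""
    def ask(self, prompt: str) -> str:
        return f"answer to: {prompt}"  # nothing is stored after answering


free = FreeTierModel()
free.ask("summarize my confidential brief")
print(len(free.training_pool))  # the brief now sits in the training pool

closed = ClosedTierModel()
closed.ask("summarize my confidential brief")
# the closed model keeps no record of the prompt
```

The point of the sketch is the policy question it makes concrete: before adopting a tool, check which of these two shapes its terms of service actually follow.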

 

ZBP

            Now, one of the things you talked about is this concept that AI and some of these other technological advances are going to free people up by taking over some of the more mundane tasks that we don't necessarily want to do.  I was actually just reading an article this morning about all of the jobs that have disappeared in the last 50 years.  One of them was telephone operator.  Another was elevator operator; people used to have to physically be there to operate the elevator for other passengers.  These are things I now take for granted, that we don't need an operator, that the switchboards are all digital and connected and everything happens automatically.  As we think about the future, are there any skills that you see being necessary for professionals, and this is a broad scope, that you see as helpful as we continue to adapt to and utilize technology across a wide set of fields?

 

NW

            I think it requires us to maybe narrow our expertise a little bit, to become experts and really focus on what we're good at.  What humans are generally good at is creativity, idea creation, and thought leadership, so I think it is going to require us as business professionals to upskill what we do and get really focused, like what you're talking about.  We're not doing as much of the raw research anymore.  We're doing more of taking that research and converting it into something that will work for our environment, and that requires, I don't want to call it intelligence, but really thought work.  I like this analogy when we're talking about AI: these are really excited interns that are going to want to do a lot of work for us, and we can use them well.  If you've never had an intern, never worked with somebody who may be very intelligent and very excited about doing the work but doesn't have a lot of experience, then training that intern is a skill we need to develop.  If we're going to use AI, we need the skill of basically managing an intern or a novice, and also the skill of taking the information we get back and converting it into something that is very useful.  I really liked your example of building that brief: you used a tool to do the research, and then you had to do something with it.  That's not the final product, and that's really what it's going to come down to in the future, taking the information we're gathering and making it useful as a product down the line.  The other thing, just moving away from technology and thinking as a futurist here, is that trade work is going to be more important in the future, too.
AI isn't necessarily going to fix your car or be your HVAC repairman.  I think those types of skills become more important as this shift away from the middle or lower-middle tasks that we don't necessarily want to do anymore requires expertise in all areas.  You're going to want an expert plumber, an expert air conditioning repair person, or an expert manufacturer to come in and do your work, and then you're going to want the expert doctor who can be the perfect heart surgeon for you.  I think it just requires expertise in all areas of our life, and we have to be better at that.

 

ZBP

            Now, when we talk about the future and technology, one of the things we can't help but bring up is some of the ethical considerations we see potentially coming up.  When you think about companies and technologies, do you see any areas in the future where an ethical dilemma pops up and you say, this is actually a potential concern?  Because I think you kind of alluded to this: with AI and a lot of these other technologies, quantum computing and whatnot, it's really the bad actors using them that sometimes create the problems, not necessarily the technology in and of itself.  Are there any technologies that stick out to you where you see a real potential ethical issue that needs to be addressed?  Or do you see it, as you may have already suggested, that the technologies are, I don't want to say ethical or unethical, more ethically neutral, and it's really the humans that distort or change that, if that makes any sense?

 

NW

            Yeah, I think there is one major ethical concern, and this is something that courts and regulators need to figure out from a law standpoint: the creative license of these systems.  AI systems can write songs today, and they can produce creative artwork.  In your business, in law, they could potentially draft an entire constitution, or argue a court case, or write a brief or a draft or whatever.  You can ask it to do almost anything, but all these language models really are is statistics: they statistically guess what the next appropriate result should be, so as the model is building a sentence or drafting a song or an artwork, it's just asking, okay, statistically, from everything I know, what is the next appropriate thing?  And how does it know that?  It knows that because it has learned it from reading books, reading works that we have produced for thousands of years.  So ethically the question is, what is created?  What is a new thing that it's producing?  I guess it's an argument that's been happening in the creative world for a long time.  Can you reuse the same three notes from a song you heard 10 years ago produced by another artist, or is that copyrighted, protected under that creative license?  So that's the ethical concern: what is a new work of art, what is a new work product, and because AI has so much knowledge and is just using statistics, is it really being creative, or is it just taking information it learned and producing it again?  That's the tough question I think we as a society have to answer.
Then the other thing is, how do you guarantee that somebody produced something legitimately rather than its being AI-generated?  Six or eight months ago we were saying, well, there's AI-detection software that can guess what was AI-generated, and maybe six or eight months ago that was correct, but now these AI detectors are failing miserably.  They can't tell what's real and what's not, because now we're training the language models on our own writing styles, our communication styles.  Again, I can upload my book to an AI system and tell it, this is my writing style, this is how I write, and everything it produces will match exactly how I form sentences and how I communicate, because it statistically built this model around me.  So I think it's harder now to tell what is human-created vs. computer-created, and ethically that is a dilemma.  How do we define that, and do we really care?  That's the question: as long as it's created and it's productive work, do we really care as a society?  I'm not saying I do or don't; I'm just saying that's a question we have to answer.
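Nathan's point that these models are "just statistics," guessing the next appropriate word from everything they have read, can be illustrated with a toy bigram model.  This is a deliberately tiny sketch for illustration, nothing like the scale or architecture of a real large language model, and the sample corpus is invented:

```python
# Toy "language model": count which word follows which in a training
# text, then generate by always picking the statistically most common
# continuation.  Real LLMs are vastly larger, but the core idea --
# next-item prediction learned from prior text -- is the same.
from collections import Counter, defaultdict

corpus = ("the court held that the statute was valid and "
          "the court held that the contract was valid").split()

# Tally every (word, next word) pair seen in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word: str) -> str:
    """Return the word that most often followed `word` in training."""
    return following[word].most_common(1)[0][0]

# "Generate" by repeatedly asking for the most likely continuation.
out = ["the"]
for _ in range(4):
    out.append(next_word(out[-1]))
print(" ".join(out))  # → the court held that the
```

Nothing here is created from scratch: every word the model emits was statistically recycled from what it read, which is exactly the creative-license question raised above.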

 

ZBP

            It's funny that you mentioned this, because I was recently listening to a piece on NPR, which I sometimes listen to as I drive to or from the office, and they were talking about a new language model being used essentially to artificially create podcasts.  The way they described it, if I, as a podcast creator or host, were to upload your book to the system, I could say, please generate a podcast between the two of us, Nathan and Zach, talking about his book, and it would review the book, pull out information, create questions, and then create responses.  I thought to myself, am I going to be replaced as a podcast host?  In the near future, if I get access to that software, would I even have to continue to have these conversations?  Obviously I love doing it, because I get to meet interesting people and make connections I never would have made before, so there's a benefit to it.  But I did start to think about the creative side.  I read your book, I thought it was really great, and I generated these questions myself; I didn't use AI software to generate questions about your book.  But I probably could have figured out a way to upload your book to a large language model and say, please generate some questions I could ask the author.  It does start to blur what is human-generated vs. AI-generated, and like you said, do we even care?

 

NW

            That is a question we have to ask ourselves.  Where I think it comes down to, though, is that as humans we still want some type of interaction.  We are very social creatures, and it's important for us to have this interaction, and I think we're never going to just say, let the computers take over.  Otherwise you could have an AI listening to the podcast, summarizing it, asking questions of the podcast, making comments, whatever it may be, and then we just say, well, computers are going to do everything for us, and we'll stay home eating bonbons and watching AI-generated TV all day.  But it certainly is a question we have to ask ourselves as a society: how much do we want to do as humans?  I think if technology in general is connecting the world, connecting us together, making it easier for us to connect, removing barriers to interaction and connectivity, letting us tell stories and enjoy each other's company, then we're using it appropriately.  If we're eliminating that part of life, then we're not.  And it's interesting.  Obviously, as you've gathered, I'm huge into technology.  I have been my whole life, since I got my first computer as a little kid, four years old.  I've loved technology, but at least once a year I enjoy going into the mountains and backpacking without a piece of technology with me, for a couple of weeks sometimes, a few days other times, just being with my kids or family, or even by myself, and with people I meet on the trail.  I think it's important for us to disconnect from technology every once in a while, get back to our human roots, and enjoy what it means to be human.  So there's got to be a balance in all things.

 

ZBP

            Nathan, I really appreciate you taking the time to be on our podcast today.  It was a great pleasure of mine and I’m sure our listeners will enjoy everything that you had to say, and I just wanted to say thank you.  It was great meeting you and great having you on.

 

NW

            Yeah, it's been fun to be here.  Those were great questions, and I really enjoyed the interaction, so thank you, Zach.

 

ZBP

            Great.  Thank you.