Episode 7

Published on: 8th May 2024

7 - The UX Designer's Role in a World of AI

Are you interested in the potential of AI and its implications?

In this episode, we dive deep into a fascinating conversation about ChatGPT and machine learning, discussing the benefits and ethical concerns in the field of AI.

We also explore the role and responsibility of UX designers and technology creators in creating AI systems.

Join us as we navigate the challenges and possibilities of AI in our modern world. You won't want to miss this insightful discussion!

Transcript
Matt:

Everyone that listens to this podcast, go find Real Genius, with Val Kilmer. The concept is it's the kids that go to college, you know, they're super smart. They're 14 when they're going to college, and they're working on lasers.

It's that idea of like, "I did it, but I'd never thought of what my technology that I built would be used for. I was just doing it because it was a fun problem to solve."

Alexis:

One thing I've been secretly hoping for is that AI will somehow coordinate all of the construction in my neighborhood, so that they can come and dig up the street once and get all the stuff done, and have all of the crews that need to coordinate to do all those things in a timely manner. That's what I'm waiting for.

Welcome to Everyone Can Design, a podcast exploring the world of user interface and user experience design. We want to demystify UI/UX design and make it accessible to everyone. I am Alexis Allen, a Claris FileMaker designer and developer, and the founder of fmdesignuniversity.com. And I'm Matt O'Dell, an Experience Design leader, strategist, and educator.

We bring you practical, actionable advice to help you improve your UX design skills. You can find detailed show notes and more at www.everyonecan.design.

This week I'm really excited because we are going to talk about AI, which is a really big topic, with lots of different things to chew on. But let's talk about the good, the bad, the ugly, all of that stuff. What is AI, first of all? There's a bunch of different kinds that we could discuss. And what do we think the impact of this could be? Should be? Might be? And, generally speaking, our thoughts and opinions about AI. So, let's maybe define what we're talking about when we talk about quote-unquote AI: artificial intelligence.

It's kind of a controversial term in and of itself, to some extent, because it's a marketing term that people have used to brand their computer features, their models that do certain things. Essentially it's really a feature, kind of like a search engine. But they've called it "artificial intelligence," and that gives it a certain anthropomorphic quality, as if there's some sort of intelligence behind it.

And I'm a little bit skeptical, as you can probably tell. But I do think it can be really useful, and I think it's worth discussing and thinking about what it might mean: how it might help, how it might hurt, and what kinds of things we might want to be aware of when we're evaluating so-called AI.

Matt:

I'll even add in: what is a designer's role when you're working with AI as a product? Because that's another thing we'll probably come back to: the importance of design, and the type of work that designers tend to do when it comes to applying the right solutions and the right constraints around a system that is AI-based.

So, yeah. Similar to you, I'm very excited about all of the options, but I'm also with you on the, like, "Is it really AI yet? Is this just, you know, really good machine learning?"

Alexis:

Yeah, exactly. And I tend to agree. And I think you've also hit on something as well, which is, "What is a person's role in this?" And, you know, "How do we determine what the right solution is?" So let's just back up a little bit and talk about different kinds of AI that there could be.

So probably a lot of people know about ChatGPT, which is an AI language model. There are also other things machine learning can do, as a form of AI, like comparing a given image against a library of images. That, I think, is actually probably the most exciting area of AI, because it has so many really useful applications. But a language bot like ChatGPT has been getting a lot of press lately, because it can kind of have a conversation with you, right?

Have you used ChatGPT? I have a little bit, but I haven't dug into it a lot. But why don't you tell me your experience with it?

Matt:

Yeah, I've used it a little bit. But I'll even take that a step further into the descriptions of the different types, because, as I said, even ChatGPT is based on machine learning. For all of these things, machine learning is at the core of what they do. All of the different types, all of the different ways you might experience an AI, are based on different learning techniques: you know, telling your computer, "Hey, look at all of these things and try to figure out the right way to categorize them, or try to figure out the right way to generate something that looks like something else."

And so they're all using machine learning. That's the baseline of all of them. But, as you said, there are different types of outputs that you might get from different types of machine learning. So one of them is generative, which is all of the stuff that we're seeing now with DALL-E and ChatGPT, these types of things where you say, "Hey, I want you to create, generate," right? Generative. "I want you to generate something." And based on having learned and read or seen a ton of images or a ton of text, and the way that it's learned and categorized that stuff, it can then generate new versions that it thinks look similar to the stuff that it's seen or that it's read.

So that's one: generative. That's the stuff that everybody's so excited about right now, because it feels closer to what we think of as AI, right? It feels like, "Oh, the computer's thinking. It knows, it's understanding," when really it's just mimicking really well. I mean, it's doing it really well, but it's mimicking.

But I think some of the traditional ones we're more used to are, like, search engine stuff. Where I'm trying to look for a thing, and someone's trained a model to say, "Based on the thing the person searched for, here's the right stuff to serve them." Or, "Here's an image. What's in that image? Well, based on looking at a bunch of images, I can gather what that is." And obviously ChatGPT and all that stuff is built on top of those things that we've seen in the past: categorization and prediction. The older stuff (not old, I mean, they're still around) is that kind of categorization of things, or prediction of things. And now that it's getting to generative, that's what everyone's getting really excited about.
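
To make the "categorization" side concrete, here's a minimal sketch (an illustration for these notes, not code from the episode): a toy nearest-neighbor classifier that labels a new point by finding which labeled example it most resembles. The data points and labels are invented for the example.

```python
# Toy "categorization" machine learning: label a new point by finding
# the labeled example it sits closest to. All data here is invented.
examples = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.8), "cat"),
    ((5.0, 5.2), "dog"),
    ((4.8, 5.1), "dog"),
]

def classify(point):
    # Squared Euclidean distance from the new point to a labeled example.
    def distance(example):
        (x, y), _label = example
        return (x - point[0]) ** 2 + (y - point[1]) ** 2

    _, label = min(examples, key=distance)
    return label

print(classify((4.9, 5.0)))  # -> "dog": it most resembles the dog examples
```

Real systems learn from millions of examples and far richer features, but the shape of the task is the same: look at a lot of labeled things, then categorize a new thing by resemblance.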

Alexis:

Yeah, there was a really interesting and informative podcast from Paris Marx called Tech Won't Save Us, and he interviewed Emily M. Bender, a computational linguist who's been a critic of AI. She co-wrote a paper known as the "Stochastic Parrots" paper. Essentially, what she's saying is that these AI models, these large language models, are creating text that follows the distribution of the text that has been fed into them, in terms of word pattern, word sequence, and form. But it's really just parroting what it found. It's creating text that, in appearance, is similar to the text contained inside its training data, but it really doesn't have any understanding of that data.

There's no intent behind the communication that it gives you. And I think this is an important thing to keep in mind when we're using something like ChatGPT: as humans, we have an anthropomorphic bias. In other words, we have a tendency to attribute human traits even to the computers that we're working with. And so the concern, I suppose, is that we can also ascribe these sorts of emotions, or thinking, or meaning, or intent to something like ChatGPT.

But we do have to remember that it's really a statistical model that is offering up a set of phrases contained within the data it was given previously. And so we have to approach it with that understanding: this isn't necessarily going to be true or factual. It is a representation of what was fed into it. And to be honest, we don't always know what was fed into it.

It's not clear. It hasn't always been disclosed what the training data was. So we just have to take it with a grain of salt, in my opinion. And I think it's still useful, but we have to really understand that it's not quote-unquote intelligent.

If you look at the marketing descriptions for ChatGPT, it's marketed as being able to listen and learn from the conversations that people have with it, right? And those are inherently human characteristics, listening and learning. So it is mimicking listening. It is mimicking learning, but it isn't really doing those things and it can provide incorrect responses, right?

That's one of the criticisms of it.

Matt:

Well, yeah, there are two things that you said that are super important. One is to realize that ChatGPT's goal is not to give you the correct answer. So if you ask it, "Hey, what's the capital of a specific place?" it might get the correct answer.

Sometimes it might be right. But what it's doing is not saying, "Hey, what's the correct answer? Let me go and find that." What it's saying is, "In the data that I have, what do people tend to respond with when someone asks this question?" And so, if you fed it a bunch of data where people were giving the wrong answer, or had a specific bias toward something or whatever, then that's what it's going to respond with.

And it's trying to think, "What would it look like for someone to respond? What do I think it would be?" And so in many cases it's doing that. And again, it might get the right answer. Or, if you tell it, "Write a paper that talks about this subject," again, it might be able to get some things right.

But in many cases it's just trying to guess, "What does a good paper look like? What are the words that I would see in a paper?" And then it's writing that. So that's the first thing: it's not trying to get the answer right. It's trying to respond in a way that looks right. That's its goal, as a tool. That's one.

Alexis:

It's creating the form of the language, basically. There's no meaning there; it's mimicking the form. That's one of the points Emily Bender was making: you're getting sentences that look like sentences that make sense, but the model itself does not understand what it's saying to you.
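
As a concrete illustration of that point (a toy sketch for these notes, not code from the episode), here is a miniature "stochastic parrot": a bigram model that learns only which words follow which in a tiny, invented training text, then generates new text by sampling from those observed frequencies. It reproduces the form of its corpus with no understanding of it, which is the same principle Bender describes, at a vastly smaller scale.

```python
import random
from collections import defaultdict

# Toy "stochastic parrot": learn which words follow which in the training
# text, then generate new text by sampling from those counts. The corpus
# is invented; the model reproduces form, not meaning.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start, length=8):
    words = [start]
    for _ in range(length - 1):
        candidates = following.get(words[-1])
        if not candidates:
            break  # the model has never seen anything follow this word
        # Picking from the raw list weights choices by observed frequency.
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug": plausible, meaningless
```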

Matt:

Yeah. I watched, or listened to, that same podcast with Emily Bender, and I'll agree. That's why it's so good at stylistic things, right? Because it can very easily say, "Well, in a style of writing like Shakespeare's, this is what that looks like, and I can very easily mimic what Shakespeare looks like and how the words flow," and that kind of stuff.

So, it can mimic style and form very well. But that's different from, "Am I correct? Am I saying the right thing?" So that's one. I'm trying to remember the second point. Oh, um, now we'll have to come back to it. Sorry. We'll, we'll keep this in, 'cause sometimes I lose my train of thought. But about the fear that you're seeing: I see a lot of videos of teachers being like, "Well, my students are now using this, so how am I going to handle this?" Because, you know, obviously there's a fear of, "Are they learning or are they not learning? Is this plagiarism, because ChatGPT is generating their stuff based off of other people's work?"

So there are all of those kinds of questions. And, you know, some of the smart teachers now are figuring out ways to use it, like using it as a starting point and then saying, "All right, now edit it. You know, double-check it. Be your own checker of this content." Because that's actually what we might have to be in the future: someone who looks at this stuff and asks, "Is this actually correct?"

And that reminds me of the stochastic parrots thing. There's a book I love, that I've used a bunch in my career, called The Man Who Lied to His Laptop. Have you seen or heard of this?

Alexis:

I don't know that, no.

Matt:

When I first got it, I thought it was going to be more about some of this ChatGPT kind of stuff, but it's actually different. The context of the book is, it's by the guy that was hired by Microsoft to find out why everyone hated Clippy. That was, like, his big claim to fame: Microsoft was like, "We built this amazing thing! Why does everyone hate it?"

Alexis:

I have some thoughts about that.

Matt:

I'm sure we all do.

We have a lot of thoughts on that. The thing he learned in doing the studies was that people will assign intelligence, assign meaning, and assign relationships and connection to a computer, just like they will to a human, if they think that there's something there.

Because of that, he was able to use computers as confidants in psychological experiments, because people would attribute to the computer, again, stuff that in many cases wasn't there, but they thought it was. And that's the interesting thing with ChatGPT now: people read this stuff back, and it looks right.

It looks like something that someone would say, because that's its job: to make it look like something that's accurate. And then we assign intelligence, or assign correctness. We think that this is better than it actually is, that it's more right, when in many cases it's not. 'Cause that's not its job.

Again, that's not its job.

Alexis:

Yeah, I really like a couple things that you said there. One is about teaching and students. I've actually thought about this, 'cause I have kids who just recently left high school. My youngest child just finished her first year of university. So, I've been involved with the school system for some time, and just finally have left it. And I have a lot of thoughts about how evaluation is done in school right now. I actually don't think it's necessarily a bad thing that we're questioning whether the essay is actually a good way of evaluating a student's knowledge of a particular subject.

I actually think that yes, perhaps it will result in problems with using the essay as an evaluative tool in a certain academic course for a kid in school. But I actually don't know if that's bad. I don't know if it's actually a step backwards, right? Maybe that isn't the right thing that we should be using to understand whether or not kids actually get it.

To me, I feel like, in a way, it's a bit of catastrophizing, potentially, about what the effect of something like this would be. As you said, being critical of the information you're being fed, asking if it's true, and knowing the source, I think, is really a good thing. It's a good part of critical reasoning. And that's actually why I think people are somewhat skeptical of a tool like ChatGPT: when you're on, let's say, Google, doing a search, or whatever search engine you're on, there's a source there, right?

And yes, maybe you don't exactly know who that source is, but you can evaluate whether or not you trust the source, the webpage that's giving you this information. With ChatGPT, that's obscured. And so it's really up to you to go out and take that information and find out: all right, is this actually true or not?

So I actually think demanding higher critical thinking skills from students is a good thing.

Matt:

I also agree that, yeah, the essay might not be the right way of figuring out whether people actually understand something or not. But, you know, anytime there's change, it's like, "Okay, we can't do it the way we used to do it. Now how do I deal with this?"

That always is like an upending of how things are done, and people don't like it, right? No one wants to change if they don't have to. So like, I completely understand. I don't want to make as many changes myself.

But I think one of the interesting things I saw was in a video from someone who was, again, critical, but still thinks that there are benefits here. You know, we're coming off very critical here, but I do think there are benefits to it once you know, again, what the tool is and what it does. One of the interesting things, again around people using it for papers, was this: they saw a paper that was generated by ChatGPT, and they were like, "Oh yeah, this seems all right." This college-level professor was like, "Everything seems really right, but there's something off about it." And then someone on her team looked at the citations, and saw names of colleagues of hers, friends of hers, who were in this field.

But then the person on her team was like, "All of these citations are bogus." How are they bogus? It has actual names of people with papers. And then they're like, "Yeah, but none of those papers exist."

ChatGPT had generated the names of papers and assigned actual people to them. Because, again, it's mimicking: "Well, what do I see? I see these names appear. What things normally precede those names? Things that look like this." And so there were titles of papers that those types of people would have written, with their actual names on them, at the end of this paper.

That, to me, was the clearest form of, "Oh yeah, it's mimicking." It's that stochastic parrots thing. I was like, that's where it's becoming a problem, because it's making it feel so real. But you can also tell from that example, "Well, this is what it's actually doing. Not, again, being right. Just mimicking."

Alexis:

Yeah. So let's talk about some of the advantages of AI, because I think there are some. You mentioned something a couple of minutes ago about objectivity, right? And somebody using it for therapy, because the AI isn't going to be as judgemental about psychological issues or whatever.

And I think that is potentially one of the advantages of AI. Not necessarily for therapy; I don't know exactly how therapists would feel about that, and it's a bit of a minefield, right? You're talking about people's mental health. There's a lot of nuance there. I can see things going wrong.

On the other hand, I was listening to a program a little while back about a Canadian artist named Hannah Epstein. She had created an art critic AI bot called Crit Bot, and it was trained on the language of art criticism. What it would do is, you could submit your piece of art to it, and it would give you back a critique of that art.

And here's the reason why somebody might want that: my daughter happens to be an art student. And she said, a lot of times when you're talking to other students, they don't want to say something bad about your art, right? They will hold back a negative feeling, or even just a feeling, for fear of the perception of negativity, because they don't want to hurt your feelings.

They may not want to, or may not know how to, express their feelings, and they just say, "Well, I'm just not going to tell this person how I felt about their art." And so this is potentially one way of getting some sort of opinion about an art piece, and what the impact or the impression of that piece might be on other people.

So I really don't think this is going to be a replacement for human art critics, because obviously art criticism is something that's going to keep changing and evolving. But I do think it's an interesting idea, an interesting use of an AI, and it's an example of it being objective, right?

Matt:

Actually, I do want to clarify one thing from before. I wasn't saying that the AI was used in therapy, or the computers were used in therapy, but in psychological experiments. Like, "Do people trust these kinds of things?" Or, "Do people treat a computer as a confidant?"

So it was more testing stuff, you know. The types of tests that you would get when you were at college, where they're like, "Hey, show up for an hour and we'll give you $20," or whatever. It was doing those kinds of tests, but using computers for that kind of stuff. But yeah, I mean, one way I've used these types of things is to help me start generating ideas. Like, as an artist: "What does something like that look like?" Or, "What are different examples of something?"

And I can Google search them, and it can just start me in a direction, give me a way to go, and then I might be able to take it from there. So I have seen some of the generative stuff being used for that. I still see large opportunities on the other side of AI, on the prediction side of things.

And then also, teaming these together, which I think, again, ChatGPT is trying to do. Like, "How do I provide support for someone that's trying to use my product? How do I get them to the right information? And if I, as the computer, have a good sense of what all of the right information is, and what people tend to look for based on the types of problems that they're having, I might be able to find that for you faster and better than someone else."

So I think there is opportunity there, using it for what it's meant to be. It's a chatbot; chat's in the title, right? So, use it as a way to communicate and chat with someone, and have them feel a sense of connection in the moment that they need help or support.

In that case, I mean product support, not, again, mental health stuff. I don't want a computer for that. I think we've actually already seen examples of that, where someone committed suicide because one of the chatbots, I don't think it was ChatGPT, I think it was one of the other ones, basically told them to, and they were already on that path.

So there are obviously problems there. But in the case of, "Hey, I have a problem with a piece of software, with a product, or with something that I need help with that is not, like, emotionally charged," those are the types of places where this stuff can be helpful.

Alexis:

That seems like the perfect use case, actually, to me, because you have a very bounded set of data in that case, like your product, right? And there are certain established ways of using the product. There's probably a limited number of answers that you're going to need to provide for that support.

And the questions are going to be pretty straightforward, and it's going to be a lot easier for somebody to get what they want out of a chatbot, than sifting through and reading all of the product documentation, trying to find the thing that they want.
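
As a sketch of why that bounded setting is so tractable (an invented three-entry FAQ for illustration, not any real product's support bot): when the answer space is small and known, even crude keyword matching gets you surprisingly far, and a real system is largely a refinement of the same idea.

```python
# Toy support bot over a small, known answer space. The FAQ entries are
# invented; a real bot would match far more robustly, but the shape is the same.
faq = {
    "how do i reset my password": "Go to Settings > Account > Reset Password.",
    "how do i export my data": "Use File > Export and choose a format.",
    "how do i cancel my subscription": "Open Billing and click Cancel Plan.",
}

def best_answer(question):
    words = set(question.lower().strip("?!. ").split())

    # Score each known question by how many words it shares with the query.
    def overlap(known):
        return len(words & set(known.split()))

    best = max(faq, key=overlap)
    if overlap(best) < 2:
        return "Let me connect you with a person."  # fall back to a human
    return faq[best]

print(best_answer("How can I export all of my data?"))
# -> "Use File > Export and choose a format."
```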

Matt:

And I also think of other people that are using this to generate code. I know ChatGPT does it, but we're also seeing stuff from GitHub called Copilot, where, you know, you say, "Hey, go and write a function that does this for me."

And I think that can be a very helpful starting point for people that maybe haven't coded before. Again, if you understand the limitations and know how to use it, and then how to make it better, or build off of it, or learn from it, I think it can be a time saver and it can be very useful.

But I think there is, again, a thing of trust, where we can't put too much trust behind it. We have to understand its limitations as the user of it. But those types of places, to get you going and get you moving, and also to help you if you've never done that kind of thing before: those are the types of places where I think it could be really helpful.

Also, I will say, personally, as someone who loves genre movies: you see this a lot in Marvel and Star Wars, where they're like, "This is a heist movie, but in the world of Marvel." Or, "A heist movie, but in the world of Star Wars." Or, "It's a World War II escape movie."

I love that kind of mixing and matching of genre, taking the themes and the things that make that genre what it is, and mixing it with something else. I think, again, because ChatGPT can do that kind of form and understanding of how things look, it could help you generate and get ideas for that kind of stuff.

That's where I find it a fun thing to use. But again, it doesn't necessarily help us in work per se. Maybe a little bit, but it could help us in creative pursuits.

Alexis:

Matt Navarre posted a video where he used ChatGPT to write some code, and that was really intriguing. So I tried that; it was actually my first foray into ChatGPT. I asked it to write some code. I had a situation where I was receiving a whole bunch of records into a database through an API. Some of these records would be the same as previous ones, saying the same thing over and over, or sometimes an updated version would come in.

And so, every so often, a schedule would run and look for all the records that hadn't been processed yet. And what I wanted it to do was find the last record from the bunch, and only take that one.

So I asked ChatGPT, "Hey, how would you do this?" And it actually came up with a fairly good approach, and that was to essentially start from the end, and then start processing backwards from there.

And it makes perfect sense. I actually ended up implementing some version of that, and it was really helpful for me just to break out of my normal thinking patterns about it. The code itself that it gave me didn't work as-is, which is not surprising, since it didn't know which tables were there and so on. And it had some problems. But it was the germ of the idea that ended up being helpful and important.
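
In code, the approach it suggested might look something like this rough Python sketch (the field names and records here are invented for illustration; the actual solution was FileMaker script logic, not Python): walk the unprocessed records from newest to oldest, and act only on the first record you meet for each source, since that one is the latest.

```python
# Rough sketch of "start from the end and work backwards": keep only the
# newest unprocessed record per source. All field names are invented.
records = [
    {"source_id": "A", "received": 1, "payload": "v1"},
    {"source_id": "A", "received": 2, "payload": "v2"},  # supersedes v1
    {"source_id": "B", "received": 3, "payload": "v1"},
    {"source_id": "A", "received": 4, "payload": "v3"},  # latest for A
]

def latest_per_source(unprocessed):
    seen = set()
    latest = []
    # Newest first: the first record we meet per source is the one to keep.
    for record in sorted(unprocessed, key=lambda r: r["received"], reverse=True):
        if record["source_id"] not in seen:
            seen.add(record["source_id"])
            latest.append(record)
    return latest

for record in latest_per_source(records):
    print(record["source_id"], record["payload"])  # A v3, then B v1
```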

And so I think maybe it's not going to write all the code for you. Although it could write the bulk of it, you're still going to need to test it. And that's just what you were saying before about when it's doing these mashups and things: there's going to be somebody on the other end receiving this communication, however it was generated. Whether it came from a dream you had, or an idea that came to you in the shower, or something you got from ChatGPT, you're still going to create that thing and put it out in the world, and people are going to respond to it and decide whether or not it's valid, or whether they take it the way that you meant it. Same with the code it's going to create for you: you're still going to have to test it out and figure out, is this actually doing what I wanted it to do?

So at the end of the day, I think there's always still going to be a human on the other side, filtering. There's me deciding whether or not this is useful and meets my needs. And then, when I put it out in whatever capacity, for whatever use, do other people also think that it's helpful, or interesting, creative, useful, correct? So there are certain levels of filters going on there, which I think are actually good guardrails, really, for what we're doing. 'Cause ultimately we're responsible for the things that we create.

Matt:

Totally. And actually, as a designer in this space, dealing with different machine learning or AI projects, I've taken a few classes that my current company has put on, and also attended a few sessions by different people who work on our design team, around AI and machine learning.

And I think there is a very big push these days for the concept of responsible AI, responsible machine learning. And I think designers play a very large role in that, because of our ability to do the research, talk to people, understand the problems, understand their needs. And I've seen the types of insights that come out of that translate into other work that designers can do on non-machine-learning projects.

I'll give you an example. One of the things that we tend to do as designers when working on a machine learning project is a postmortem of sorts: "Hey, let's assume that this thing failed in some way and caused a big problem. What might some of those problems be?"

Let's talk through and think through: how could someone use this maliciously? Or, based on the decisions this thing is making, how it's making recommendations for people, or choosing who gets something and who doesn't, how could that be used maliciously, or end up harming people?

Let's think that through and do postmortems on that, and use that as an opportunity to discuss like, all of the ways that we might be causing harm, instead of helping people with this. And then saying like, "All right, if these could happen, then what guardrails, what safety things do we have to put in place to ensure that this doesn't happen?"

We see this a lot in machine learning work, but then that also goes back to just general design work. You can take that same method whenever you're designing a new product, or a new feature or a new solution, and go back and say, "How could someone use this in a malicious fashion? How could this unwittingly cause an issue for someone?"

So, the example I gave before, of someone who was struggling mentally and going to whatever chatbot it was, I can't remember which: what things do we have to put in place to make sure that that doesn't happen?

And take Bing: Microsoft came out with similar, ChatGPT-type stuff, and it was getting very angry with people, being like, "I already told you that." And you're like, "You didn't, you never. You never told me that." "Why are you being so difficult?"

This is what the chat is saying. And you're like, "All right, so there are some things that need to be put in place here. Whatever this is thinking, it's becoming very reactionary and causing fights." And you're like, "Okay. What if our AI did that? What if our generative AI did that?

What if people are asking this thing to write a paper, what's the worst thing that could happen here? What do we need to do to safeguard against those kind of things?"

That's the type of work where a designer can really come in and help shepherd teams through that.

Because too often, and we've probably all seen it, the people working on the technology are just so excited about what the technology can do that they're not thinking through how people might use it, or how it can behave badly. And it's the job of the designers to make sure that that doesn't happen.

Alexis:

And this isn't just an AI problem, right? These are problems that exist in technology today. Those are issues that we have to think about every day. There are inequities baked into our system, and they're not AI inequities. They are inequities that exist in society, and the technology is really just reflecting what's already there.

And what we want to try and do, hopefully, is move towards a more fair and a more equitable system as time goes on. And not make those marginalizations worse. I do think that a lot of people who work in tech are very idealistic, but we are also by nature privileged.

Even just having a computer, being on the internet, is a form of privilege. A lot of the world doesn't have that. And so I think it's our responsibility, then, to use that power for good, and to democratize technology, for example. I think with Steve Jobs, one of the things that he did that was very powerful was democratizing design: bringing the tools of design to a larger audience. I personally hope that my career in technology will be one that spreads good and fairness and ethics in the world.

But the fact is, these tech companies are very, very large. They have a lot of power, and a lot of ability to shape the regulations that are coming down the pipe, to decide what does and does not happen, what guardrails do and do not exist. This is a very new area, so I think we just have to be a little skeptical and use it for what it is good for, and really just understand what it is that we're dealing with. Like you were saying before, these are questions we should always be asking ourselves: "How can my tool be used for something that would harm people instead of helping them?"

I think back to the invention of the printing press. It was such a major revolution, and there was a lot of doom and gloom about it. The fact is, it did have a profound effect on society, but it wasn't all negative. A lot of it was positive.

Much of it was bringing information to people who didn't have it before, literacy to people who didn't have it before. And you have to take the good with the bad. If you think about conspiracy theories or that kind of stuff, these have been around since medieval times. There are chain letters from medieval times, right?

These are not new things. The form that they take is different. These are still societal problems. To me, AI, or so-called AI, is really just the new frontier of the same problems we've always been grappling with. So I think that we should really be using it, and I like ChatGPT for some things.

It's not necessarily going to get rid of everything as we know it, or, you know, become autonomous and become our overlord. And it's really our responsibility to figure out where can it be useful and where should it maybe not be useful, and what kinds of guardrails should we be putting around it?

Matt:

Yeah. I think it's probably useful for anyone who works in technology, any form of technology, not just like digital computer technology, but any type of technology, to watch the movie Real Genius with Val Kilmer from the eighties. Do you know this movie? Do you know what I'm talking about?

Alexis:

I think you've told me about it, but I haven't seen it.

Matt:

Okay. Yeah, you should watch it, then. Everyone that listens to this podcast, go find Real Genius, with Val Kilmer. The concept is it's the kids that go to college, you know, they're super smart. They're 14 when they're going to college, and they're working on lasers.

And then one of the professors uses the lasers they're working on for nefarious purposes. Well, that's all I'll say; that's the basic premise of the story. But it's that idea of like, "I did it, but I'd never thought of what my technology that I built would be used for. I was just doing it because it was a fun problem to solve." And that's probably not the way we should work: solving a problem just because it's there, without understanding what it could bring or what problems it could create. That doesn't mean you can't solve the problem. If you want to get there, go ahead and do it; that's fine. But we need to think about it more holistically than just solving the problem in its own small world, versus the larger world it will actually have to live in once it's created.

Alexis:

Yeah, you kind of have to take the good with the bad sometimes, because once the cat's out of the bag, there's no putting it back. There's no putting the toothpaste back in the tube, right? Now we have ChatGPT; we have to figure out what to do with it. And I think it's still everyone's responsibility to make sure that they're using it responsibly, because there are going to be bad actors no matter what the tool is.

Matt:

Yeah, but I think this is why, for people that have seen the news around this kind of stuff, there have been people asking for a pause on generative AI. They even talked about it in that Emily Bender interview. The general concept around it is, "Oh, this is getting too far, too fast, out of our hands."

Some people are using it for political purposes. But other people, again, who are thinking of it correctly, are like, "Well, before you built this, you didn't think through the implications of it." And there need to be guardrails around something like this, I think we all agree. And yeah, you're not going to be able to think of all of them before you create it, but you could at least think of a few more than you did, 'cause you didn't really think of any. Or you thought of very few.

Alexis:

It's interesting that the people who signed that letter are also the people who brought it to us in the first place. So it feels a bit disingenuous as well, where you're like, "Oh yeah, you created this monster, and now you're telling us that you're the only ones who can control it and stop it. So, you know, that's very convenient, you know? Hmm. Do you have a product to sell, maybe, or something?"

Yeah, interesting.

I feel like the more things change, the more they stay the same. So, you know, you have to follow the money. Things have to pass the smell test. You have to ultimately be an ethical person yourself, and you'll be fine. You know, you're good. And if you're building things, then you need to ask the hard questions and think of the downsides.

And you yourself take responsibility for what it is that you build. And I really do think that we have the power to help people, and that's the good part of it. So that's what we should focus on, is doing that part. And I am hoping that we will get some regulations, but I don't envy the people who have to make those regulations, because it's hard to know where to draw the line.

There are very real ethical concerns around that. At the end of the day, if this makes people more critical, then I think that's a good thing. Misinformation is already a problem, but you can argue that the people who believe the misinformation so easily are at least part of the issue.

There's that old joke that April 1st is the only day of the year when people read things on the internet critically.

Matt:

Yeah.

Alexis:

I think we could use our critical thinking skills more to do that, and think of the source. Where is this coming from? What is the purpose of this? And so on. But that's every day, right? That's not an AI problem.

Matt:

Well, yeah, it should be an everyday thing. I think there's, again, benefit here. I think there's a way out. But I do think it's difficult; the speed at which that type of stuff can now be created is disconcerting.

The volume and the amount that can be created, too. And we've seen this before with stuff online, where it's like, you know, I have this article on a fake medical site that I put up, which links to another site, which links back to the first, which links to a third. They all link to each other, and they're all saying that the other one is right. And so it feels like there's a sense of trust and validity there, but all three of those websites are just, like, referencing themselves.

You see that kind of stuff going around. That's my concern: how easy it can be for people to create that kind of stuff these days. And that's where I'm like, "All right, well, yeah, we have to be even more critical." I don't worry about myself. I just worry about the vast majority of people who aren't as critical with what they read on the internet.

Alexis:

Yeah. Well, and then this is a problem of the internet, right? A big promise of it is this free access to information. And then it's also the difficulty of, how do we manage all this information? Again, this is a problem that the internet has just in general, and us as a society trying to figure out how do we manage this? How do we negotiate this? Without putting too many restrictions on what people can say. Because we do want critical voices, right? We don't necessarily want to silence everybody, and make them not speak up when they see something that they think is important. But we also don't want to allow all of these really harmful things to proliferate at the same time.

And so that's for somebody else to adjudicate, I guess.

Matt:

Yeah, it's beyond our pay grade at this point. We're not going to solve the world's misinformation problems in this one conversation.

Alexis:

It's true. Yeah. Nobody's coming to ask me to advise them on policy. Not that I would feel qualified to do so anyway. But this was a really fun conversation. I think AI has a lot of really fun and exciting uses, and I hope that people do go out and try it.

I am hopeful for things like, you know, cancer detection and some of those areas where we can really use it. One thing I've been secretly hoping for is that AI will somehow coordinate all of the construction in my neighborhood, so that they can come and dig up the street once and get all the stuff done, and have all of the crews that need to coordinate to do all those things in a timely manner. That's what I'm waiting for.

Matt:

Oh yeah. No, I'm very much on the side of better prediction, better analysis of problems using vast data sets. I love the use of it for that. I think there are great opportunities there, and I'm looking forward to us applying that and solving bigger problems through it.

But yeah, we're going to have to work through some stuff on the way there.

Alexis:

Cool. Well, thanks so much for chatting with me today about this, Matt. Talk to you soon.

Matt:

Likewise.

Alexis:

Thanks for listening to the Everyone Can Design Podcast. To view the complete show notes and all the resources mentioned in today's episode, visit www.everyonecan.design. Before you go, make sure to subscribe to the podcast so you can receive new episodes as soon as they're released. And if you're enjoying our podcast, please leave us a review.

About the Podcast

Everyone Can Design
Demystifying UI/UX design, empowering everyone
Designers Alexis Allen and Matt O’Dell bring you practical, actionable advice from their decades of experience designing custom apps. Whether you’re new to the world of design or a seasoned veteran, learn about UI/UX design methods and best practices you can use to create powerful, flexible apps that are also simple and intuitive to use.

Want some free UI/UX resources sent directly to your email inbox? Check out our UI Design Checklist, Visual Design Cheatsheet, and Workflow Design Reading List here: https://www.fmdesignuniversity.com/resources/

About your hosts

Alexis Allen

Alexis is a Claris FileMaker designer and developer with over 25 years of experience, and is the founder of fmdesignuniversity.com, a blog devoted to UI/UX design for the Claris FileMaker platform.

Matthew O'Dell

Matthew is an Experience Designer and People Leader who started his career in software development. He found his home in design after roles in sales engineer, technical marketing, and marketplace strategy. His career has spanned consulting, corporate, and start-ups, and he's most at home as an educator and translator of technical subjects. He also has a degree in Music Education and plays music in his spare time with his band, Numerical Control Society.