morgenbooster
Shifting Focus on AI: How it Can Make a Positive Difference
If you scroll through LinkedIn or follow the broader conversation around AI, you’ll notice something striking: the discourse is deeply divided.
Some celebrate the powerful tools and new possibilities that AI has brought with it—promising social innovation and climate solutions that could help build a better world.
Others raise serious concerns about AI's environmental footprint and the long-term consequences for life on this planet.
At 1508, we’re not here to pick sides. Our value proposition promises that all our design solutions will be accountable by default. Responsible first. That means asking difficult questions before jumping on the next big trend.
So to better understand what AI for Good really means, Lisbeth, one of our transition designers, conducted a journalistic investigation, interviewing experts, reviewing research, and gathering examples of AI being used with good intentions.
At this Morgenbooster, she’ll present what we’ve learned, so others can learn, question, and reflect alongside us.
We’ll explore:
Why some are skeptical about embracing new technology as a means to do good
What people are doing with AI right now that could be seen as good
Whether these solutions are truly doing good
What NGOs and companies could start doing to navigate the possibilities of AI
This session is for anyone curious about the potential of AI but committed to doing it right.
Whether you work in a company or an NGO and are exploring AI but don't want to compromise on your values, this is for you.
[00:00:00] Lisbeth: Good morning and welcome to this Morgenbooster. My name is Lisbeth and I'm a transition designer here at 1508, and I'll be hosting this Morgenbooster. But I will actually also have a special guest, and he will tell a little bit about what he does with AI at the end of this session. So I will start and he will take over.
[00:00:25] Lisbeth: And the topic for today is shifting focus on AI: how it can make a positive difference. Actually, when we created this title, half a year ago or something, we discussed: can we shift the narrative a bit and talk about all the good stuff we can do with AI? And then we realized that it's actually not possible just to talk about all the good stuff it can do.
[00:00:51] Lisbeth: You have to balance it with all the concerns and all the critique as well if you want to do good in the world. So this Morgenbooster will not only be about the positive; I will also try to balance it with the critique and the concerns as well.
[00:01:10] Lisbeth: Alright, first example: Be My Eyes. I don't know if you know this app, but it's an app for blind and visually impaired people, where you can take a photo of something and then the app tells you what's in that photo. It used to be an app where you just had a human volunteer telling you what was in that image.
[00:01:33] Lisbeth: But a new feature they got was that now you can take a photo and then an AI helps you in real time to tell you what's in that photo. And it got a lot of reactions right away. This one from Lucy Edwards was one of them.
[00:01:50] Person in video: I'm Lucy and I found catwalk. As you guys know, I'm obsessed with fashion and makeup.
[00:01:56] Person in video: So I wanna know what's inside this, and I'm gonna get Be My Eyes' AI virtual volunteer to tell me what's inside the pages. Let's see if we can do it. I'm gonna open it to a random page and take a picture.
[00:02:10] Person in video: Virtual volunteer: In the first image, there are three models wearing a white dress, a black silver outfit, and a black and orange outfit.
[00:02:16] Person in video: In the second image, there are also three models. Would you like me to describe any of the outfits in more detail?
[00:02:20] Person in video: I'm crying. Oh my God. I've always dreamed of buying books like this. I'm like crying over a book, but it's more to me than that.
[00:02:30] Lisbeth: So she is definitely happy about this app and it helps a lot of people like her.
[00:02:35] Lisbeth: So is this AI for good?
[00:02:42] Lisbeth: We'll come back to that later, because first I would like to take you a little deeper. Here at 1508 we haven't actually taken a stance yet on what we think about AI. We are curious about AI and we make space for all the discussions surrounding AI.
[00:02:51] Lisbeth: So we have people that are very skeptical, we have people that are cautious, but we also have people that are enthusiastic about AI here. And we really like to have all these discussions around AI to make the things we do better and to figure out what stance we want to take in the world with this.
[00:03:10] Lisbeth: And that also led me to think about what Morgenbooster type I wanted to have, and it led me to change the approach a bit. Usually at a Morgenbooster I would come here and tell you what we believe something is and maybe show you a case of something we have done. But this time I changed the approach:
[00:03:29] Lisbeth: I dug up my old journalism skills and did a more objective investigation on this subject.
[00:03:55] Lisbeth: What is AI for good, and can you do AI for good? I went to a conference, I read books, I listened to podcasts, I dived into scientific papers, a lot of different stuff. But I also interviewed people to hear their perspectives on this: scientists, but also skeptical people, critical people, and enthusiastic people, to hear what people are saying about this.
[00:04:03] Lisbeth: To get a map of all the opinions and all the things out there that people are talking about. And of course I also talked to people inside 1508 and discussed this.
[00:04:17] Lisbeth: And before I share a bit of what I learned, just to get one thing in place: when I talk about AI, I don't talk just about ChatGPT and the tools of the big LLMs. I'm also talking about products where AI is a feature in the product, for example for facial recognition, recommendation, or prediction.
[00:04:38] Lisbeth: So also the machine learning algorithms that you can use with AI, but of course also the tools: ChatGPT, Midjourney, and all those.
[00:04:50] Lisbeth: Alright, so what is AI for good? I asked all the people that I interviewed this question, and they did not agree at all. I hoped to get some answer that I could use for you today, but they actually disagreed a lot, ranging from people saying anything that helps people is AI for good, right?
[00:05:13] Lisbeth: To people saying anything that helps us solve the big problems. So if we solve something with the climate crisis or something like that, then it's AI for good. To people saying that AI for good is a contradiction in terms: you cannot do AI for good at all. So lots of different opinions.
[00:05:35] Lisbeth: And I think what I learned from this investigation is that you need to take all the different sides into consideration if you want to do an AI product yourself and want to do good in the world at the same time.
[00:05:51] Lisbeth: I figured out that there are two different aspects, or dimensions, of goodness when we talk about AI.
[00:06:00] Lisbeth: And the first one is the depth of the problem that you solve. So how deep do you actually go toward the root of the problem with your solution? What is it that you want to do in the world with your solution or your product? And the other one is the level of responsibility in your product.
[00:06:20] Lisbeth: So how responsible are you: how many concerns are you looking into and how many risks are you mitigating when you do your solution? For the depth of the problem, you could say that it's a range. It's a bit negatively loaded, but it ranges from symptom treatment, where, for example, if we are talking about unemployment for young people, it could be a solution where an AI helps young people to get a job, and that's good, right?
[00:06:50] Lisbeth: So an AI goes out there and helps people get an actual job. Ranging to solutions that address more the root causes of why young people can't get a job at all, and maybe try to eliminate the systemic problem we have, that when you're young it's difficult to get a job.
[00:07:16] Lisbeth: And then we go to the responsibility part. It is very much linked to all the concerns out there. And as you can see, I learned a lot about what concerns people have about AI. I could probably write a book about all these concerns, and it would take me a long time to go into depth with all of them.
[00:07:35] Lisbeth: But I tried to make it a bit more simple than this and to group them. So I made these three groups. The first type is the human concerns, where the concerns impact an individual or a smaller group of individuals. Then we have all the environmental concerns, which are about the planet and what is in fact impacting our ecosystems and the planet.
[00:08:04] Lisbeth: And then we have the structural ones as well, which are not so easy to comprehend because they're more systemic in their nature. But they are more structural problems, where AI is reinforcing some of the problems out there already.
[00:08:22] Lisbeth: But let me just dive into each of them.
[00:08:25] Lisbeth: So, human concerns. There are a lot, and these three are just the ones that I heard the most when I talked to people. The first one is that AI often gives biased results. We have heard that a lot regarding ChatGPT and all the language models out there.
[00:08:43] Lisbeth: Because they're based on the things they're trained on, the results can be biased as well. And then there is the fear that we trust too much in what the AI says, because AIs are usually very direct in what they recommend. They say "do this" or "I'm sure about this," but actually it's more like a confidence level.
[00:09:10] Lisbeth: It's not really sure about something, it just tells us. But because it just says something, we trust it, and we tend to trust it a bit too much. And then there's the last one: the loss of control. That's when we start to give some tasks to an AI and give the AI the possibility to take decisions for us.
[00:09:31] Lisbeth: Then we lose control of what is happening, and then it's also more difficult to actually intervene if we are not happy about what the AI does. So these are the three main ones; there are definitely more than those.
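The trust-and-confidence concern described above can be made concrete with a small sketch. All labels and numbers here are invented purely for illustration: a model typically outputs a probability distribution over labels, and it is the application that decides whether to phrase the top label as a certainty or as a guess.

```python
def describe(label_probs, threshold=0.9):
    """Phrase a model's top label confidently only when its
    probability clears a threshold; otherwise hedge."""
    # Pick the label with the highest probability.
    label, p = max(label_probs.items(), key=lambda kv: kv[1])
    if p >= threshold:
        return f"This is a {label}."
    return f"This might be a {label} (about {p:.0%} confident)."

# A 55% "woman" guess should not be presented as a fact.
print(describe({"woman": 0.55, "man": 0.40, "child": 0.05}))
# → This might be a woman (about 55% confident).
```

Surfacing the probability, as the hedged branch does, is one simple way a product could counter the over-trust problem described above.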
[00:09:45] Lisbeth: Then we go to the environmental concerns, and this is like putting your hand into a bee's nest or something.
[00:09:51] Lisbeth: There are tons of opinions and numbers, and it's really difficult to figure out what's up and down regarding the environmental impact of AI. Every time you hear a number, like MIT here saying that the power required doubled in a year, somebody comes and says, yeah, but it's only partly driven by generative AI and maybe not only AI. Everything is questioned here, so it's really difficult.
[00:10:27] Lisbeth: What most people talk about is the electricity usage, because AI uses a lot of electricity. Besides that, when we use a lot of electricity, that also puts pressure on the electricity grid. And we talk a lot about using green power to power AI, but because we use more and more energy, we cannot get enough green power to actually support that use of AI.
[00:10:56] Lisbeth: So that's also one of the main critiques. Then there is the talk about water. The data centers out there where the AI is hosted, a lot of them use water to cool down the environment in the data center. That of course takes water from somewhere and puts pressure on the ecosystems around the data center.
[00:11:17] Lisbeth: And then there is the talk about the materials and the processes involved when you produce especially the chips for AI, where they use a lot of chemicals and substances that are not too good for the environment. But yeah, there are lots of different things you could talk about here.
[00:11:41] Lisbeth: Then we have the structural concerns, and I won't go very deep into those because they are very complex and difficult to comprehend. But the two main concerns here are the rebound effect and green growth. The rebound effect: I don't know if you have heard about it, but when you make something more efficient, you often also save some money, and then you actually end up using the thing more than you did before.
[00:12:03] Lisbeth: And then it doesn't lead to lower consumption, it leads to higher. An example could be light bulbs: in the old days we had a light bulb, and we remembered to turn it off when we left the room because it used so much energy and it was costly to keep the light on.
[00:12:20] Lisbeth: Then we got LEDs, and they were more efficient, so they were cheaper to have lit. And then we started forgetting that we had to turn off the light when we leave the room. We maybe also bought more lamps, because the LEDs were more efficient and cheaper to have. So we actually started to use more energy than before we got the LEDs.
[00:12:40] Lisbeth: So that's the rebound effect, shortly described. The other structural concern around AI is green growth, and that's a big talk in itself. But it's the idea that you can grow your business while doing green stuff and saving the planet at the same time. And there are a lot of people out there saying you cannot do this.
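The LED story above can be sketched as simple arithmetic. The numbers below are invented purely for illustration, not real measurements: per-lamp efficiency improves a lot, yet total consumption still rises because behaviour changes.

```python
def total_energy_wh(watts_per_lamp, lamps, hours_on):
    """Daily energy use in watt-hours."""
    return watts_per_lamp * lamps * hours_on

# Before LEDs: one 60 W bulb, switched off diligently (4 h/day).
before = total_energy_wh(60, 1, 4)   # 240 Wh/day

# After LEDs: 8 W per lamp, but three lamps, often left on (12 h/day).
after = total_energy_wh(8, 3, 12)    # 288 Wh/day

# Per-lamp efficiency improved by ~87%, yet total use went up: rebound.
assert after > before
```

The same shape of calculation applies to the Carbon Re-style efficiency concern later in the talk: an efficiency gain per unit can be outweighed by growth in the number of units or in usage.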
[00:13:01] Lisbeth: So when you go out there and say, I want to save the planet, but at the same time I have a company that I want to grow, people say that you cannot do that. I will show some examples where I'll dive more into these, just to make it a bit easier to know what I'm talking about. Note that the human and environmental concerns are the things people talk about when they talk about responsible design and responsible AI.
[00:13:23] Lisbeth: Whereas the structural concerns are not something a lot of people talk about. That's more the critics out there and the skeptical people that are discussing this. But it's definitely a thing that you need to be aware of if you want to design an AI product that is used for good.
[00:13:41] Lisbeth: And you could put it in a triangle like this, so you have some kind of element you can use when you're looking at a solution using AI. I will try to use it on some real world examples, because that helps make this more concrete.
[00:13:57] Lisbeth: So let's go back to the Be My Eyes example here.
[00:14:03] Lisbeth: Let's try to assess this one. So where are we on the problem solving impact? Again, it's more like a scale, but here I would say that we are more in the symptom treatment part of problem solving, because we are not changing how the visually impaired are moving around the world.
[00:14:27] Lisbeth: We are just helping them in that specific situation with something. We are not changing how society looks at visually impaired people. So more in this department, I would say. And then if we look at the possible concerns here, there are many. I just took the most relevant ones.
[00:14:46] Lisbeth: We have of course the environmental impact, because the AI they use is OpenAI's, and it's a big multimodal LLM that uses a lot of energy. So every time a person sends an image to this AI, it uses a lot of energy. And what can you do about this? Because it is very easy, if you're doing a product, to use the OpenAI API and their tools.
[00:15:13] Lisbeth: But what can you do yourself if you want to mitigate that concern or risk? I think the only thing you can do is look at whether you can use some other models that are smaller. Maybe here they could have combined a small model with some OCR technology that is less energy consuming.
[00:15:32] Lisbeth: But again, it's difficult. If we look at the human concerns here, there's this thing about the AI sounding confident. When she asks it something, it just tells her: this is in the image. It's confident about what's in the image. But it could actually be wrong, while she trusts it a lot.
[00:15:55] Lisbeth: So if it said to her, you have to go to the right, she would probably go to the right, because it's telling her that. And because there's no human in the loop here, that could lead to situations that are not favorable for her. So here it could communicate its limits, or some kind of "how sure am I about this?"
[00:16:16] Lisbeth: But yeah, it's difficult. Then there are the biased results. If she takes a photo of a person standing on the street with long hair, it would probably say there's a woman on the street. But is it a woman? It could be a man. Maybe instead we should push the AI models to be more neutral, and maybe tell it to say it's a person instead.
[00:16:42] Lisbeth: Or you could offer multiple views of it: it could be a man, it could be a woman. And then there's a risk of manipulation here, because she doesn't really know and she cannot see it herself. If she takes a photo of a lot of products in the supermarket, it could tell her this vegetable looks really nice and green, and this one over here looks a bit brownish.
[00:17:08] Lisbeth: This one is very beautiful. All the words it's using to describe things could manipulate her to actually choose one product instead of the other. So that is also something you need to be aware of. And again, here you could work with getting the AI to give more neutral interpretations of what it sees.
[00:17:27] Lisbeth: Yeah. And there are of course also structural concerns here that you could go into, but I will just take another example where the structural concerns are more important.
[00:17:38] Lisbeth: And here this one is Carbon Re. Carbon Re is looking at a big problem in the cement industry: that cement is actually responsible for, I think,
[00:17:47] Lisbeth: 8% of global CO2 emissions. So they set out saying, we want to do something about this problem. And they have made an AI tool that makes the process of producing cement more energy efficient. They have this tool with a dashboard, where the humans get recommendations on how they can improve the production to be more energy efficient.
[00:18:14] Lisbeth: And if you look at their website, they say, oh, we are reducing this much and this much. But they're also talking about the annual savings that companies can have from using this tool. So they're talking about both the savings and the reductions at the same time.
[00:18:34] Lisbeth: So if we start looking at the problem solving impact, I think we are still in the symptom treatment department, because they're not going out there and actually trying to change how we use cement. Maybe we should not use that much cement and look at other materials. Maybe we should reuse more of the cement we already have out there.
[00:18:54] Lisbeth: There are other possibilities to change this if you want to go to the root cause. And if we look at the concerns here, this is a really good example of the rebound effect. If they make this more efficient, the companies will start to save money, because now they don't have to buy all that electricity to power the plants they are using for creating cement.
[00:19:18] Lisbeth: And that could lead to increased production, because now they have money to actually create new factories and buy new equipment. And that would lead to higher emissions rather than lower emissions. I don't know if that is what happened, but that is at least a concern that could be here.
[00:19:35] Lisbeth: Yeah. And it's difficult. What should you do about that? I don't have an answer for that. They also have the green growth concern here, in that they're talking about all the good things they're doing, and at the same time they're talking about all the money they save.
[00:19:52] Lisbeth: And again, it's difficult to say what you should do about this if that's your main purpose of the product. Of course you could avoid the narrative and not tell it in the same story, but again, it's difficult. And again, there are both human and environmental concerns here, but I just thought the structural ones were the most important.
[00:20:14] Lisbeth: Last example.
[00:20:41] Lisbeth: This example is about a problem in New Zealand. In New Zealand they have an indigenous language called Maori. After the colonization of New Zealand, the language is starting to disappear. They don't have a lot of native speakers of the Maori language left, and it's a big problem because people are starting to pronounce things differently than they did before the colonization of New Zealand.
[00:20:47] Lisbeth: And there is this radio station in New Zealand called Te Hiku Media. They saw this problem and said, can we do something about this? They had a giant archive of old broadcasts with Maori people talking, and they said, we will try to use this library of Maori content to create a language model that we can then use to get people to speak the language again and be better at pronunciation.
[00:21:15] Lisbeth: So they created a language model, and they created an API as well, so that other people could create apps for learning this language. Let's start by diving a bit into the concerns, because how they did this actually creates very few concerns about the product; they don't really have that many of the things here. Oh, sorry, I just forgot this one. They had a little concern here, and that is that the language model they built is hosted in a place where they have an NVIDIA chip. So they're using an NVIDIA chip here, and that is of course a concern because of the production of the chips, but there are actually no real other possibilities out there if you want to do language models.
[00:22:06] Lisbeth: Nothing else is good enough to do what they wanted to do. But what they did is that they, the indigenous people, have total ownership of the model. Nobody else owns this. They can decide who uses this API, and they decide who uses this language model. They have full control.
[00:22:29] Lisbeth: And also, this model is based on the Maori material that they have themselves. So they have control of the materials, and they have the rights to use that material as well. They haven't scraped it from somewhere; they have the intellectual property of that material. Also, they are using a very small model.
[00:22:50] Lisbeth: They are trying to make it as energy efficient as possible, and they're hosting it in New Zealand. And I think this is their setup. So a very small setup, not a big cloud data center anywhere, just this small little thing. And then, again, they chose not to focus on growth or profit.
[00:23:11] Lisbeth: They just focus on protecting their language. And they also avoided the big tech lock-in: because they're using their own model, they're not dependent on the big tech companies at all. And if we are talking about problem solving impact here, I wouldn't say that we are all the way in the area of addressing the root causes, but they are further up the ladder.
[00:23:41] Lisbeth: But I think that's also because they're doing something else than just this product; this project is part of something bigger that they do. They are actually doing a lot of other stuff. They had a problem with this model that they created: it wasn't protected by the laws in New Zealand, so the big tech companies could actually go and scrape all that content if they wanted, because the laws were not in place to protect them.
[00:24:09] Lisbeth: So they worked together with other organizations to create a new law that would protect the material. And they have also created a model for other indigenous people to take this thing that they have created and do it in their own communities and their own countries as well. So they're not just keeping their own little isolated idea here.
[00:24:33] Lisbeth: They are spreading it out to other people as well, so they can use it for good in their communities too.
[00:24:43] Lisbeth: Alright, so after seeing all these examples, it is, as you can see, difficult to say: are these solutions good? It definitely opens up discussions, and I think that is really important with this subject. Have those discussions. Talk about what the concerns are: if I have an idea for a product using AI, what could the concerns be here?
[00:25:10] Lisbeth: And what problem are we actually solving?
[00:25:13] Lisbeth: So if I should give some good advice: start looking at the problem. Start looking at whether it is symptom treatment or whether we are treating the root causes. And also ask: do we actually need AI for this solution at all? So that's also a question here. The other thing you should think about is acting responsibly.
[00:25:33] Lisbeth: And these principles here, they are the principles we have at 1508 that I presented at the last Morgenbooster. After doing this project, I think maybe we should add a third one that we will have to look into. Then, if your idea or product with AI is basically about growth and efficiency, I think it's a good idea to think twice, or at least be aware that you will meet a lot of critique out there.
[00:26:05] Lisbeth: Of course there are good examples showing that you can do it, but you need to be aware. And the fourth one is also something I heard a lot of people say when I talked to them: be transparent. Open up, talk to people about your concerns, and talk to people about your ideas and get their feedback. Get the discussions going, so you know what people will think about this.
[00:26:27] Lisbeth: Then you can also start to mitigate some of the risks that could come with using the AI here. And if you have a bigger problem, you can start to work together to solve the root cause of the problem.
[00:26:43] Lisbeth: And if you have a product already using AI that is more like symptom treatment than treating the root causes, I don't think you should see it as something bad. It could still be a really good example of AI for good, and it's okay to have those examples. But then you can also try to push other agendas elsewhere.
[00:27:10] Lisbeth: So try to support change in other ways, like changing the laws like Te Hiku Media did, and doing other stuff besides the product that you are doing.
[00:27:21] Lisbeth: And you have probably heard me say systems and structures and all those bigger problems a lot of times in this Morgenbooster. Because of that, and because we know it's really difficult to comprehend and there are a lot of things to talk about here, we're having a Morgenbooster in October as well, where we will dive into the more systemic problems of AI.
[00:27:43] Lisbeth: And then, because I talked to Casper, who was one of the persons that I interviewed, and I thought that he had a really good view on how you could work with AI, I invited him to come here and tell his story about what he's doing. So, over to you, Casper.
[00:28:03] Casper: Thanks.
[00:28:11] Casper: I haven't really seen all the slides, so please bear with me. Yeah, I'll go through this. I'm Kasper Bko, and I am the founder of No Objective, which is a not-for-profit based on basically turning minority insights into majority action. And what does that mean? It means that I take science and make it accessible to as many people as possible, but I do that through a systemic lens, which means that you cannot look at a problem in isolation.
[00:28:36] Casper: You have to look at it through the entire system it evolves in. The same thing with AI. So how do I use it, how do I work with it? By the way, I thought it was a super good presentation. My conclusion, and I'm sure many won't agree, is that you can't use AI for good in a for-profit company.
[00:28:59] Casper: And the reason is that you will always try to make it about productivity gains and efficiency, and it will always lead to increased planetary pressure. We know that's a fact; it's not something we can debate. The rebound effect and the Jevons paradox are there: as soon as we grow, we increase pressures on other boundaries.
[00:29:17] Casper: And a good example is the cement company; I'm from the construction industry originally. When you reduce emissions, that's one out of nine planetary boundaries. So yes, you might be able to reduce emissions on that boundary, but then, when you grow, you increase on all the other ones. So it's simply not a possibility.
[00:29:34] Casper: So when I started this company, I used AI for a lot of different things. But I have three questions that it has to answer. One: is it needed? If it's not needed, then I won't do it. Is it something that has value to humanity or the planet? And it could be many things, from graphic design to trying to interpret complex knowledge into actionable language.
[00:30:00] Casper: I can also use AI there. The second thing is: does it reduce demand anywhere in the system? It doesn't have to be directly, but it has to do it somewhere. And that's not always easy, but it's something you can at least think about, whether it's possible. And the third one is: do I take responsibility for the emissions, or the pressure, that I create when I use this thing?
[00:30:24] Casper: So I try to do different things. What I generally do is that I work for bigger companies or the European Union, but I also try to do a lot of different experiments to see what can happen when we use different technologies, and in this case it's AI. So I actually used an AI song app, and I said, okay, it's gotten so good that it can actually create really nice music.
[00:30:48] Casper: And that's of course great, but what is the root cause of the problems we're having in the climate and social movements? It is that we are lacking funds. We are always struggling to get funds, because we are always having to rely on bigger grants to actually do something.
[00:31:03] Casper: So I said, could we use AI in a way where we set up this music streaming band, and whenever it streams, all the profit goes directly to social and climate movements? So that's what I tried to do. I tried to create songs where the revenue is now a hundred percent going to fund these movements.
[00:31:20] Casper: So instead, I'm paying to have the AI tool and to put it through my company, but everything else goes directly to other causes. And I know that I'm not a musician. I'm actually tone deaf, so I'm the worst person to do this. But the point is that maybe someone else will be able to break the barrier.
[00:31:35] Casper: Maybe someone else will be able to do this and use AI as a way to get through, to penetrate, to actually get the money flowing to the right places. So that's why I try to address the root cause, that we are missing the financing. How can we use this? So that's what I did, and then I released an EP with six songs, and I'm going to continuously try to release
[00:31:57] Casper: more. Even though at some point it would be better if someone else takes over, let me say it like that. Yeah. And then I did singles, and I'm trying to promote them in many different ways to get them out there. And I don't know, is this going to play?
[00:32:17] Lisbeth: yeah, I think so.
[00:32:18] Audience: Okay. And then I'm
[00:32:24] Person in video: screaming.
[00:32:25] Audience: But the point is that it sounds like normal music, even if it's not great.
[00:32:31] Audience: But the point is that I'm trying to promote it and make it look like something you've seen before. And everything is actually made with AI, also the images. But every image that you see here in this music video is of things
[00:32:47] Audience: that you appreciate, that are not about consumption. So I gave myself that sort of test: what do I find attractive in life that is not about consuming? All of these things are moments, snapshots of something we all maybe remember from being young or being elderly, these moments where we are together with people.
[00:33:05] Audience: So that's the point of this: trying to say, can we create this cultural warfare, using the algorithms to fund, basically, climate and social movements? Yeah. And then there are a lot of different takes on this, and different songs.
[00:33:21] Audience: Please go check it out, and if you think it's cool, please share it, because the more clicks it gets, the more funds we get for the climate movements.
[00:33:28] Audience: And please do it yourself, 'cause I'm sure you will make better songs than me. That's it.
[00:33:33] Lisbeth: Yeah, that's it. Thank you very much.
[00:33:42] Lisbeth: thank you Casper.
[00:33:51] Lisbeth: And we also have some time for questions, because I think this is a subject that demands that we open up for a discussion as well. So I'm really curious about your view on AI for good, and do you have any questions about the things we've talked about here?
[00:34:03] Lisbeth: Yeah, thank you.
[00:34:06] Audience: That's very interesting.
[00:34:10] Audience: Which app did you use for the music generation? Yeah, it's actually just one. It's called Suno, S-U-N-O. The only thing you really have to do yourself is create the lyrics, because it will do an awful job at that, and then it will give you different options.
[00:34:27] Audience: So you type in a prompt for what sort of genre you want, and it takes a bit of tweaking to get there, but it's actually quite nice.
[00:34:35] Audience: It's a very weird thing, because now in my home we listen to our own music, also my kids, because it's a good way for kids too to learn to appreciate music in a different way.
[00:34:45] Audience: When you've created it yourself, you have a sense of pride in it. So it's nice. But of course we try really not to overuse it, and only do it as a collective family, 'cause you could also end up using it a lot, and then it just becomes more emissions. So use it with a purpose. But it's quite cool.
[00:35:04] Audience: For someone who's tone deaf, you did a really good job. Thank you. But the app did a very good job, to be honest.
[00:35:13] Person in video: Yeah,
[00:35:14] Audience: Coming from a company that is putting a lot of pressure on us as designers to use AI and put AI into all of our workflows and stuff like that:
[00:35:25] Audience: do you have any suggestions for how we can bring these concerns to our higher-ups in the company? Is there a way we can try to say that it's not just about using it because it's the sexy topic right now, but that these are real concerns?
[00:35:44] Lisbeth: Yeah, I don't have the golden answer here. I think it's difficult. But at least you need to start discussing it with them, and start pointing out to them that there are these concerns. But I don't have the golden rules of what to do.
[00:36:03] Lisbeth: No, sorry.
[00:36:05] Audience: I mean, one thing that I tried to do, also when I worked at the research institute: we went to management saying, if you're using AI, use it to increase quality, not quantity. Because what we were seeing was that all of a sudden the company was selling more and more services, and it got seedier and seedier, because the results had to keep up with that.
[00:36:28] Audience: Instead, AI can actually be good at helping you enhance the quality of your design, if you use it in that way. But if you use it in a way where you just need to be more efficient, then that's the worst part. So have that discussion openly. I lost that discussion, and then I started my own company. That's how it is.
[00:36:45] Audience: But yeah, of course that's the risk.
[00:36:52] Lisbeth: Yeah. Any opinions or perspectives on this? Yeah?
[00:36:59] Audience: We have those discussions too. It's a big question, because our company also wants to push us to use AI more and more, especially to compete with the other companies out there. I was wondering if you ever foresee a counter-wave: using the fact of not using AI to promote authenticity,
[00:37:23] Audience: to just say: we are not using AI, because of the different impacts it can have, human and environmental. Is that something you have thought about, or would maybe think more about, since you're working with AI?
[00:37:42] Audience: I definitely think there will be a counter-wave, but I don't think it'll be against AI.
[00:37:47] Audience: I think it will be about how it's used, 'cause I think you can be authentic using AI. I do really think there's gonna be a time where people will see through the same sort of graphics made by AI; you can almost predict them by now. At least I've seen so many of them that we get blind to it.
[00:38:05] Audience: So I think part of it will be about being authentic, but mostly I think it's gonna be about being open and transparent as a company about how you use it, and then saying: we don't want to do more, we want better quality, and that's what you get with us. I hope there will be a counter-movement, to be honest, but I don't think there will, because for me AI becomes more
[00:38:26] Audience: general.
[00:38:27] Audience: You get the same result because, yeah, you know, the
[00:38:30] Audience: data is coming from the same sort of pool of data. So that's why I was asking about authenticity rather than becoming general by using AI, 'cause we might also end up all looking the same, all doing the same.
[00:38:46] Audience: Yeah. But I think it's a valid point that homogenization is a really big factor if you use AI without a critical lens.
[00:38:57] Audience: And I think we need to digitally educate ourselves in how to use it. Because if you are just writing a prompt, you will get the safe answer; you have to learn to prompt to ask for the nuances, and then you have to do your own research.
[00:39:12] Audience: It's not like you just take it and hope for the best about seeing the deeper layers. But if you use it intentionally in that way, then I think you can get a much more authentic version, because you start with a structure, you get some insights, and then you have to formulate your own ideas about it.
[00:39:29] Audience: But when we look at it through an efficiency or productivity lens, you don't have the time to do those reflections, and then it becomes incredibly unauthentic, and we homogenize everything. So I think it's about having the time to actually engage properly if you use the tool. And that's not what business leaders like to hear, because it's the opposite of efficiency.
[00:39:48] Audience: Yeah, we don't have time. Exactly.
[00:39:50] Lisbeth: Exactly. Yeah. Was there a question over here?
[00:39:53] Person in video: I was wondering if you have any good ideas for facilitating talks about this, where it doesn't end up becoming black and white, because that's the framing I see right now. It's a very gray area that we are all still trying to figure out, right?
[00:40:08] Person in video: Yeah. But what I see, especially in industries like the one I'm in, where it can impact people's jobs and workflows, is that you are either fully against it or fully for it. So are there any sort of rooms or tools to facilitate good talks, that soften it up a bit more?
[00:40:27] Lisbeth: Yeah, it's definitely very divided. That's also what I hear when I go out there, and it would be lovely to create a bridge so that you can actually discuss it.
[00:40:36] Lisbeth: And I don't know if you need tools like this one, the triangle or something, where you can talk about the concerns. But it would be lovely to have some kind of conversation tool to bring into these discussions.
[00:40:49] Lisbeth: I don't know if you have any good ideas?
[00:40:51] Audience: No, I think it's a good idea to try to structure the conversation around a systemic, holistic issue, as we also tried to do here. I think that's important. One thing that is missing, also generally when we talk about it, is: how does it affect people's jobs? How do we use it so that it doesn't mean everyone will be out of a job? And I think the failure in this whole debate is the idea that if AI takes your job, it literally means your job wasn't actually that important. So how can we ensure that AI makes your job important
[00:41:30] Audience: again? I think that's the sort of narrative we have to build around it. But I think having a systemic lens, as you did, is super important.
[00:41:37] Audience: Yeah. Yeah. to a certain degree.
[00:41:41] Audience: You both already answered the question that I'm gonna ask, but I'm still curious: on a personal level, not a professional one, do you believe that people know how to use AI and how to work with it in a way that is well informed, where they know exactly what they need to do and how to do it
[00:42:04] Audience: in a way that is convenient for everybody?
[00:42:06] Audience: There are no guidelines. It's all new.
[00:42:10] Audience: We are still all learning.
[00:42:11] Person in video: Yeah. Yeah. So
[00:42:14] Audience: So do you think we are doing it or not? And if we are going to do it, what, from your personal perspective, would be a good way to do it?
[00:42:24] Lisbeth: Yeah,
[00:42:25] Audience: I'm very curious. Yeah.
[00:42:27] Lisbeth: I think there is definitely room for improvement, and for more education around the concerns and the critiques. I think we definitely need to do that. Yeah,
[00:42:44] Audience: I definitely don't think we are doing it. I think 99% of everyone using AI is using it incorrectly and fails to understand what it's capable of. And I think it's a little bit the same as with social media.
[00:42:55] Audience: We really quickly need a digital education in how to use these things. And I see that as the big issue, because our system doesn't allow for it: it requires time, it requires friction. And, to be honest, there are a lot of good practices we could take directly from the journalism industry and put directly into how we use AI.
[00:43:16] Audience: And that's what I try to do when I use it, but it just makes it not efficient anymore. And that's the difference. So I really hope, and I think, everyone has a responsibility to try to engage with it in a responsible way. But I also know that there's pressure from outside not to use it in that way.
[00:43:34] Audience: So I really hope, in these forums and when you go out there, that we can have that conversation about how to use it more meaningfully, and with a sense of understanding that this matters. It actually matters how we use it, because it trains the model, and the other models. So how you use it, yeah, it's a snowballing effect, right?
[00:43:54] Audience: Yeah. So that's the thing.
[00:43:55] Audience: There's also this global sentiment that we are running late. Yes. Yeah. Which is a lie; it's just getting started. Yeah. And I hear people saying, I already missed the
[00:44:07] Person in video: train for jumping into the AI field. And I'm like, yeah,
[00:44:10] Audience: it just arrived. Yeah. And there's this sentiment of sadness, that something
[00:44:15] Audience: has already been lost, when everything is there waiting for us to actually do it properly
[00:44:19] Audience: and with ethical values.
[00:44:20] Audience: Right.
[00:44:23] Audience: Yeah. But I think that speaks a lot to, again, this acceleration society, where we feel like we have to keep up all the time. It's a new software, it's a new thing, and at some point we're just like: there's no more room, I can't do more stuff.
[00:44:36] Person in video: You break.
[00:44:36] Audience: Yeah, you break. And I think the thing we need to remember is that without rest, and without taking care of ourselves first,
[00:44:44] Audience: it doesn't matter what tools you use. That's the first priority.
[00:44:48] Lisbeth: But it's just very difficult when you have these big tech companies fighting to get to the AGI stage. So it's difficult. We'll wrap up soon.
[00:45:00] Lisbeth: Yeah. Lastly,
[00:45:00] Audience: So what I'm hearing from you is that the relationship maybe is to have it as a companion. Yes.
[00:45:08] Audience: As a PA, or someone, let's call it someone, yeah, that can support us,
[00:45:15] Audience: instead of it being the vehicle for us to do everything. Yeah. But at the same time creating a certain distance, because our physical and mental health, and the way we deal with information, yes,
[00:45:24] Lisbeth: have changed. Ah, yes. So we need to
[00:45:26] Audience: adapt.
[00:45:27] Audience: Am I hearing that correctly?
[00:45:28] Audience: A hundred percent correct, yes. And I think especially the last part is really important, because when you start to use AI, even though there's no one pushing you (I run my own not-for-profit), I can still get caught up in thinking: I should do this, because I can do all these amazing things.
[00:45:45] Audience: Now I should create this. And I have to remind myself: no, you absolutely shouldn't. You have to take care of yourself and stop. So I think it's about setting limits, basically.
[00:45:55] Audience: 'Cause it creates a certain fake awareness of self, yes, in all of this. Exactly. When in fact, you can't. Exactly.
[00:46:04] Audience: You can't do any of it. Yeah.
[00:46:06] Lisbeth: Thank you. All right. Do you have a question over here?
[00:46:11] Audience: Yeah, of course it's very relevant to ask these ethical questions about technology. What I was wondering is this: I've been into computers since the eighties, before it was a thing in society.
[00:46:26] Audience: Now it's everything. And with every revolution, the computer, the internet, the smartphone, social media, there was always a kind of optimism, and the technology was embraced, even way too much, especially with social media and kids. And there were no critiques and no kind of self-reflection from the users or parents. And then suddenly comes AI, which is just another development of technology, of software,
[00:46:57] Audience: and suddenly there's a kind of skepticism around it. And I was just wondering: why does that happen now? Are we just a small blip of "hey, what about...", and then the development will just move on and keep accelerating?
[00:47:17] Lisbeth: I see it as partly because of the speed of AI right now.
[00:47:20] Lisbeth: Other technologies were of course quick too, but this is really exponential right now, just happening all over, in all places, and affecting everybody. And I think that's why the critique is also that big. But there could of course be other reasons too.
[00:47:37] Audience: the
[00:47:37] Person in video: incident affected everyone as well.
[00:47:39] Lisbeth: Yeah, of course. But this is also affecting the way we live, the way we are working, and our work life. And of course the internet did that as well, but I think this is even bigger.
[00:47:52] Audience: I also think that we are now at a stage where most of us have reconciled ourselves to the fact that social media was probably not the best idea to scale in that way.
[00:48:01] Audience: We all have. Yes. And I think everyone knows how poorly it affects us. And I think that is the lesson we have in the back of our minds, saying: okay, what is happening now? And I think that's a very valid concern for all of us to have. We just keep saying it's progress, but we fail to reflect on what progress actually means.
[00:48:20] Audience: It doesn't mean that we are running fast. It doesn't mean that we can be more alone, and all of those things, and outsource our work. I'm not sure. I think we need to take a step back, and I think that's what critiques of AI are trying to say: this is what we need now.
[00:48:36] Audience: Yes. And I just saw a chart of the top AI use cases, from Harvard,
[00:48:43] Audience: yes, from '24 and '25. The top AI use cases in '24 were basically to learn about new things, to explore ideas, or to get better ideas. So basically it was about learning. And in the chart for 2025, it's basically about looking for comfort; people are using it as therapy,
[00:49:08] Audience: yes, like: how do I organize my life, how do I find purpose? So there's basically been a shift: before, it was about learning, and now it's "my friend". And of course, with AI, the replies or answers you get are based on what you would like to hear.
[00:49:30] Audience: So how do you understand this? Do you think it's a mirror, that we just see what we would like to see? What are your thoughts about it, and the consequences of it? Yeah,
[00:49:50] Lisbeth: there are definitely a lot of concerns around that: that we will each get our own AI buddy that we can discuss things with, and be in our own bubble, and not talk to humans and get perspectives from others as well.
[00:50:03] Lisbeth: I guess that also happened with the internet and Google searches, where we each get our own list of search results. So that already happened, but this is even more so, if we just have one person, or AI, that we talk to all the time, that always tells us: yeah, this looks really good, and this was a good question.
[00:50:20] Lisbeth: So it's always talking to us in a way that we like, and then we don't get the pushback that we usually get from people.
[00:50:26] Audience: Yeah.
[00:50:28] Audience: And it's actually more concerning than that, because there's something called social atrophy. It's a neuroscience term, but it basically means that we used to see each other and be together a lot more; we had third places.
[00:50:37] Audience: We went to, say, soccer; we had all these places where we met each other. And now we live in a world where we are slowly becoming more and more isolated, and our connections are now on our phones. And I think the even more worrying trend, as you're saying, is that now, instead of talking to our friends, we're using an AI to try to solve that.
[00:50:56] Audience: And social atrophy is what happens then: the more you self-isolate, the more your brain shrinks, and you get worse and worse at social cues. And that means that the trust among us actually erodes, because we will take a sentence that you would've said 20 years ago, and I'll start to feel: oh, does she mean something wrong by this?
[00:51:15] Audience: Because I don't understand social cues anymore. And that means it gets very hard. We see this very much in the climate movements, or movements generally: people don't want to do anything, because they don't have that sense of sociality that we used to have. So it's getting harder and harder to mobilize people, because, basically, their brains have been shrinking.
[00:51:35] Audience: And that's what is very worrying about AI: when that becomes the new norm, you can live with even fewer people around you. So I think that's a really worrying tendency, to be honest. But the good thing is that we can actually do something about it. If we are aware of this, we can try to structure our lives
[00:51:55] Audience: around increasing social interaction, even though it can be hard, and we're stressed. As my colleague said the other day: when the hell do I have time to actually spend time with other people, with kids and work? But we need to do it. In the same way as we need to keep training our bodies, we need to keep training ourselves to be social; otherwise we end up in that space.
[00:52:14] Audience: So I think that's quite a good thing to remember. Yes.
[00:52:22] Audience: Yeah, that ties into it. I was thinking of Be My Eyes, because in the beginning the app was human volunteers helping people with visual impairments.
[00:52:31] Audience: So if they stripped that away from the product, wasn't that the unique selling point of Be My Eyes?
[00:52:36] Lisbeth: But it's actually still there.
[00:52:37] Lisbeth: They still have the possibility to go to a human instead, so they didn't strip it out, just to be clear. But yeah, maybe they could have more fallbacks to a human, for example if it's not sure about things, or if it detects a critical situation, like you're at a metro station and you need to go in the right direction; then get a human in the loop instead.
[00:53:00] Audience: Yeah, because I'm thinking maybe that becomes a currency, this human purpose. Maybe that's one of the measurements for when you build an AI: does it provide purpose to humans?
[00:53:13] Audience: Yeah, I think that's a really good point, because a very important thing to remember is that our actions shape our values, not the other way around.
[00:53:21] Audience: So when you actually help someone else, you self-identify as someone who helps, so you're much more likely to help other people again. When you strip away that possibility, you end up with fewer people volunteering and doing those things. So it's about: can AI engage people the other way around?
[00:53:39] Audience: Can you ensure that more people meet through AI? Can you ensure that more people help through AI? That's quite a good question, instead of asking whether it takes away the human connection. I think that would be another way to look at it.
[00:53:52] Lisbeth: Yes, we have time for one more question. Yes?
[00:53:56] Audience: Maybe more of a comment. It's been a great talk.
[00:53:57] Audience: There's been a lot of pessimism, but I wanted to inject a note of optimism. In 1997, Garry Kasparov was beaten at chess by Deep Blue, a chess computer. And in the mid-2010s, deep reinforcement learning algorithms started beating the best League of Legends players.
[00:54:17] Audience: And yet YouTube chess channels are full of humans playing, and YouTube and Twitch are full of humans watching other humans play these games. So I don't have the answer, but I still see that humans are thriving on creativity from other humans, and enjoying it.
[00:54:36] Audience: So
[00:54:36] Audience: yeah, that's probably a positive. It is a good point. Very good point.
[00:54:40] Lisbeth: Good point. Yeah. We just have time for one more question, if we have one. Yes?
[00:54:48] Audience: Who has the copyright to your music?
[00:54:49] Audience: I do. I own the music. With this app, if you pay for it, you own your music.
[00:54:58] Audience: That's something I tested before releasing it on Spotify. But yeah, I own the rights to the music. All right.
[00:55:06] Lisbeth: Thank you very much for all your questions, and for coming here.