morgenbooster
Navigating AI Criticism with Systems Thinking
1403 København K
This session is for anyone working with or around AI, whether you’re influencing policy, shaping strategy, designing products, or making organisational decisions. If you’re interested in understanding how AI influences power dynamics, social structures, and democratic integrity, this is for you.
In our Morgenbooster in June, we explored how AI can be a force for good. This time, we’re flipping the lens.
AI is already transforming our systems but not always for the better. In this session, transition designers Lisbeth Christiansen and Kristian Ohm will take us into the less visible, often uncomfortable truths about AI’s role in reinforcing inequality, deepening surveillance, and accelerating extractive logics across sectors.
This isn’t about dystopia. It’s about awareness, responsibility, and intended insight.
Together, we’ll examine:
How AI is embedded in and shaped by broader systemic paradigms and what that means for sustainability, equity, and human agency.
The risks of uncritical implementation, and how unintended consequences show up across public and private sectors.
Where the real leverage lies: how professionals across disciplines can help shape AI's trajectory from within the system.
You’ll leave with sharper systemic awareness and actionable pathways to challenge and shift AI’s role in society, from compliance to critical intervention.
[00:00:03–00:00:38]
All right, morning everyone. Morning. Please grab your coffees and find a seat. I think there are still some left. Thank you so much for being here. The weather could be much worse. I promise you're in for a treat, but let's address the elephant in the room: there is no water on these premises, so we can't use the bathroom and we can't make more coffee. Hopefully you've gotten what you need. It should be back when we're done here at 10:00, so stay put and don't stress about drinking too much coffee. We'll figure it all out. So anyway, today's theme is
[00:00:38–00:01:10]
navigating AI criticism with systems thinking, and it's sort of a spiritual successor to a previous Morgenbooster held by Lisbeth just before summer. So if you saw or attended that, maybe you can piggyback on some of those experiences from back then. But other than that, I hope that you are in for a small ride together with us. Thank you. So yeah, Lisbeth, do you want to start us off? I will start. Let's see if I can get it. And I just wanted to say
[00:01:10–00:01:45]
before we start that we have Kristian here and that was me. Yeah, and I'm Lisbeth and we are transition designers here at 1508. And to get started I actually thought it would be a nice idea to start a long time ago, far far away in a galaxy. So the movie Star Wars, you probably know this movie, some of you have seen it. You have a hero, Luke Skywalker, you have the villain, Darth Vader, and Obi-Wan Kenobi that helps Luke Skywalker. And now you're thinking like,
[00:01:45–00:02:22]
what does this have to do with AI criticism? We'll get back to that later, because as Kristian said, the theme of today is AI and systems thinking, and we thought it would be a nice thing to try to put this systems thinking lens on the AI criticism that we hear out there, because there's a lot of different AI criticism in the media and everywhere today. So as Kristian said, in spring I did this investigation on AI and AI for good. What is AI for good? So I
[00:02:22–00:02:54]
investigated, read a lot of different articles and books, went to a conference, heard podcasts, and I also interviewed a lot of different people about what they think AI for good is, what their concerns are, and what their hopes are for AI. And from that I learned that there are a lot of concerns about AI. A lot of different concerns, but also a lot that many people have in common. So there are some of the same concerns that
[00:02:54–00:03:29]
many people talk about, and those are some of the things that we will touch upon today. We have selected four of them that we will go through one by one and try to put on this systemic lens. And the first one is one of the big ones: the environmental impact of using AI. So you know, the power usage of AI, the water consumption for cooling down the data centers, the use of rare metals for the hardware. There are a lot of different
[00:03:29–00:03:53]
stuff here in this AI criticism. The second one is the race of the tech companies that we also see. So when I open up my LinkedIn feed, almost every day I see one of the tech companies saying, now we have a new model, it can do this and this, and then the next day it's another company that says, now we have a new model, it can do this and this and that.
[00:03:53–00:04:23]
And the speed of that also makes them forget about the risks of actually using AI. So that's the point of criticism here. And the third one is the loss of critical sense when using AI. A lot of people are afraid that if we use AI, we start to lose our ability to think for ourselves and to ask the critical questions when AI gives us something.
[00:04:23–00:04:58]
And then the fourth one is that there are biases in the AI models because of the material they are trained on. So they are trained towards a more Western mindset than a non-Western one. All right. To go through these I will use some of these causal loop diagrams, and just to explain how we use them here at 1508: we have some variables, those are the black boxes here. So here the variables are the level of job stress
[00:04:58–00:05:29]
and the use of coping strategies, and these can go up and they can go down. Then we have the arrows between those variables, and those arrows have a cause-and-effect notation, a plus or a minus. If there's a plus between them, it says: if the level of job stress is high, then the use of coping strategies is high as well. If there's a minus, it changes direction. So here, for example, if you have a high level of job stress, then you use coping strategies
[00:05:29–00:06:03]
to fix it, and then the level of job stress will get lower. Then we also have the B in the middle, and that tells us whether this is a balancing loop, a loop that counteracts change and keeps things in check, or whether it's a reinforcing process, something that just gets more and more and more, or lower and lower and lower. And then sometimes we also use this little notation here to say that there is a delay, that it takes some time before this happens.
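To make that notation concrete, here is a minimal sketch in Python; it is not from the talk, just an illustration of the standard counting rule: a closed loop with an odd number of minus links is balancing, while an even number (including zero) makes it reinforcing.

```python
# Minimal sketch (illustration only, not from the talk) of the causal-loop notation:
# each link is marked "+" (variables move in the same direction) or "-" (opposite
# directions). Standard rule: an odd number of "-" links makes the loop
# balancing (B); an even number makes it reinforcing (R).

def classify_loop(links):
    """links: list of (from_var, to_var, sign) tuples forming a closed loop."""
    minus_count = sum(1 for _, _, sign in links if sign == "-")
    return "balancing (B)" if minus_count % 2 == 1 else "reinforcing (R)"

job_stress_loop = [
    ("level of job stress", "use of coping strategies", "+"),  # more stress -> more coping
    ("use of coping strategies", "level of job stress", "-"),  # more coping -> less stress
]
print(classify_loop(job_stress_loop))  # -> balancing (B): the loop keeps job stress in check
```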
[00:06:04–00:06:31]
All right. And what I will show you is not rocket science at all. It's just a way for us to map out the dynamics of the critique that we hear and the processes surrounding it. Let's start with the environmental impact. And I brought this example here. These are images, or rather renderings, of Office Parken in Høje Tåstrup.
[00:06:31–00:07:03]
And that's where they're building a large data center. Microsoft is building it; that's why it has the name it has. And what Microsoft says is that this data center is the greenest data center, or one of the greenest data centers, in the world. That's because it runs on 100% renewable energy, they are reducing the water consumption for cooling, and the households surrounding
[00:07:03–00:07:39]
Høje Tåstrup can use the excess heat. And a lot of things have been made more efficient in this new center, so it's a lot greener than many of the other data centers that you see out there. And then you might think, that's a good thing, right? They use less energy, less electricity, to give us these AI services that we need. But is it like that? Let's try to map it up. So, very simply, we have the climate impact of AI
[00:07:39–00:08:15]
and that's one of the variables. The other variable could be the efficiency of AI resource use. So if we make a data center like that one, then we have a plus: more efficiency of resources, and that will make the climate impact lower. So that's pretty simple, not rocket science, and that's a balancing loop. So that could keep the climate impact in check. But what happens, and what the critique is about, is that it's not just
[00:08:15–00:08:45]
this. There is a side effect, and that is that when something gets more efficient, when we can do something with less electricity, then it also becomes cheaper to use the AI, and also cheaper in the minds of people, saying: now it's greener, now it's more okay for me to actually use this. So after some time people start to use AI more, and that means that in total
[00:08:46–00:09:16]
the climate impact will actually be bigger than it was before, because people are starting to use it even more. So over time it will look something like this. If the climate impact is on the vertical axis and time is to the right, then in the beginning of course you see the climate impact go up; then we do some efficiency fixes and we can see an effect. There is an effect, it will actually lower the climate impact, but over some time it will rise again, and probably even more than it was before.
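A toy simulation can show the shape of that curve. The numbers below are made up for illustration (an assumed 2% baseline demand growth that jumps to 8% once the fix makes usage feel cheaper and greener); the point is only the qualitative rebound: impact drops when the fix lands, then climbs past where it started.

```python
# Toy illustration of the rebound effect (made-up numbers, not data from the talk):
# an efficiency fix halves the impact per use, but cheaper/"greener" use is assumed
# to make demand grow faster, so total climate impact eventually exceeds the baseline.

usage = 100.0           # arbitrary units of AI usage
impact_per_use = 1.0    # climate impact per unit of usage
growth_rate = 0.02      # baseline demand growth per time step

for t in range(31):
    if t == 10:                 # the efficiency fix arrives
        impact_per_use *= 0.5   # each use is now twice as efficient
        growth_rate = 0.08      # assumption: cheaper use spurs extra demand
    total_impact = usage * impact_per_use
    if t in (0, 9, 10, 20, 30):
        print(f"t={t:2d}  usage={usage:6.1f}  total impact={total_impact:6.1f}")
    usage *= 1 + growth_rate
```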
[00:09:16–00:09:48]
And so we will probably do a new fix, because that one was not working, and then a new one and a new one and a new one. But over time it will actually just rise. This is also what people call the rebound effect. We will come back to this one later, but let's move on to the next one: the AI race of the tech companies. And I found this nice graph. It's actually only depicting a period of, I think, one and a half years, where you can see how three AI
[00:09:48–00:10:22]
models are competing to have the best benchmark score. You can see the benchmark score up here and the time over here, and it's very easy to see that they are fighting to be on top. So first ChatGPT has the leading score, and then I think it's Claude in first place, and then it's ChatGPT again, and then Gemini, and then ChatGPT, and then Claude again. So they're fighting to be on top all the time, and that's part of that AI race. So how would that
[00:10:22–00:10:56]
look if you map it up? You could start by saying that OpenAI's AI leadership relative to Google would start out higher, because they were the first with a model that got that kind of success. So what did Google think about that? They felt threatened, and when they felt threatened they decided: we also need to invest in AI. And when they invested in AI, they also got a model with better capabilities. And that made OpenAI's leadership relative to
[00:10:56–00:11:30]
Google lower, and now Google actually got ahead. So what happened then? Then OpenAI was afraid of falling behind, so now they invested more in AI and made their models even better so they could get the leading score again. And then what happens? Then Google was afraid of falling behind, and it just goes around and around, and it could also be Claude or other companies here instead of Google. So it's just to say that there is this dynamic of two balancing loops that keep reacting to each other, around and around in this race.
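A tiny sketch can make that escalation loop concrete. The scores and increments below are invented; the point is just that when each side reacts to being behind, both keep climbing with no natural stopping point.

```python
# Tiny illustration of the Escalation archetype (invented numbers): whichever side
# is behind on the benchmark invests and leapfrogs the other, so both scores keep
# climbing even though each move is only a reaction to the competitor.

openai, google = 60.0, 50.0   # hypothetical benchmark scores
for step in range(8):
    if openai <= google:
        openai = google + 5   # OpenAI, afraid of falling behind, invests and leapfrogs
        mover = "OpenAI"
    else:
        google = openai + 5   # Google, afraid of falling behind, invests and leapfrogs
        mover = "Google"
    print(f"step {step}: {mover} leapfrogs -> OpenAI={openai:.0f}, Google={google:.0f}")
```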
[00:11:30–00:12:03]
We'll also come back to this one. Next, we have bias in the AI models. I brought a recent study called Which Humans?, where the researchers got a lot of data from different large language models, for example ChatGPT, and compared the responses from those LLMs to some large world surveys where people answer
[00:12:03–00:12:37]
questions about how they live, how they feel, and how they think, to see how the LLMs behave and answer compared to different cultures around the world. And what they noticed was that the GPTs all fell in the category of the WEIRD countries: the Western, Educated, Industrialized, Rich and Democratic countries. So what they're saying with this is that when you say that LLMs are humanlike and talk like humans, it's
[00:12:37–00:13:08]
actually not true. They talk like people from WEIRD countries and not like people from other cultures. And that really feeds into that talk about bias in the models that we see today. So if we try to map this up, you could say that it started out with there being a bias towards the Western context, and that happened because when they created the first models they scraped the internet, and the internet is full of content from Western countries. So it was
[00:13:08–00:13:39]
in English and it was in a Western context. So the models were trained on this from the beginning, and they were launched in Western countries first and got a lot of feedback from Western countries. People from other countries could of course also use them, but they had to write in English, because the models couldn't really speak other languages, or at least were not very good at them in the beginning. So it got a lot of use and feedback from Western users
[00:13:39–00:14:18]
and then it performed better in the Western countries because it was trained on Western material, and then the bias just got worse. So this was a reinforcing loop; the bias could just get bigger and bigger. But at the same time there was this other loop over here, where you can see what happens for the non-Western contexts. When we have a bias towards the Western context, then the use and feedback from non-Western users gets less, and then the performance in non-Western contexts will also
[00:14:18–00:14:55]
get worse, and then the bias just gets worse. So here we have a loop saying that the bias will just get worse and worse and worse. And then you can say: is it a problem that we have a model in the West that we can use in the West? Maybe they can just create their own models in the other countries, right? They can create their own stuff. But as you saw, the WEIRD countries are also the countries that have the resources to create these models, and people talk a lot about the digital inequity and inequality
[00:14:55–00:15:29]
around the world. So if we have models in the West that can do a lot of stuff, but the rest of the world can't afford them, then we in the West get ahead digitally. All right, let's go to the last one: loss of critical sense. Here we had this study, you may have heard of it, it was in a lot of media when it was published this summer. That was because they published the study without having it peer-reviewed, because they thought
[00:15:29–00:16:01]
the results were so important to get out there that they didn't have time to wait for peer review. So it was sent out, and the study was about a group of students who were going to write an essay, and the researchers divided them into three groups. One group had the help of LLMs like ChatGPT or something similar. The middle group had Google search or something similar available. And then the last
[00:16:01–00:16:27]
group could only use their brains, no helping tools. And then, while they wrote this essay, the researchers looked at brain scans, at how much brain activity there was while they did it. They also did surveys and things like that. But what they saw was that the people who used the LLMs had much lower brain activity than the ones who just used their brains.
[00:16:27–00:16:54]
But that was not enough. They also got the people to come back after using these tools for some time, to do the same thing again, and this time they couldn't use the helping tools. So the people who were used to using LLMs and search now had to use their brains. And then they measured their brain activity again, and they saw that the people who were used to using LLMs still had very low brain activity, even though they didn't have the help of the LLM.
[00:16:54–00:17:25]
And so the scientists behind this were really afraid that the LLMs are influencing us and making us lose our ability to think for ourselves, and they thought: we have to do something about it. There has been a lot of critique of how they did this, and also of the fact that they used brain scans, because is that really the right way to measure this, or is it just to make it look really convincing? But it fuels the critique, and a lot of people raise this critique when they talk about AI. All right, I'll
[00:17:25–00:17:59]
also just really quickly map up this one. It's a kind of simple dynamic, because when we are going to solve a task, we think about how hard this task is to solve. Maybe it feels a bit hard, but then the logical thing to do would be to practice. So you can practice the task and, after some time, get better at doing it. That would be the logical step. But then, when we get the LLMs, we have a quick and helpful way to solve the task.
[00:17:59–00:18:31]
So now it feels a lot easier. And of course we could just go around with these two things side by side, and sometimes we practice and sometimes we use the AI assistance. But what people are criticizing is that we get reliant on the AI for doing the tasks, because we all know that when we have this blank page and we need to write something, and we have ChatGPT over here, it's just so much easier to write: can I get a few points, can I be inspired a bit? And
[00:18:31–00:18:57]
we all know that feeling, that it's just easier to go there and have the help of the AI. So we get reliant on using it for our tasks, and when we do that, our motivation for actually practicing and getting better at the tasks ourselves goes down. So we lose motivation, and then we get stuck in the first loop up there in B1 and become totally reliant on the AI tools, not able to think for ourselves down here, to learn how to think for ourselves, or to criticize what the AIs are telling us.
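A rough sketch of that loop, with made-up numbers: as reliance on the quick AI fix grows, practice falls away, skill erodes, and the tool looks even more attractive next time around.

```python
# Rough illustration of the reliance loop (made-up numbers, not from the study):
# leaning on the quick AI fix crowds out practice, so our own skill erodes, which
# widens the gap between "doing it myself" and "asking the tool", which in turn
# increases reliance. Skill holds up at first, then slides once reliance tips over.

skill = 0.7      # our own ability to solve the task (0..1)
reliance = 0.3   # share of tasks we hand straight to the AI tool (0..1)

for week in range(10):
    skill = max(0.0, skill + 0.05 * (1 - reliance) - 0.08 * reliance)  # practice builds, disuse erodes
    gap = max(0.0, 1.0 - skill)                  # how much easier the tool feels than our own skill
    reliance = min(1.0, reliance + 0.2 * gap)    # a bigger gap pushes us further toward the tool
    print(f"week {week}: skill={skill:.2f}, reliance on AI={reliance:.2f}")
```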
[00:18:57–00:19:30]
All right. So now I have mapped up four different loops here. And now I've switched it out, I don't know if you saw that, but what you see here is that I changed all the variables, and that's because what I have mapped up here are actually system archetypes.
[00:19:30–00:20:05]
And system archetypes, you can compare them a bit to archetypes from movies. And now we get back to Star Wars, because in Star Wars we have the hero, we have the villain, we have the wise mentor, we have a heroine, all these characters that we see in movies over and over again. And the filmmakers don't always consciously decide that they need these types of characters in their movies; they just put them in, because that's just how we do movies. And it's kind of the same with the system archetypes. It's just not characters here, it's system patterns
[00:20:05–00:20:39]
that just repeat themselves over and over again in different contexts. So the first one, the AI climate impact, is one we call Fixes that Fail, where you apply a fix to something, but then there is a side effect that makes the problem worse. And you could compare it to an example from daily life: when you're really low on energy and really tired, you drink some coffee, and immediately you feel refreshed. Ah, now I'm ready to do stuff.
[00:20:39–00:21:15]
But the coffee actually influences how you sleep at night. So you sleep worse because you drank coffee, and then your energy level the next morning is even worse than it was. So you have this loop saying it will just get worse and worse. Then we have the one with the AI race, and that's an archetype we call Escalation. A daily example, something we see a lot, is price wars at supermarkets. So
[00:21:15–00:21:52]
if one supermarket is leading, it puts pressure on the other supermarket, which now has to lower its prices to get more customers. So they lower the prices, then they sell more, and then their position improves. But then the first supermarket is pressured by the lower prices, so they also have to lower their prices, and then they start to sell more. And over and over and over it goes. Then we have the thing about us losing our critical sense, and here the archetype is called Shifting the
[00:21:52–00:22:27]
Burden. And here you could compare it to a company that just got a new printer, and the intern got the introduction to how to use it. The sensible thing would be to give printer training to all the staff so they all know how to use it. But what happens is that people start asking the intern to print stuff for them, because that's just easy, right? So people do that over and over again, and they get dependent on just getting the intern to print stuff for them. And then the motivation to actually train and learn it yourself
[00:22:27–00:22:59]
is lowered, because the intern just fixes it all the time. Maybe until the intern stops, and then you're kind of stuck and you need to do something about it, right? And the last one, about the bias, is one we call Success to the Successful, and here there is a pool of limited resources that two parties are competing for. So here, for example,
[00:22:59–00:23:31]
it could be the effort, or the training capability, how much you can actually train something. Here we focused on the Western context, but a daily example could be when we go out and donate to charity: we know some charities beforehand, and they pop to mind, so they are ahead, and more people choose to donate to them, and maybe also to go out and volunteer,
[00:23:31–00:23:53]
and then they get more money. So they get more success in what they do, they get more media attention, and then they become better known. So people will use them more, and what happens then is that the smaller charities will have less support, fewer donations, and also less success.
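A small sketch of that dynamic, with invented numbers: two charities compete for the same fixed pool of donations, donations are split in proportion to how well known each one already is, and success feeds back into visibility, so an early head start keeps compounding.

```python
# Small illustration of Success to the Successful (invented numbers): a fixed pool
# of donations is split in proportion to each charity's visibility, and the money
# won buys media attention, so the initially better-known charity pulls further ahead.

visibility = {"well-known charity": 0.6, "small charity": 0.4}
donation_pool = 100.0  # the limited resource both are competing for

for year in range(8):
    total_visibility = sum(visibility.values())
    for name in visibility:
        share = visibility[name] / total_visibility
        donations = donation_pool * share
        visibility[name] *= 1 + 0.005 * donations   # success buys attention and recognition
    total = sum(visibility.values())
    print(f"year {year}: " + ", ".join(f"{n} {v / total:.0%}" for n, v in visibility.items()))
```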
[00:23:53–00:24:16]
Okay. So these are four of the archetypes. There are many archetypes; there are eight classical system archetypes, and you can read all about them in a book called System Archetype Basics. It's a pretty old book, but it's pretty good at describing these different patterns that we see all the time.
[00:24:16–00:24:47]
And when you know them, you also start to see them everywhere. After I got to know them, when I hear a podcast or see the news, I go: this is Shifting the Burden, or this is Success to the Successful. You can see them everywhere. So how do we break out of them? In the book they have some points for what you can actually do to break out of the different archetypes. For Fixes that Fail, when you apply a fix to a problem you could, for example, try to map out whether there are any unintended consequences
[00:24:47–00:25:21]
of doing it. But also, when you are in this loop where you have applied a quick fix and it's not really working, you need to acknowledge that and actually find a real fix. For Escalation, it's pretty difficult to get out of, but you need to somehow disarm the situation and get one of the sides to lay down their weapons, or get them to agree on putting down the weapons together. And they actually tried that with the AI race. A lot of prominent people
[00:25:21–00:25:49]
went out with a letter saying: we need to take a break with AI, because we don't know if we can control all the risks, because it just goes too fast. So they called for a pause to actually stop this. The pause didn't happen, but at least it got the EU to work a bit faster on the legislation they are doing, and people started talking about this problem, so something happened.
[00:25:49–00:26:22]
Then we have Success to the Successful, and here I think the key is awareness. If you know that you are in a Success to the Successful loop, you can start to break out of it and think about how we actually define success. So, for example, in this context it could be the benchmark analysis: there could be a point for how diverse the material used to train the LLMs is, that could be a success metric. It could also be to go in and strengthen
[00:26:22–00:26:54]
the side that's falling behind, and maybe create training data for non-Western contexts. But yeah, it's difficult. And then for Shifting the Burden, the advice from the book is to work on the fundamental solution, but maybe you have to apply the quick fix for a while, while you find the right solution. All right. So as you
[00:26:54–00:27:27]
can see, or maybe think about from all this, when we talk about AI criticism, maybe it's not really the technology itself we are criticizing. It's more the way we as humans think, and the systems we are part of, that create all the things we are criticizing. So maybe it's a good idea to look at this from a higher plane, and that's where you take over, Kristian. All right, great. It's always good to have some pointers for when you're in a conundrum: how do you
[00:27:27–00:27:50]
get out of it? But we also have to acknowledge that when we are dealing with complex societal and even global issues, there is rarely a simple fix. So let's maybe spend some time thinking about why it happens that we keep falling back into these archetypical patterns, or systems traps as they are also called.
[00:27:50–00:28:24]
Because it doesn't happen at random. It doesn't happen by chance. It happens because there are structures and mechanisms that keep these things in place, so that even if we try to apply fixes and try to break out of them, chances are we fall right back in. And one of the reasons for that, in systems thinking terminology, is our mental models. Mental models are what we use to navigate our daily lives. They are what helps us make sense of the world and how we interpret it. We don't think about them on a daily basis, but they represent our values, our core
[00:28:24–00:28:55]
beliefs and all the underlying assumptions that drive our behavior and our thinking, and they decide what we notice, what we focus on, and what we decide to measure. And when we begin to map out systems, like Lisbeth just showed us, and we begin to recognize these archetypes and zoom out and consider the different variables and dynamics, we can begin to expose the mental models that actually drive this behavior. So examples of that could be that we have these
[00:28:55–00:29:26]
underlying assumptions that growth is universally good. We have confirmation biases. We are much more inclined to reward an effect that is immediate rather than something that could have an impact in the longer term. We are very prone to zero-sum thinking: there's a finite pool of something and we are all fighting for the same resource. And we often believe that costs can be externalized. So when we're talking about environmental impact, I may think that one cost is
[00:29:26–00:30:00]
the monetary cost of building a data center, or the Office Park as Lisbeth showed us, but the cost related to social or environmental impact, that is something else. And we don't expose ourselves to these mental models on a daily basis; we would go insane if we did. So we allow them to be the subtle thing that drives our behavior and our beliefs. And they can be extremely difficult to expose, because they also might change depending on what system we look into. And even if we do expose them, it might be equally difficult to imagine what the alternative is. So if growth isn't
[00:30:00–00:30:30]
universally good, then what? Isn't growth what gets people out of poverty? Isn't growth what lets me have a job to pay for a house and food and everything? What is the alternative, if growth isn't that? What is the alternative to zero-sum thinking? If I get something, doesn't that mean someone else is not getting that part? Isn't a compromise just everybody losing? And couldn't I just pay a fee if I have to cut down a rainforest to build something else; couldn't we just translate those impacts and those costs into monetary ones? So it's super difficult for
[00:30:30–00:31:01]
us to even imagine what an alternative perspective on some of these things could be. And therefore we are much more inclined to dive into optimizing the current metrics of the goals we already have, rather than stop and reflect on what we're actually trying to achieve. And what does that mean? Think back, for instance, to a pre-vaccine era, back to, I don't know, the 1500s or 1600s. The mental model surrounding the health
[00:31:01–00:31:32]
system and the health industry was that diseases occur at random; it might even be God's will, depending on who you asked. And the idea, our mental image of health, was the absence of symptoms: if there were no symptoms visible to us, we assumed the person was probably healthy. That also meant that all innovation and all thinking was driven towards a very reactive treatment paradigm. And it wasn't until various scientists all around the world started to
[00:31:32–00:32:05]
share knowledge and figure out, okay, maybe we can actually start predicting why people get sick, that it eventually led to a paradigm shift in the ultimate goal of our health system and health industry: to try to prevent diseases rather than simply treat them. That doesn't mean we don't treat symptoms or diseases anymore, but the main paradigm, the main mental model that drives our vision of health, is more of a preventive one. So when we look back on that example, it can maybe feel a bit weird to understand
[00:32:05–00:32:38]
why you wouldn't always just think about prevention in this context. Because you can barely look into any system today without seeing this duality between just treating symptoms and trying to prevent the problem from happening in the first place. But because it can be so difficult to imagine alternatives to the mental models that drive our behavior, we still, in our education system for instance, apply fixes that may raise our test scores, because that is somehow still the goal, instead of stopping and questioning why we are even educating people.
[00:32:38–00:33:09]
In our food system, we treat the supply and provisioning of food more like a business, so we optimize for overproduction and abundance and oversupply, rather than thinking about what a sufficiency paradigm could be for supplying food and nourishment. And in our work life system, we still allow biology to be a disadvantage for women, simply because we have a job market designed for men. So there are plenty of areas where we keep reverting to
[00:33:09–00:33:40]
the patterns that actually keep the systems in check even though it might be obvious to see why we should stop and reflect and criticize it. And how does this relate to AI? Half of you are probably thinking well if there's one thing AI seems to be extremely good at it's amplifying our existing patterns. It helps us do what we've always done, just faster, cheaper, and presumably better. And if you remember the graph that Lisbeth showed us just earlier,
[00:33:40–00:34:13]
how quickly all the different tech companies compete to outdo each other in this escalation effect, it's difficult to see how there would be any time left to stop and think about what we're actually trying to achieve, because AI could potentially have a big impact on the way we intervene in flawed systems. But considering how there is barely a company or an industry left that is not rushing towards this quick AI fix, we might fear that the motivation to stop and reflect on these things will slowly diminish.
[00:34:13–00:34:39]
So those were a few words on how the systems and the recurring patterns are actually kept in place, and why they are super difficult to break out of unless we teach ourselves to apply this vision and critical thinking on a more systemic level. So let's instead try to shift our focus towards how systems do change, what our role could be in that, and how we can help it happen.
[00:34:41–00:35:11]
This is also a model. At 1508 we like to subscribe to what's called the two loops model. It originates from the Berkana Institute and has a lot of properties that we believe make sense when you want to work actively with systems thinking and transition design. There are a lot of aspects and nuances to this, so I'm just going to walk through the key takeaways. One thing you might notice here is that there are actually
[00:35:11–00:35:42]
two systems. There is a legacy system and there is an emerging system. And the idea behind that is that systems have life cycles. It might even be a bit wrong to talk about changing a system: you can change things in a system, but that doesn't change the system. For a new system to emerge, another must crumble. The Berkana Institute uses this image of a chocolate chip cookie: if you have baked it, you can't really unbake it. You can't really switch
[00:35:42–00:36:16]
out the ingredients or anything like that. You have to start over, you have to make a new cookie, and you can put in new ingredients. But there will always be this element of emergence or unpredictability, like what is actually happening inside the oven. You kind of have an idea, but you also kind of need to let the ingredients work for themselves. The second thing you might notice is this little arrow here, and that represents all of the walkouts from the legacy system. So when a legacy system, or dominating system, reaches its peak,
[00:36:16–00:36:48]
that's when you begin implementing policies and institutionalization and rules to keep everything in check, to make sure that things stay the way they should. And that can be a good thing, because you freeze in place all the things that have worked in the past. You get a huge load of confirmation bias, because you assume everything you've done up until now will also work tomorrow. But the problem is you're also freezing in place all the things that don't work. And that can create a lot of fear and frustration. So we might have pioneers or innovators, or maybe even entire
[00:36:48–00:37:19]
businesses or industries, who can see the writing on the wall. They leave the current system and try to experiment with new ways of doing things. And at first that might be a very lonely process: you're leaving the system behind, you're trying something new. But what you eventually realize is that a lot of other people have probably walked out too. So as they convene, they start forming these communities of practice. They start proving that you can actually do things in new ways, that you can apply a new practice that shifts the mental models that
[00:37:19–00:37:52]
kept the original system in place. So at some point, when the emerging system gets traction, there is this transaction, so to speak, between the old leaders and the new leaders, trying to decide: what do we leave behind from the old system? What can we take with us, and how do we make sure that the ones still operating in the old system will have a smooth transition towards the new one? And that is what the final dashed line here symbolizes. And so at some point the old system will have fully
[00:37:52–00:38:27]
transitioned into the emerging one. And now the emerging system is the dominating one, a new legacy system, which has its own life cycle as well. But none of this happens automatically. There has to be an intention, there has to be someone thinking intentionally about outlining the path towards a new emerging system to take over from the old one. So everyone who is leaving the current system, or maybe the ones who were left out in the first place, needs to band together and have an intention of doing so. So this is not a process description;
[00:38:27–00:39:00]
this is also kind of a mental model, except that it's one we can look at, and that makes things a lot easier. So that is basically the philosophy that we subscribe to when we are talking about systems thinking and transition design. Practically speaking, we have two approaches to this. One is what we call transition strategy. That's when we go in and work specifically with a company or an organization: what is their perspective, what is their strategy, if they have grand visions of a different tomorrow. And the other one is facilitation. And that's where we take more
[00:39:00–00:39:21]
of a holistic approach to all of the different walkouts from the legacy system, and try to be the one facilitating the convening and the movement towards an emerging system. I'll walk through them one by one. So we all know what's going on here. So let's talk about transition strategy.
[00:39:21–00:39:55]
I'm sure all of you have seen something like this before. No, you're like what? Okay, yeah. So this is the textbook example of how we would visualize a hierarchy between a company's vision and strategy. And we move further down there would be a lot more boxes and everything. Either way the idea is that from a textbook best price example the vision would be what is our desired future vision of the world. That is sort of the, that is what we are moving towards. That is what motivates us to go to work every day. Awesome. So what is the strategy? That's sort of the operational part.
[00:39:55–00:40:29]
So how should we move towards this desired future? That should be what drives us every day, considering the dynamics of the market around us. Now, the issue is often that the strategy is not at all motivated by the vision. Rather, the strategy is much more motivated by short-term gains or opportunism, or maybe even all of the dynamics we have just walked through, because it is always formulated within the context of a frozen system, so to speak. That's not necessarily
[00:40:29–00:40:59]
the fault of the strategy. But it's really a shame, especially if the vision represents a very well-thought-out theory of change, showing what is wrong with the world today and everything like that. But chances are that might not be the case either. Often, when we have a vision of the future in a vision statement, we have a lot of difficulty actually talking about why that is not the world today. Do we actually
[00:40:59–00:41:31]
have deep knowledge about the status quo? Do we know what barriers keep the current system in check? What is the scope of the system we are actually focusing on? Is it just everything? If we say a sustainable world tomorrow, what is not sustainable today? Is it everything? Is it something? And where do we actually have leverage? Where can we actually do something to bring about this world of tomorrow? So it's not necessarily a lack of ambition that makes the vision fail to set this direction for the strategy, but rather
[00:41:31–00:42:05]
it's very, very easy to just make it a bit too open-ended. So the strategy is not really informed by this long-term vision, and it becomes a much more short-term operational tool. So what do we propose as a remedy? Well, what we hope to do is help companies ground the vision in deep insights about systemic issues. By doing that, we enable strategies to actually have this perspective of long-term impact, actually answering the question: this is our vision of the future, this is what is wrong with the world today, so how do we
[00:42:05–00:42:36]
get there? And we allow this strategy to operate on a shorter term as well because we know that the intention behind everything we do is moving towards a different world, is moving towards a systemic shift and a systemic transition. And our approach to this is first and foremost to build this capacity for actually working with systemic transition. And you all being here today could be a first step if you're not already very familiar with these things. We need some kind of language to talk about this. We need to align on some kind of mental models about what keeps
[00:42:36–00:43:07]
current systems in check, how we talk about the different feedback loops from the causal loop diagrams, and to be able to strategize for these long-term transitions. We go in and we scope the critique of the status quo: every time we say we want something, what is that a reaction to? If we want this world tomorrow, what is wrong with the world today? And what are the barriers that keep things in check? We also want to spend a little more time on envisioning the future system: what are the new mental models? What are the new
[00:43:07–00:43:39]
dynamics? What is the scalability of a future system? So we can sort of put the pieces together and begin building the bridge towards exactly that. And again, where do we as a company have leverage? Maybe there are areas in the systems we are critiquing where we can actually do nothing. Maybe some of the issues that keep the system in place are very political. So what is our theory and hypothesis about the ripple effects we can create through the different leverage points we are trying to affect? And
[00:43:39–00:44:06]
finally, that allows us to outline this transition journey to actually inform strategic decision-making. If we can see this journey ahead of us and we know what the hypotheses are, then we can allow ourselves to work on a strategic level, both with a long-term perspective and while still putting out fires and doing crisis management, because the vision will tell us when we are treating symptoms and when we are actually moving towards a deeper root-cause issue.
[00:44:06–00:44:40]
An example of that, a recent example that you can also read about on our website, is our work with Dankort, who initially wanted to make a donation campaign, meaning that every time we use our Dankort they donate money to regenerative causes and purposes. At face value this could be considered like any other campaign to get people to use their card, but it still has some element of altruism, of wanting to give something back,
[00:44:40–00:45:14]
knowing that the consumer industry is also what's hurting the environment. So how can we try to balance that out? In that work we actually tried to apply this lens of systems thinking, questioning what impact we are actually making if the benefit for the environment depends on actual consumption. Isn't that just something that bites its own tail? By applying this lens, we were able to shift the focus from only considering what we could do to ease the impact of consumerism
[00:45:14–00:45:47]
by giving something back, to instead asking: what if we actually try to address the root causes of overconsumption? So even though it wasn't initially the purpose of the project, by beginning to visualize these things and understanding the underlying issues we were actually creating, and not ending up in loops like Fixes that Fail or Shifting the Burden, we were able to shift the focus of this project. And this is not so much a pat on our own back; it's more an example of a really bold client who
[00:45:47–00:46:20]
was able to look inward and say to themselves: we might be part of this problem today, but we actually believe that we can be part of the solution in the longer run. If we move into the other perspective here, facilitating system transitions, we have to go back to the two loops model, because in the middle here, as I mentioned, when we have the walkouts beginning to experiment in an emerging system, in the beginning it's all very fragmented. There are no coordinated efforts; maybe they don't even
[00:46:20–00:46:52]
have the intention of a systemic shift, they just got sick of the current system. So the role we are trying to take here is to be the facilitator of this movement, convening the people who have fled the system and helping them move towards a better one in a very structured way. We want to enable these innovators, entrepreneurs and pioneers to band together as a joint movement for systemic transition. So in this case we are not partnering up with a specific organization or a specific company; we are working with the movement in its entirety. And the way we do that is
[00:46:52–00:47:12]
that we begin developing a theory of change for a selected cause: where are we today? What is our knowledge of the status quo of the system? What is our vision of the future? Not necessarily where we have leverage, because we are facilitators, but where are the critical leverage points, and which actors do we need to activate to actually be able to impact those?
[00:47:12–00:47:47]
So we identify and invite the relevant actors of the ecosystem. We host strategy workshops and networking meetings, and make sure that every individual project across all of these actors is still anchored in the same hypotheses and the same transition journey that we have aligned on. And the other aspect that makes this a bit different from transition strategy, which is a bit more classical, is that we rely on raising philanthropic funds to actually do this, because when we are dealing with complex issues on this level, there is no one product owner. There's no one client, there is no one organization who can
[00:47:47–00:48:21]
say: we need to solve this because it benefits us, that's why we pay to have it done. That is not the case when we are facilitating movements such as this. And if you attended the Morgenbooster just before summer as well, we went into detail about the Land Use Movement, which is a cause that we are working with, basically focusing on the soil degeneration from conventional agricultural practices and the whole paradigm of overproduction while underprovisioning, and wanting to change that towards more regenerative purposes. So I'm not
[00:48:21–00:48:57]
going to dive too much into this, but just to visualize some of the things I'm talking about: what we have done in that project, and what we're working on currently, is that we outlined the transition journey from the status quo to the future vision and identified the different leverage points, each representing a leverage hypothesis: why do we believe that changing something here will begin moving us towards a new system, and what are the imagined ripple effects we can create here? And we use that as a launch pad for launching different projects, not necessarily from
[00:48:57–00:49:28]
one unified organization, but rather from a movement consisting of very different actors and partners. So that is basically our approach to this, and we believe that this is how we need to work with complex issues. We believe this is how we can contribute, either from an organizational point of view, wanting to work actively with a vision, talk authentically about the world we want to build for tomorrow and carry that out into
[00:49:28–00:50:02]
our practices of strategy, or, on the other hand, by helping the ones who have already broken out of current systems and facilitating their journey as a whole. And all of this of course stands on the shoulders of our rich history of design thinking and business design here at 1508. But we also had to introduce new tools to be able to do this, to build the capacity, to have a language and a vocabulary to
[00:50:02–00:50:38]
talk about systemic change and transition design. So some of the examples you have seen throughout are the systems mapping and the causal loops. We also built these shift cards, which are very helpful for challenging existing mental models: if this is the paradigm today, what could an alternative paradigm be? The Three Horizons model helps us understand which current innovations give us a hint that a transition is beginning to happen, what we want to leave behind in the old world, and what we need a lot more of. Leverage hypotheses are the ones I just showed you:
[00:50:38–00:51:07]
each is a hypothesis about how, if something happens, that would affect the current system and begin building a new one. And then of course there is the transition journey framework that I just showed for the Land Use Movement. So we are building a lot of tools on top of what we already have from our legacy in design thinking and business design. All right, that was a lot of information. So if you can only remember one thing today, what would that be?
[00:51:08–00:51:34]
Press again. So if you want to address AI criticism, we must think more systemically and not only try to break out of each archetype but also to understand the system that keeps creating these archetypes. So that would be our main key takeaway to you today and then we are close to the end. So just press one more. So thank you very much for coming here today.