Work in Progress – People & Culture
Work in Progress – People & Culture (P&C) is a talk series featuring thought leaders from the P&C and HR industries. Session's CEO Pernille Brun leads these discussions to uncover the latest trends shaping the future of work and the evolving landscape of P&C.
Through these sessions, we explore how P&C and HR can effectively support organizations, leaders, and employees to adapt to the changing demands of the workforce. We also delve into what strategies P&C departments can implement to maximize their contribution and influence on the overall business.
Hybrid Intelligence and How Human-Centered AI Is the Competitive Edge of Your Organization – with Jacob Sherson
What if the real competitive edge with AI isn’t faster automation but smarter collaboration? In this episode, we sit down with Professor Jacob Sherson, founding director of the Center for Hybrid Intelligence, to unpack a people-first blueprint that turns generative AI into a true advantage. Instead of chasing “AI-first” hype, Jacob explains how to design workflow assistants, build learning cultures, and create interfaces where humans lead and AI assists. The goal: protecting and amplifying firm-specific human capital, the tacit expertise your competitors can’t copy.
Jacob walks us through the 4P maturity model—Projects, Platform, Policy, and People—and explains why People is the multiplier that unlocks everything else. We dig into practical wins: a bank that used a brave, fail-fast pilot to boost employee advocacy by 35%, and an industrial company that leveraged disciplined one-pagers to map workflows and embed AI touchpoints in days. You’ll also learn the FERC method—Frame, Explore, Refine, Commit—which keeps authorship, judgment, and accountability squarely with the humans, while still tapping AI for speed and breadth.
Jacob challenges the “San Francisco consensus” predicting the end of cognitive jobs and shows us how markets still reward authenticity. Customers value products with a human touch; over-automation risks commoditization and price pressure. Guardrails, phased rollouts, and bias dashboards make ethical practice part of the interface. Most of all, Jacob makes the case for HR to step up as the engine of AI innovation—potentially as a chief hybrid intelligence officer—translating mindset shifts into new business model lines, not just efficiency gains.
Follow this link to support the growing hybrid intelligence movement and stay informed about supporting material, events, and collaboration opportunities.
If you add “WiP-podcast” and how you learned about the Hybrid Intelligence Center to the sign-up form, the team will also send you the slides from a recent HR conference keynote that covered similar content.
Welcome to Work in Progress, a podcast series where I pick the brains of thought leaders within the field of people and culture. My name is Pernille Brun, and in these talks we explore the latest trends shaping the future of work and the evolving landscape of modern organizations. In this episode, we dig into the work of Jacob Sherson, who is a professor and the founding director of the Center for Hybrid Intelligence at Aarhus University in Denmark. Jacob is also part of the European Quantum Readiness Center and a sought-after consultant on generative AI and how to implement hybrid intelligence in organizations on a practical level. Jacob and his team have developed a maturity model, the 4P model, for the adoption of AI in organizations. In this episode, we'll learn more about this model and the role of HR when it comes to building a learning organization for the successful implementation and use of AI in day-to-day business. Jacob, thank you so much for joining us today on the Work in Progress – People & Culture podcast. We're very honored to have you here. You have an extensive background within AI, both through your work with quantum computing and physics and through the different talks you've given on the topic. But please introduce yourself to our listeners who might not know of your work.
SPEAKER_01:Yes. So I started out as purely a quantum physicist, and then a few years ago I transitioned, so that now I have a dual professorship. I became more and more interested in psychology, and psychology became business psychology, so now I have a dual professorship in physics at the University of Copenhagen and at the Department of Management at Aarhus University.
SPEAKER_02:All right, so that's how you divide your time.
SPEAKER_01:Yes. And then at Aarhus University I lead this center called the Center for Hybrid Intelligence, where we explore human and AI combinations, both at the interface level and at the organizational level, and also the macroeconomics of asking: how do we create a society, and what are the mechanisms that would characterize a hybrid intelligence society?
SPEAKER_02:All right. And now you've already used a term that some of our listeners might not know: the hybrid intelligence methodology. But before we look into that concept and what lies in it, could you please explain to us what the link is between AI and HR?
SPEAKER_01:The link between AI and HR is really a twofold challenge. One is that AI, of course, has to be implemented into the practices of HR itself. So that means they need to transform their own practices. But AI is also an opportunity that HR could seize in order to have a broader role in the organizational context.
SPEAKER_02:All right. And this is where hybrid intelligence comes into the picture, perhaps. Can you explain the concept?
SPEAKER_01:Yes. If I exemplify again with HR, there are really two pathways that AI implementation can take. There's a tech-first implementation, which I call AI first, which really centers on data and development. And there, HR has very little role to play. But there is an alternative, which is a much more human-centered route. If you attempt to incorporate AI into your organization in such a way that it empowers people, then HR is really crucial, because then they will be the interpreters of: which skills do we have in our organization today? What is it that makes us unique? What we call human capital, or firm-specific human capital, which is really your competitive advantage. And how could we use AI in order to enhance that competitiveness? That is one of the essences of hybrid intelligence. It's not something that has had a settled definition for decades. It's something that we are trying to define together with researchers and experimenters: what is the definition of hybrid intelligence that can actually be used to create the best interfaces, the best organizations and workplaces, but also the best society for us?
SPEAKER_02:Okay. So you look at the organization on a broader scale. So it's actually about what impact whatever we do in our organization might have on society at large?
SPEAKER_01:Yes, and it's really an interlinking between those three things, because if my dream of hybrid intelligence comes true, then at the micro level we invest much, much more in creating interfaces that link into the professional workflows of the employees, which is a really tough process. It is not as easy as just taking some training data and having a learning algorithm train on it, because it's people, it's transformations, it's mindsets. But if you do that, then you create the learning organization of the future. We're very inspired by Peter Senge and his learning organizations, but really the question is: what does a learning organization look like in the age of AI? And we believe that that is hybrid intelligence.
SPEAKER_02:Okay, so that's a core definition of hybrid intelligence. That was actually pretty clear. And Peter Senge's work with the learning organization and The Fifth Discipline, which came out back in 1990, has actually evolved quite a bit. And now we have AI to help us implement some of the structures that were suggested back then. What have you seen in terms of how AI can help there?
SPEAKER_01:Well, one of the difficulties of the learning organization is also its strength: it's difficult to build. And with the emergence of AI, we've really seen a lot of executives who have the illusion that the implementation of off-the-shelf AI can provide a competitive advantage for them. So they say: we have saved this much on our expenses, and our margin has grown to this. And what they are misunderstanding is that this off-the-shelf use of AI is easy, it's nice, but it's just as easy for the competition. That means you seem to be more efficient, you seem to be generating more value, but if everyone is doing the same thing, it does not actually create more value or more competitiveness for you. And that's where the learning organization comes in. It's been shown that it just takes hard work to create a learning organization, to change a culture. And that means that, unlike off-the-shelf AI, which does not provide a competitive advantage, succeeding in creating a learning culture that learns alongside AI, with employees who have the mindset of being in control when they apply AI and innovate with AI, will become a competitive advantage.
SPEAKER_02:All right. So the competitiveness comes out of using these tools in the core business, in your own processes, in a way that no one else does?
SPEAKER_01:Firm-specific workflow assistants, I call them. I compare a regular chatbot, which is just ChatGPT, and using that, to what I call a workflow assistant, where we have mapped things out. Take any given organization, pick five different business units, and then, within each business unit, map out the workflows that exist there, and then create, together with the employees, the automation sub-steps that they can use alongside their work, fluidly integrated into their workflow. They can push a button and something appears or something is done, but it's always them who are in control. They are the creators of the workflow. And AI is very powerful, but it helps in very specific circumstances.
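To make the workflow-assistant idea concrete, here is a minimal Python sketch. The step names, the stand-in model call, and the accept/reject prompt are illustrative assumptions, not the Center's actual tooling; the point is only that every AI touchpoint is triggered and reviewed by the employee.

```python
# A minimal sketch of a firm-specific "workflow assistant": a mapped workflow
# whose steps carry optional AI touchpoints that the employee triggers
# explicitly, so the human stays in control of every transition.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class WorkflowStep:
    name: str
    # An AI touchpoint maps the current working context to a suggested
    # draft; None marks a purely human step.
    ai_touchpoint: Optional[Callable[[str], str]] = None

def run_workflow(steps: list[WorkflowStep], context: str) -> str:
    # Walk the mapped workflow; the employee reviews every AI suggestion,
    # so nothing is committed automatically on their behalf.
    for step in steps:
        print(f"-- {step.name} --")
        if step.ai_touchpoint is not None:
            draft = step.ai_touchpoint(context)
            if input(f"AI draft:\n{draft}\nUse it? [y/N] ").lower() == "y":
                context = draft
    return context

# Example: a sales-assistant workflow with two of its steps augmented.
fake_llm = lambda ctx: f"(AI summary of: {ctx[:40]}...)"
steps = [
    WorkflowStep("Qualify lead"),                       # human judgment only
    WorkflowStep("Summarize meeting notes", fake_llm),  # AI touchpoint
    WorkflowStep("Draft follow-up email", fake_llm),    # AI touchpoint
]
print(run_workflow(steps, "Notes from the customer meeting ..."))
```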
SPEAKER_02:So in order to make that happen, it sounds like it takes more than pure change management; HR needs to play a role to make it more integrated. And this is also where I came across your work with the 4P model. Can you explain what that's about?
SPEAKER_01:Yes, and I think what is really necessary right now, in order to create a holistic systems transformation, is thinking about all of the different aspects; you could say thinking sociotechnically about your organizational development. There are many maturity assessments for AI. Gartner has one and McKinsey has one. And I think what is wrong with all of them is that they look at your core businesses, your various front-office and back-office functions, and your technologies, but they ask only one question: to what extent have we put AI into our front offices and our back offices? What I'm saying is that, according to Gartner, you can be very high in maturity both in an AI-first organization, where you have tried to automate as much as possible, and in a hybrid-intelligence-first organization, which means these maturity levels don't really provide any guidance toward a human-centered organization. That's why we tried to rethink these maturity indices. We have a 4P model that sociotechnically covers all corners of an organization. There's the first P, Projects: which projects do we do, how do we run them, how do we get generative AI in, what are the business cases for that? There's the Platform, which is the technology. Then there's the Policy, which is the governance, the strategy, the vision; this is where a hybrid intelligence vision comes in, and a strategy for it. And then there is the most important P, which is the People. In the 4P framework, we say that those four Ps are not equally important, because enhancing the People P, the mindset that you have, is the enhancement of the human capital at every step. We developed this together with Adobe, and we say that you should measure the success of a technology project by its influence on the employee mindset. Does that particular project create more willingness, more capacity, to drive further transformations with AI? Does it create psychological safety? Does it enhance a growth mindset? Does it create a sense of partnership between the organization and the individual? Many transformations like AI are perhaps very good for the shareholders and the leadership, but not necessarily good for the employees. Hybrid intelligence is meant to be an organizational form that is, by definition, good for the employees.
SPEAKER_02:Ah, interesting. So this mindset you're talking about, this shift in mindset, sounds almost like innovation, where the growth mindset and psychological safety are the baseline, the foundation, for people to use these tools to the benefit of their products, the collaboration, and the cross-silo thinking in the organization, and then of course to the benefit of society, as you describe. So, can you give us a concrete example of an organization that succeeded in implementing a people-first approach where AI was crucial in that transformation?
SPEAKER_01:Yeah. What we're trying to collect is various small successes. I'm trying to create stringent enough criteria that no one today is fully hybrid intelligent, because it has to be something that we all strive for and can all do better at. But we're trying to collect small use cases. One example was a use case we ran with a Danish bank, where I had the director in and taught him a little bit about hybrid intelligence, and then he made a very fail-fast, mature, and brave attempt. I said: now you give a workshop to your employees on hybrid intelligence. You don't know so much about it yet. Your task is to find something with Copilot that you failed miserably at, show it honestly and openly, treat it as a learning experience, and say that that is the culture you want to build. Give the employees two months to experiment with it, with personal development plans, and then, before and after, try to measure the sense of partnership, the psychological safety, the growth mindset. And what was really exciting: two months later, he asked questions like, how likely is it that you would stay at the bank? How good is the bank as a workplace? How likely is it that you would recommend it to others? And this index jumped by 35%.
SPEAKER_02:Wow. Yeah.
SPEAKER_01:With a person who didn't know so much about hybrid intelligence, who knew more or less nothing about AI and how to do this. They just experimented, amateurishly, but the mindset really changed. And it changed because the employees saw the commitment of the leadership towards a common future that they could create.
SPEAKER_03:Yes.
SPEAKER_01:And that's one example on the mindset side. We have others where we work more on the technology side. We work with an industrial company, for instance, that has been doing digitalization for the last 10 to 20 years and has been very good at standard operating procedures. They have a mentality of saying: every time we do something and we know we will do it again, it is a law that we write it down as a one-pager. And they have hundreds of one-pagers, which meant it was very easy for us to go in and work with them and say: let's map out these workflows; they are already described. Then we asked: what could, for instance, a sales assistant have as five or ten different AI touchpoints in their existing workflow? And they succeeded surprisingly quickly in turning that augmentation into daily practice.
SPEAKER_02:And did that also have something to do with the mindset of just trying out new things, but also seeing the patterns that come out of it? Was there resistance before that to using those kinds of tools? Or what was that about?
SPEAKER_01:They had a digital mindset where it really was about innovation, about really understanding what the role of the employees would be. Apart from the 4P model, we have a framework which is an exemplification of human-AI collaboration, which we call FERC. It's a very simple framework that tells you how you should engage when you go to ChatGPT and start working with it. First, you have to Frame: take what you have in your mind and put it into words so that ChatGPT or Copilot knows it. Then you use the Exploration capabilities of ChatGPT and say: give me five options for what we could do next, and say why they are good. Then you go into the Refine phase. This is where you look at each of those options and use all of your experience to say: I like this, but I don't so much like that; I would like this recombined. You go through cycles of that, and at the end, the C is Commit, because you are the center of everything. Although you are not the creator of everything in that process, you can be responsible for every line of code if you are a developer, or every line of text if you have written text around it. That's how you avoid it being cheating.

And you move from a culture, and we have a five-step maturity model around this, where AI use is either considered cheating or considered something that only a few are good enough and lucky enough to have the skills for, to something that we systematically teach our organization as a mindset: we go to any given task and remember, we should FERC, not fake. Because if you just take the first output from an AI model, it is not unique. You have not emptied your brain and given any of your context to it. That's how you create uniqueness; that's how you add your own human touch to these products. And that's what we mean on a macro scale too: we believe that if organizations start to do this, they will actually start to succeed in creating human-touch products. I would also call it mass customization at scale. We can deliver products where humans have touched every output, even at large scale, and there begins to be both a supply of that and a demand for that in the marketplace. And that's where we're talking macroeconomically: the more people start doing this, the more demand there can be for hybrid intelligence products. That also gives branding advantages to the first movers who jump onto this journey.
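As a concrete illustration, here is a minimal sketch of the FERC loop as a structured prompting workflow in Python. The `ask_model` placeholder, the prompts, and the function names are assumptions for illustration, not the Center's official implementation; the point is that the framing, the judgment, and the final commit stay with the human.

```python
# A minimal sketch of FERC (Frame, Explore, Refine, Commit) as a structured
# prompting loop. ask_model() stands in for any chat-model call.

def ask_model(prompt: str) -> str:
    # Placeholder for a real chat-model call; returns a canned reply so
    # the sketch runs without credentials.
    return f"(model reply to: {prompt[:60]}...)"

def ferc(task: str) -> str:
    # F - Frame: empty your head first; the human supplies the context.
    frame = input(f"Frame the task in your own words ({task}): ")

    # E - Explore: ask for several options plus the reasoning behind them.
    options = ask_model(
        f"{frame}\nGive me five options for what to do next, "
        "and explain why each one is good."
    )
    print(options)

    # R - Refine: the human's judgment drives cycles of recombination.
    draft = options
    while (feedback := input("Refine (keep/drop/recombine; empty to stop): ")):
        draft = ask_model(f"Revise this draft: {draft}\nMy judgment: {feedback}")
        print(draft)

    # C - Commit: the human signs off and owns every line of the result.
    input("Press Enter to commit and take authorship of the result. ")
    return draft

final = ferc("write a product one-pager")
```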
SPEAKER_02:Yeah. And now, of course, with your background both within psychology and within AI and the tech part of it, what is it that makes this human touch so important? What have you seen? Where do people fail when they lack that human touch?
SPEAKER_01:There are many different instances of realizing that you can take an algorithm and benchmark it under controlled conditions: you can test it for creativity, you can test it for IQ, and you can run all kinds of benchmarks. But both Microsoft and Apple have just come out with really sobering studies showing that benchmarks in the lab are not the same as real-world performance. The real world is just much more complex. The third framework that we have asks what is uniquely human and what is uniquely, let's say, the domain of the algorithms, and we call it prediction and judgment. There are tasks for which we have lots of data available, where the data is structured and labeled, and in those domains, throw machine learning at it and it will perform well. But there are many more domains in which something has not been written down, something that is contextual information between the two of us or between stakeholders, unwritten rules, and they exist in our experience. This is where, if you take an algorithm that seems to perform well on the benchmark but throw it out into a real world where it also has to remember what your smile told me when we talked together last time, it starts to fail: on a small scale with product interactions, and on a large scale when we deploy it, for instance, in regulation or in larger societal cases.
SPEAKER_02:Yeah. And what is it that might scare you about that scenario, if people fail to keep that human touch?
SPEAKER_01:I think it's really clear what happens if we fail to invest in, and take, that hard journey of becoming hybrid intelligent, if we take the easy route and listen to Silicon Valley saying: let's automate everything. There's something called the San Francisco consensus, tech leadership that says that within a thousand days, that's three years, the net value of human cognitive labor will be negative. So what they are saying is that three years from now, adding a human to a human-AI team makes the team worse. They're really predicting the end of jobs. And some people are saying they're probably off by five years, but that is very scary.
SPEAKER_03:Yes.
SPEAKER_01:The scary part is that I cannot guarantee that the world will go in a different direction. The world will only go in a different direction if many things come to pass: many people have to start experimenting with hybrid intelligence; maybe governments have to provide subsidies and help this process early on, while we are learning how to create these interfaces and the organizational development; and consumers have to start wanting it. We have to create all those transformations together in order to create what I would call a hybrid intelligent society, one defined by meaningful work on a long time scale. One of the key questions we're asking ourselves, which I think is really deep, is: is the value of human experience going to go up or down?
SPEAKER_04:Yeah.
SPEAKER_01:We can say that human experience today is a valuable asset, but if we now work together with AI, does AI also learn that domain? That means that, ultimately, we have to create hybrid intelligent interfaces where the human learns more from the interaction than the AI does, so that we keep staying ahead. Which means we have to prompt for metacognitive reflection. When you interact with a tool and a use case in a hybrid intelligent way, you have to reflect, you have to think, you have to store and really process. So hybrid intelligence is not a lean-back, take-it-easy mode; it's the opposite. And the opposite really means that if we succeed with that, we cannot be replaced. The reward at the end of this line is that human uniqueness remains. The other part of it is that consumers will continue to want something that has a human touch on it. There are lots of studies showing that if a product is created by an AI and you are told so afterwards, and it could be medical advice, then as soon as you're told it comes from an AI, you deem it less relevant, less ethical, less empathetic. If it's blinded, we don't actually see a difference in quality nowadays. But when we know an AI has created it, we value it less; and when we know a human has touched it somehow, it just feels more authentic. That feeling of authenticity can last forever. No matter how good AI becomes, I'm still going to prefer the podcast where I know a human created it.
SPEAKER_02:Okay.
SPEAKER_01:I know that feeling, that I can relate to your process, and that's why I want to spend my time listening to that podcast and not an AI-generated one. Because there are thousands, billions of those out there; you just have to push a button to create a new podcast. There was no work going into it, and we value that work.
SPEAKER_02:So what you're actually saying is that HR plays a role in ensuring that this learning organization mindset, together with all the tools and everything we get from AI, can help us not only with our product but also with our own humanness in the collaboration patterns we can then see. Is that correct?
SPEAKER_01:Yes. You could say that I'm advocating for a new role in organizations, this chief hybrid intelligence officer, which combines HR with some technology knowledge and with strategy and business knowledge. And I think HR is uniquely placed to conquer that territory, because learning the strategy and learning the business is something they can do, and the technology, in particular, is not that difficult to learn once you engage with it. So if HR takes those two opportunities, first practicing within their own domain and becoming hybrid intelligent in their own workflows, then they can be the birth-givers of hybrid intelligence in the rest of the organization. And they uniquely have that perspective on the value of the human, because that is exactly how the tradition has always been.
SPEAKER_02:Yes. Okay. So they play to that, in a sense. And where should they start? This is a maturity model, the 4P model, and I've seen organizations that have maybe taken one part of it, say the tech part or the tool part, the second P, and then used that as evidence that they are on the way. But you say that maybe they should focus on the people-first mindset and its implementation. Is it correct that this is where they should start? Or do you actually suggest that they combine it with the other Ps?
SPEAKER_01:Usually, when I talk about the 4P framework as a maturity model, I say that I guarantee it is the best maturity model in the world for AI. And I can guarantee that because the first thing I tell people who try to use it is something no one else says: take my maturity model and erase it completely. Make a list, around the four Ps, of all the possible transformations that sit inside each of the Ps for you, and then order it your way, in the way that creates real sense and value for you. Decide whether you are creating a six-, twelve-, or twenty-four-month roadmap, and then ask yourself what the friction points and the crucial points are in your organization. That's back to the learning organization. McKinsey and others have difficulty creating learning organizations because all the research shows that there is no cookie-cutter way, no five-step recipe, to create one. You have to embed yourself in it. You have to work hard, understand the culture, and then understand what the drivers for change are in that particular culture. And it's the same for becoming a hybrid intelligent organization. That's why the first step is really knowing yourself. It is taking a clear look at what your maturity level is along these four pillars, and then asking what value creation you could unlock: how could you, by increasing your maturity level by one in the People or in the Platform, increase the value you generate?
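One way to picture that self-assessment exercise is as a small script: list candidate transformations under each P, score them on your own estimates of value and friction, and sort them into a first-draft roadmap. The entries, the scores, and the value-minus-friction ordering below are illustrative assumptions, not the Center's official assessment.

```python
# A minimal sketch of the "erase my maturity model" exercise: enumerate
# candidate transformations under the four Ps, then order them into a
# roadmap specific to this organization.

candidates = {
    "Projects": [("Pilot a workflow assistant in sales", 8, 3)],
    "Platform": [("Approve an internal LLM sandbox", 6, 4)],
    "Policy":   [("Write a hybrid intelligence vision statement", 7, 2)],
    "People":   [("Run a fail-fast Copilot workshop", 9, 2),
                 ("Measure psychological safety before/after", 8, 1)],
}

# Order all transformations by (value - friction), descending, to get a
# first-draft 6/12/24-month roadmap; the scoring rule is a placeholder
# for the organization's own sense of value and friction.
roadmap = sorted(
    ((p, name, value, friction)
     for p, items in candidates.items()
     for name, value, friction in items),
    key=lambda t: t[2] - t[3],
    reverse=True,
)
for p, name, value, friction in roadmap:
    print(f"{p:9s} {name} (value {value}, friction {friction})")
```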
SPEAKER_02:Okay. And where does the collaboration with IT come in? I assume there might be some interfaces where they would have to work very closely with the IT part of the company. What's your take on that?
SPEAKER_01:IT really has to do safety, governance, and platform availability, which means supplying the tools. But where it goes wrong is that, for instance, prompt engineering training is very often done in the IT divisions, and then you get mechanistic prompt engineering training that says: I have a five-step recipe for a good prompt, and a good prompt is like a mathematics exercise where you just have to have this, this, and this, and check the boxes. Whereas if HR, with a little bit of hybrid intelligence insight, were responsible for a prompt engineering course, they would say: it's about communication. It's about looking at how well you have defined the task, and how much of your judgment, how much of your contextual knowledge, you have put into it. Good communication is the key to good prompt engineering. That's something that would emerge naturally out of HR but would be quite alien, you could say, to most IT specialists, because good communication is not what they are taught at the universities.
SPEAKER_02:Okay. So that's the collaboration with other people in the organization and other departments. You mentioned the chief hybrid intelligence officer. Is that something that could come out of HR? Or do you see that you need more specialization there?
SPEAKER_01:I think that HR officers could train themselves to be hybrid intelligent in their own domain. They could go through the exercises of creating innovation processes for innovating their own processes, and learning how to use technology for that. And it's pretty simple. In my workshops, I go through a five-step process to embed human judgment into a workflow that has both AI and human judgment seamlessly integrated. And you can vibe-code an interface, which means you can just go to ChatGPT and say: I have this dream of an interface, make me a demo of it. This is the realm of the business side. With these tools, the non-IT side can now go very far in dreaming up what kind of interface we would like to have. But then we go back to IT and say: we have this vision of what we want to build; let's work together to actually realize the value we have conceived. The non-IT people are in the driver's seat of that innovation process, but of course it's not something they can do on their own. One of the ground rules of hybrid intelligence is: never use AI as a replacement for any human skills, not to replace a professional and not to replace an IT specialist, because then you cannot do FERC. If you say, okay, vibe coding is now so efficient that I don't actually need IT specialists, I can program my own applications, then it's generating thousands of lines of code that you have no idea what they do, and you're putting it out to your customers with all the security loopholes that entails. That's a non-FERC way of working, because you had no one in your organization or in your decision flow who could actually look at that AI output and say whether it is, in the very smallest details, correct or not.

By embedding this FERC style of thinking in all parts of your workflow, you will discover very quickly if you tend to let the AI take more and more steps, until, through capability or laziness, you end up in a situation where a significant fraction of your organizational output is never actually reviewed. And that's where the danger signs start. It's really organizational cleanliness and tightness around that hybrid intelligent mindset that prevents these catastrophic failures from happening. Because it feels good in the moment: you outsource more and more decision power to the AI in your workflow, you let it take larger and larger steps, and you feel more efficient, but there's also less and less control. At some point this comes back, and then, for instance, your software is no longer safe, or your products have lost that sense of connection with your customers. If you have too much AI in the process, your customers will say that 99% of your workflow is now generated. Take a lawyer as an example: if the lawyer starts to use this AI and it does 99% of the work, then the customers are going to say: 99% of something that is really, really cheap means I need a 98% reduction in the price. So it will erode your own value generation.
And that's where this marketplace consideration becomes so important as a safeguard. If, in your communication with your customers and your stakeholders, you say that you are always prioritizing this human touch, then it becomes embedded both in your workflows and in the essence of how you describe your unique sales pitch: you can say it's really embedded in our DNA.
SPEAKER_02:How come, then, do you think the AI-first companies have attracted so much attention lately?
SPEAKER_01:It's because it is in complete correspondence with this San Francisco narrative, with the Silicon Valley narrative. You see the Sam Altmans saying these really remarkable things: my advice to all executives out there is to become AI first, become AI native, put AI into every aspect of your operations. In my presentations, I make a virtue out of showing examples of companies that went AI first and really went wrong. An example of this is Duolingo. The CEO, Luis von Ahn, was very foresighted, and he said: now Duolingo is going to be AI first. He had a long blog post laying out the new rules of engagement: AI has to come into every work process; if you're doing something new, you will not get any new resources unless you can prove to me that you have tried AI and it simply was impossible. At the same time, he also said that he thought AI could tutor everything, there were no limits, but teachers were still necessary because we needed childcare. This was not very empathetic towards the teachers, and it didn't really signal a working environment that the thousands of Duolingo employees wanted to work in. And Duolingo's users looked at this and said: that's actually not a product I want to support. So suddenly a company that had so much goodwill lost it all in an instant because it wanted to be AI first. Those are some of the examples of executives who jump too quickly onto the AI-first wagon and fail to consider where that line of thinking ends. If everyone is AI first, and all of my competitors are also AI first, then we all become Ryanair: companies with only one value contribution, namely that we cut every cost we can. We have no extra value contribution.
SPEAKER_02:No. So we know there are biases in these systems. What specific human-in-the-loop strategies can HR and organizations implement to ensure these systems augment us and don't inherit our collective biases?
SPEAKER_01:Again, I think that by creating this culture, this mindset of adoption, biases, if we talk about the biases in the algorithms first, become for HR less of a technical problem and more of a communication problem: the users of these algorithms have to work FERC-style, they have to be able to recognize when bias is happening. So it's much more about educating the end users who will be using these AI systems to recognize it. If we have attentive users, hybrid intelligence style, then the tasks we outsource to the AI systems are so small that we can very quickly go in and repair those things. Whereas if we delegated an entire hiring flow and said, now we have an AI HR officer that gets to make all of the decisions, that's when it becomes very crucial; that's when all of the biases get amplified from suggestions into decisions. But if we embed it into our workflow, then we have all those suggestions and can say: hmm, I see that this is biased. And we can also use technology to fight these biases, for instance with a dashboard. In the hybrid intelligent version, the HR recruiter has a dashboard that shows, for instance, the diversity index of their last few hires and of the current candidate selection. So there are interface elements that help our reflection. This is why we are trying to create a label for hybrid intelligence, and I would like to call it HI Infinity, because I think that with the right kind of technology we can discover an infinity of innovation potential in the human mind. It's not AI versus the humans; it's a question of whether we build the right kinds of interfaces. I don't think there's any limit to how much we can augment ourselves and our performance.
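As one possible reading of that dashboard idea, here is a minimal sketch that computes a diversity index over recent hires and the current candidate slate. The group labels and the normalized-entropy index are illustrative assumptions, not a specific product's metric.

```python
# A minimal sketch of a recruiter reflection dashboard: surface a diversity
# index over recent hires and the current slate, so bias shows up as a
# suggestion-time signal rather than a post-hoc audit.
from collections import Counter
from math import log

def diversity_index(groups: list[str]) -> float:
    """Normalized Shannon entropy over group labels: 0.0 when all hires
    come from one group, 1.0 when spread evenly across observed groups."""
    counts = Counter(groups)
    n, k = sum(counts.values()), len(counts)
    if k < 2:
        return 0.0
    entropy = -sum((c / n) * log(c / n) for c in counts.values())
    return entropy / log(k)

recent_hires = ["A", "A", "B", "A", "C", "A"]
candidate_slate = ["A", "B", "C", "B", "C"]

print(f"Diversity of last {len(recent_hires)} hires: "
      f"{diversity_index(recent_hires):.2f}")
print(f"Diversity of current slate: {diversity_index(candidate_slate):.2f}")
```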
SPEAKER_02:Oh, so what you're also saying is that HR doesn't necessarily need to make programs specially designed for women, or take some of these things into consideration, when arguing for the hybrid intelligence model or the 4P model?
SPEAKER_01:Yeah, I think that recognizing the biases that are there, both on the algorithmic side and on the human side, is part of the design considerations for a hybrid intelligence system. So it's not that they should be ignored, and it's not that they will go away, but they should be design requirements: we are aware of these things, and we have initiatives that make sure we are measuring them, acting on them, and reflecting on them.
SPEAKER_02:Yeah, taking it into consideration. All right. So you talked a little bit about the innovative mindset and how to co-create together. Have you seen any projects where the collaboration went in a different direction than expected because of this?
SPEAKER_01:Yeah, and I think that's one of the really exciting things we're talking about with several companies: how can you create a system that creates surprises, that allows you to do real brainstorming? We're working with a fairly large big-tech company on their brainstorming for, let's say, marketing campaigns. The question is: can you design an interface that feels more like a tango than like control? I could be the director, and in a normal ChatGPT interface, I say something, something comes out, and then I decide what happens next. But we've also created brainstorming variations where surprises appear, because ChatGPT has a tendency to always pat you on the back; whatever you said, it was brilliant, right? We work a lot on creativity in human-AI collaboration; creativity is the new frontier of this. How can we create the analogue of spontaneous human-to-human collaboration, where we don't actually know where it's going, but we are in a "yes, and" phase, going somewhere together with an AI? I think that's one of the grand challenges of developing these interfaces: to get that to work, and to be brave enough to give up a certain degree of control over the process, and maybe even over the outcome, to have shared agency with the AI system.
SPEAKER_02:Interesting that you use the word control, because what I heard was also that it's sometimes about allowing yourself to fail and then seeing what happens. And that's a little bit out of control, right? So how do you build up that daring, that braveness to fail, even with tools that people might be afraid will just run away and do their own thing if you allow that to happen?
SPEAKER_01:Yeah, that's really about creating, from Senge, this fail-fast mindset and the psychological safety for it, but very specifically it's also a set of guardrails that you put in when you build AI products. For instance, take a customer chatbot: you would never test it with your customers, or launch it to your customers, in a free space. You would always do an internal chatbot or an internal system first. You would test it with employees, where you say: okay, we will catch these emergent failure cases. I always advise companies to go through three phases. The next phase is to find ten trusted company partners and say: we would like to launch this imperfect product and test it with you. Hybrid intelligence is about making the employees co-creators and co-inventors of the new solutions, but in a broader sense, making your whole stakeholder set co-inventors of your new products is a good place to be. So I always say: if you are software-as-a-service, or if you supply others B2B with a product, the question you could ask is, can your products and services help others become hybrid intelligent? When we talk to, for instance, Adobe, we ask: how could you help the people using your software become hybrid intelligent? And dare to expose yourself in your own learning processes and say: we experimented; we're not finished with our hybrid intelligence journey, but we had a really cool small case here. I mentioned the bank, which was not claiming to be totally hybrid intelligent; they just had a positive use case. If organizations start to dare to tell those use cases, that actually becomes part of their branding. Even before they have fully transformed themselves and are super efficient and super innovative, the mindset they're starting to create can already generate goodwill with their end users, their customers, their suppliers, their stakeholders. That is how you chop the journey into several success points: you have success points by showing and measuring that your employees are ever more engaged, ever happier, ever more invested in the future of the company, and that your customers and stakeholders are more and more willing to use you as a preferred supplier because they want you to succeed.
SPEAKER_02:Yes. You mentioned before having the framework and the guidelines so that you can dare to fail, because you know the security side is okay. What if HR doesn't have the knowledge needed? Or maybe the tools are evolving so fast that it's out of your hands?
SPEAKER_01:This is where the Policy part of the 4P framework really covers both governance and strategy. I think HR can very quickly go in and help form a hybrid intelligence vision, a mission, and a strategy statement. But something else is also necessary in order to progress in maturity: it becomes a technical roadblock if you don't have guidelines for what is safe and what is not. I recommend that companies move beyond having just a green and a red zone for security, where everything outside the firewall is free to experiment with, because the data is freely accessible anyway, and everything inside the firewall is off limits until you hear otherwise; with that rule, there's really nothing you can do to experiment. If you don't have a security policy, and if you don't really understand the tools, I say: ask three things about the interaction you're trying to have with an AI. You are uploading data. First, is that data personally sensitive, yes or no, so GDPR-protected, for instance? Second, is it copyright-protected, on a scale from zero to ten? And third, to what extent is it a business secret? That is the zero version of a set of compliance scales: if you are low, low, low on those three, go ahead. And in HR, I say: start with the employee handbook. It is the safest use case to get started on. Lots of people want to read it, they don't know where to find it, and they don't want to read several pages. So that's safe experimentation; against the three questions I just mentioned, it's safe, safe, safe. Once you have experience with that, you can start to move on to some of the more personally critical cases. And in the meantime, you will have built up, together with IT, the knowledge about how to create safe systems for that exploration space.
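To make the "zero version" of those compliance scales tangible, here is a minimal sketch of the three-question check as a gate. The thresholds, field names, and examples are illustrative assumptions; a real policy would be set together with IT and legal.

```python
# A minimal sketch of the three-question compliance check described above:
# green-light an AI experiment only when all three data scales are low.
from dataclasses import dataclass

@dataclass
class UploadCheck:
    personal_sensitive: bool  # GDPR-protected personal data? yes/no
    copyright_risk: int       # 0 (none) .. 10 (clearly protected)
    business_secret: int      # 0 (public) .. 10 (core trade secret)

def may_experiment(check: UploadCheck,
                   copyright_limit: int = 3,
                   secret_limit: int = 3) -> bool:
    """Go ahead only when the answer is low, low, low."""
    return (not check.personal_sensitive
            and check.copyright_risk <= copyright_limit
            and check.business_secret <= secret_limit)

# The employee-handbook use case: safe, safe, safe.
handbook = UploadCheck(personal_sensitive=False, copyright_risk=1,
                       business_secret=0)
print(may_experiment(handbook))   # True: experiment freely

# A salary spreadsheet fails the first question outright.
salaries = UploadCheck(personal_sensitive=True, copyright_risk=0,
                       business_secret=6)
print(may_experiment(salaries))   # False: off limits without IT involvement
```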
SPEAKER_02:Okay. That was a nice example of a place where HR could start and experiment. So, to sum this up: we've been all over the place already, but I think there's been a clear common thread through the whole podcast so far, in terms of the link to the learning organization, the collaboration, creating the safe space for experimentation, and also the why. I think you have a pretty clear articulation of the why. Is there anything HR can do to support a more AI-friendly narrative, if that's the right way of putting it? Do they have a role to play there?
SPEAKER_01:Yeah, and I think HR has to depart a little bit from the human-centered language in which we are very good at talking about quality of life and quality of work, and adopt a little bit more of a business language, and be able to say what the value is. I think that's one of the weaknesses of the learning organization, culturally or historically speaking: it's been very good at creating happy employees, but it's been less good at assuring shareholders that long-term innovation will come from that. That's why, in some of the exercises that I run, I try to connect this hybrid intelligent workflow innovation directly to, for instance: what are the new lines in a business model canvas that emerge from this? So instead of just talking about the new workflows and their fulfillment, we also make the connection and say: that actually means you can do new business, that you can have new revenue streams. You say clearly: this is the core enabler for us to make money as a hybrid intelligent consultancy on top of this, or to have a hybrid intelligent advisory, or our engagement with our customer segments enables a new type of discussion with those segments. That becomes very tangible. Whereas if you just do automation, if you just do efficiency enhancements, it never shows up in a business model canvas, because it's just doing what you did before a little bit more efficiently; it doesn't create new lines. It's not innovation. So HR can really take on the task of becoming the verbalizer of how human skills and human mindsets translate into value.
SPEAKER_02:Well, I couldn't find a more beautiful way of ending this podcast with you today, Jacob. Do you have anything else you would like to add?
SPEAKER_01:Just this: try, experiment, and reach out to me and my colleagues, and we would love to collaborate. If you have use cases, we would really love to show and share them. It's part of us learning together. So that's an open invitation to collaborate.
SPEAKER_02:Very good. And we will make sure to add the link for getting in touch with you and for finding your different frameworks. So thank you so much, Jacob.
SPEAKER_01:Yeah, thank you.
SPEAKER_02:You've been listening to the Work in Progress podcast on people and culture. If you enjoyed this episode, please feel free to share it on social media. For more resources on people, culture, and working in a modern world, please visit getsession.com and check out our articles, guides, webinars, and more. Thanks for listening.