Episode notes
GPT-5.4's million-token context, Codex's agentic shift, Apple falling behind, and OpenClaw's surge in China — the debut episode of Digilize Lab.
Chapters
- 1:00 — Discussion on GPT 5.4's context window enhancement.
- 15:00 — Apple's upcoming foldable phone release.
- 30:00 — Exploration of OpenClaw's usage in China.
- 45:00 — Potential for AI to make everyone a founder.
Transcript
Note: this transcript is auto-generated and lightly edited for readability.
Welcome, welcome to the very first episode of Digitalize Lab by Digitalize Agency. With the three of us, we want to have a spot where we chat about the latest news and the most hot topics and also see what is hype and what not. Let's jump right into it in our very first episode. So what's the number one thing of the week? Number one, you're asking? I would say there's a couple of, obviously, there's a couple of good topics to talk about. But I'd say the biggest one, the most kind of known one, of course, new AI models always changes the game. I would jump in actually with GPT 5.4. They dropped this week? Exactly, I think last week, this week, I'm not too sure, but quite recently. And I would say biggest change is still the context window. So finally, they caught up with the competitors and brought in the one million context window, which just, in simple words, enables it to remember more and to stay in the context and in the conversation for longer periods. And that changes the game for not only people using it directly. How much context window had 5.3? Didn't they also release that four weeks ago? I'm not too sure, actually. The development cycle gets faster and faster. It's crazy. They have to. They have to. I mean, the competition is hella fast, to be honest. But no, you could be right. I have to look it up. But yeah, no, that's, I would say, in my opinion, the biggest change. But what's also really cool, it's now basically has like a native style of agentic capabilities and working. Pretty similar to the new Quen model, which you probably know more than I do about. But we're going to get into that later. But yeah, that's my biggest topic. What do you guys think about it? I mean, there are a lot of models released in the past month or something like that, I would say. And Codex was with a 5.3 and a 5.4. I see not even I can keep up with all of those new models. But OpenAI is pretty good. But like you said, the context window, I assumed that they would have focused on that way earlier. Same. I don't know. What do you think might be the bottleneck? Why didn't they do it earlier? You say compute? No, no, no, no, no, no. I don't think so. If OpenAI has something, then it's compute. It is probably just the model architecture. Because if you increase the context window, then you have to think a lot about the actual model architecture and how it remembers. Because the model gets worse and worse with the more context it has. That's why you always should clean out your context and start a new chat if you want to do something new. So that means that bottleneck to actually do a proper architecture that is available for you to scale up your models and give them more topics. Makes sense. Makes sense. I think to be honest, it could actually be that they just didn't see the reason until now. Because they weren't really focusing on the coding space. It was more of God's area. Codex. Exactly. And that's, I think, where context window really excels. But, no, I think that's the main topic. And now that they brought in Codex, I guess they kind of felt the pressure to catch up. Maybe just for someone who's not familiar, what is Codex? I think you know best, but... Codex is one of the many CLIs. CLI, what was it again? It's like a terminal tool. Like a tool you can run in a terminal, I would say. Yeah, what was the abbreviation? Command line interface. Sorry, that was too long for me. Yeah, it is basically one of those CLIs that is an agentic helper for you to program, to code, or basically to do anything. Because, I mean, Cloud Code is the biggest or the well-known one, I would say. But Codex and Cloud Code are in the architecture relatively similar. And they're basically just a multi-agent architecture that helps you to code. Yeah. Interesting. Super nice. Yeah. And especially in those agentic kind of workflows, the new models are perfectly made for them because they were trained on, yeah, unlimited basically. I don't know how many hours, but way too many hours of actual people using their PC, making screenshots, like basically what actual real-world examples would look like. And what they advertise it as is basically that they are natively agentic. And the best one in that sense is Gwen. Yeah, but... I really dislike that natively agentic. I don't know. That's the advertising at least. Yeah, but... Of course it's not in a way. Yeah, exactly. It's all math, but yeah, that's how they say it. And it just tells you how good the model can use tools. Yeah, exactly. That's it. Yeah, no, very true. But it feels at least... That is one of those buzzwords. It feels a bit more agentic. But yeah. I just got a ticker notification. It's a new topic, but Apple is set to release their foldable phone in the summer. Oh, officially? Just got the ticker in. Yes. It's leaked. It's leaked. Okay. It's about as big as the iPad mini. Wait, wait, wait. First of all, which way can I fold it? That way or that way? Of course, the designs aren't leaked. But what I can tell you is it adds an iPad-style multitasking, including side-by-side apps and redesigning layouts for key first-party apps. And it's about to cost over 2K. Yeah, I expect it. I mean, to be honest, that makes a lot of sense, looking back at how they improved and how they focused iPadOS in the past years and how they kind of improved it to a way that it nearly becomes a laptop. Yeah, but it's interesting to see how Apple is still very lenient to their lagger strategy of being always, you know, not the first mover necessarily, but also with AI. They wait until it's ripe. I mean, where's Apple? Like, Tim Cook, where's the fucking update, man? I would like to have some useful AI on my Mac by now. Yeah, Apple Intelligence, I think, is an episode for itself. We need to get our listeners into, what are you talking about? Which update? Yeah, I mean, Apple is now, it was supposed, I think, to launch on the, or be revealed on the 3rd of March, so like a week ago. Wasn't it supposed to launch two years ago, like one year ago? Yeah, I mean, then eventually they released Apple Intelligence. But Apple Intelligence is not usable, nor is it any useful. You can rewrite text. Yeah, but how? Show me on the Mac how you do it right now. Yeah, you have to right click. And like, if you're a first time user, not usable, not usable, non-existing, non-usable. So, yeah. Interesting little side ticker there. It is about time that they, I mean, I don't even want to know how much billions they already put into development of the new Siri and everything. And then you look at Google, for example, which even released models that can do agentic tasks on an iPhone already. And Apple can do it. Super, super interesting. But when you just brought up the spending, there is a really nice chart. I need to check if I can find it, and then we will put it in. The spending of the tech giants into AI is crazy. And the only tech giant that doesn't spend hundreds of millions of dollars is Apple. Okay, so maybe... Apple, you have the line, Apple is chilling down there with some, I don't know, 10, 20, Yeah, maybe we could get to our off-site people here, maybe we could get a rough number there. Yeah. It would be really interesting to know. I think something about 80 million dollars they are using to the R&D AI. Nah, it has to be more. No, no, no, no. That is the crazy part. Because they are waiting, like we just said, they are not the first movers. But they are just waiting till Google open AI or someone... Now makes you interested in a number, yeah. But I mean, they just weren't... To be honest, that's the usual way. Apple always did it. They wait until they, you see, until it's ripe or not. And then they invest into it. But it could actually be that maybe in this sense, in AI, in the space, they may be waiting too long. Either they all outsmarted us and already predicted the moves before we knew where the game is heading, or they are actually laggers. Okay, so the sideline people are telling me it's 13 billion around that Apple invested until now. A little difference to the 8 million. But the number is huge. The difference is huge. I think Meta, I don't know how many companies they already bought alone in 2026. I mean, we can... You can just tell us. Microsoft, Alphabet, and Meta combined have 650 billion. 650 billion. 650 billion. That was Meta, Amazon, and Alphabet. Wow. That's crazy. Just R&D. And Microsoft, okay. And Microsoft, just R&D or also... Combined on AI infrastructure. Okay, so there's also into data centers. But I also read an article, I think even you sent me that, that Apple servers, like AI servers are running on, like they invested, but they're running on 10% usage right now. Yeah, no, they don't even, they don't even, they have, they have empty, they don't even have the servers. They have empty data centers. Empty racks. Empty racks, yeah. Which are not being used, yes, because Apple Intelligent is not being used. And now they're in talks with Google to use their infrastructure. I mean, they're also using Google's model, right? That's also why they said it's delayed, because they don't have the capacity to, like, roll it out full time and in a useful way. I mean, it's billions of users, yeah. Yeah, but it's very interesting. I would love to be, to be part of, like, hearing what they're talking about, because it must be huge numbers they're throwing around. I mean, I did read that they're using Google's model now. They bought into a custom model. Yeah, and that includes their, their, their infrastructure. Okay. Their data centers and so on. Makes sense in a way, yeah. It's also a good, good deal for Google, I would say. I think the best deal for Google. Probably, but at the same time, it's, I mean, maybe a bit embarrassing for, for Apple that they waited so long to admit that we can't do it ourselves. And, uh, now we need to buy into Google. No one remembers that in 10 years. You're probably right. Or everyone will just do this. Or that. Yeah. Or maybe it's a new, you know, Njoka. Eh, Njoka. Exactly. My bad. Yeah. I mean, their, their chip infrastructure and so on is insane. Yeah. That's for sure. That is an interesting topic. But let's move to, um, let's say the most interesting tools of the week or let's say this month that we kind of, uh, found and all that. I think Sebastian has, he has always, always a good, uh, good source for tools. So, um, yeah, let's, let's start with a, with a nerdy one. Oh, go ahead. Um, Andrew Kappathy, I think he is pronounced. From whom? The, the godfather of AI for me. Ah, yes, yes, yes, yes, yes, yes, yes, yes, yes. And then OpenAI founder, co-founder. Yeah. He released, I think it was last week, Friday or something, a repo. It's not actually a tool, but it's a repo. Repository, yeah. Um, called auto search or no auto research, auto search, something like that. Um, and it's basically an autonomous machine learning researcher that researches new techniques on how to make your model more efficient and trains it overnight. So basically really stupid. Wow. You, you, you give the model the end goal that you want, the parameter that it can tweak, the parameter that it needs to increase. Send it off to a compute, whatever, GPU or a Mac mini. And it trains it on them. And overnight you see it trains it and it goes every time in a loop. If the model got better, nice. We save it in the repo. So it's basically, it creates. It's like A-B testing for. It creates its own code. If it is better in the evaluation, nice. Next loop, even more improvement. If not, discard other improvement. Yeah, that is interesting. And AI researchers job. I mean, it is still, and people are paying a lot of money for AI researchers, but it's open source. It is open source. The repo is out there. I already set it up. I can show you the results later. It is awesome. But it's interesting to see because, you know, most of the people were like, okay, if you have that type of knowledge, you know, you would know that the AI won't threaten you. But it's interesting to see that even these, these higher tier or whatever, those are already getting threatened. Yeah. Of fine-tuning a model. So, yeah, and then there's a question, I mean, if Andrew Kapathy does that, and he is one of the most renowned machine learning, fine-tuner, whatnot, if he already says, okay, let's make a model to either save time, make it more efficient, do something else while that, but it's also. But it always needs like a base model, right? You can't like just spawn a new model out of nowhere. No, you need a base model and it starts here and you want to increase the output about 50%. So just to simplify it for, for the listeners, just like basically what you mean is you take for example, GPT, whatever, some GPT model and you want to adapt it to your personal kind of use case. So you give it a lot of data on top of it and it kind of creates a new model based on top of GPT, but perfectly. Exactly. The fun part with that repo though is it doesn't need to be a model. It doesn't need to be an LLM. Okay. You can, you can basically train every algorithm that you want. Also, even like just Python based. Exactly. Wow. And I think it's more about the research question, right? Yeah. Basically, it is really stupid. It dissects, you give it a goal in a markdown file. That is the goal it needs to do. But then it fine tunes the model, which is used to achieve. Impressive. Every time in that loop and you can basically improve everything. Definitely the future. Exactly. So you can do A-B testing on a website and train the model basically on the user conversion rate or email opening rate. You have to show me that later. Super nice. But it's a perfect segue also to my next tool, I would say, because it's also automated and also sent it to you guys, I think. It's basically a way to kind of prove your prompt that you use for an AI model. Basically what we used recently, but in a way nicer way. What is it called? Oh, I have to look it up again, but we can put it in the podcast notes. But it basically, I just came up to my mind. That's why. But it basically, you can check if your prompt is jailbreak proof and it does it all automatically. What we recently did for one of our clients and just way, way more automated. Okay, crazy. And it really checks it in detail. It doesn't need a lot of tokens to do it. And yeah, it gives you an assessment and it can also improve the prompt as well. So yeah, if you want to have like a production ready prompt for AI systems, I guess this is the way to go or at least the best way to go. Nothing is safe when it comes to AI prompts. You can always come through, at least nowadays standards. I just read on the way up here, the stairs, that Perplexity, they launched their computer when, two days ago, three days ago? Three days ago. And they use Cloud Code under the hood and Cloud Code needs an Entropic API key. So what did they do? They put that into the session in order that the Cloud Code can spawn sub-agents that are using that API code. Makes sense. That means someone on Twitter already jailbroke it. Found that API code. Oh shit. And then he thought, okay, they must need to restrict it on IP basis and stuff like that. So it's not restricted at all. Because that is a sandbox in Perplexity computer. Don't tell me it's not. He just plugged the API key into his local Cloud Code instance. And it worked. Opus 4.6. Unlimited usage. Unlimited. Given. Two hours ago. That's crazy. Two hours ago. And it's not even released, right? It's just... It's just waitlist. Only for the max plans. Now we know why. Yeah, but I mean that also shows that even the biggest one, Perplexity, is one of the main players up there. Yeah, it is. We're living in the Wild West again. It feels a bit like when the internet was early. I mean, not that we were around by then. Yeah, no. I mean, everyone is trying to figure it out in a way. And even the bigger ones sometimes, they make mistakes as well. True. Yeah, I think that's to my next point is actually what I think was quite overlooked in Europe and the US maybe is just how insane the usage of OpenClaw in China is. Yeah. Compared to the EU and the US actually. I did some additional research. Guess how much traffic. So the overall use compared to the US. I think first of all you have to quickly explain OpenClaw. Okay, yeah. To our users who don't have heard, like living under the rock and have not heard about OpenClaw yet. You could have also heard about Molds. What was it called before? You tell me. Molds. CloudBots. CloudBots. CloudBots. And now it's OpenClaw since OpenAI bought them. Yes. Go ahead. Sorry. It's all the same, but they had new issues, I guess. No. Go ahead. Tell our viewers or listeners what it is. Quickly, it's also basically an agent orchestrator, which doesn't do much more than take data, send stuff to the API and get a response back. But Pete Steinberg did it in a nice way with a heartbeat and some soul. So that OpenClaw feels actually like the next step in direction. Maybe to summarize, it could be the first truly, truly agentic system. Unless it feels a lot like it, yeah. I mean, much more what you've seen before. So it's much more agentic than everything else. Yeah, but I saw another tweet about exactly that. And it's super fun because ground jobs exist since I don't know when. Yeah, no. And in the background, it is still not AGI. AGI is actually quite easy once it's put together. It is super easy and no one cared about the ground jobs. Yeah, but now it's the same thing for when AI got released 2022 or something, when it actually picked up. Everyone thought it's just magic. But then slowly, slowly, when you understood how it works in the background and it's just math and one word is predicting the next one, it's not as much magic anymore, you know. And I mean, what was the first AI released in 1960-something, Eliza? 1967, I think they started. Yeah, roughly around that time. But it also felt like it was slower and way, way less, you know, like, I don't know. It wasn't as optimized, but it still was an AI. It was based on the same mathematical principles and it worked on a PC that was bigger than this room. Yeah, their bottleneck was the compute. Exactly. And that is solved. That's where we can scale very nicely. Yeah, sorry, proceed. Anyways, I just want to use it in China. Yes. So, to take that away, China suppressed the US in total usage of OpenClaw. I can imagine. By far. But then I did some additional research. What percentage does all tokens used account on OpenRouter, like Chinese models? Oh, so basically the models... What is the percentage of usage of Chinese models on OpenRouter? Only on OpenRouter? Only on OpenRouter. Wow. And that's kind of the metric. Chinese models? Yeah. So like, Gwen, etc. Yes. Gwen, Kimi, DeepSeek, Minimax. Oh, DeepSeek is quite big. Minimax is also quite big. Yeah, I saw DeepSeek only on Ulama. It has 72 million downloads. Yeah. Oh. Yeah, so that's all different episodes. Percentage. Let's go. I say at least 60 or 70%. I'd give it a 55, 50. 61%. Yeah, 61%. Late February, last update. 61%. That is insane. And nobody's talking about it. That is something to think about. That's true. Nobody is talking about everyone's just cloud there, OpenEye there, Anthropic, whatever. That's the Western bubble, I guess. Whatever, but nobody's talking, you know, they're smart again. They're very, very smart. It's kind of exciting. To me, that's really interesting when you look at all of the investments that are being done in the US and all of the different AI models and also the open source models, especially the Chinese ones. They're getting crazy good compared to what the big tech, closed source, you know, open AI and traffic, what they can deliver. And I think that is also the reason why they have 61%. I mean, DeepSeek is open source, open weight. Probably. The others are open source, open weight. And that's how they also get the data and can improve further. Yes. Exactly. But also, everyone can just download that. And I mean, you see the tech community, they try to play Doom on everything. Yeah, sure. So probably they also try to get an AI model onto everything. Yeah, and that's also the... So if you're open source, you have more users. And they're just so incredibly fast. Cheap. And cheap, yeah. No, but I mean the company as such. Are they being subsidized? I didn't look it up if the... Because obviously the government in China does this smartly. I'm sure they are, but I don't have a number on this. Yeah, no worries. Currently, Shenzhen has subsidies up to 290,000 euros for startups that are using OpenClaw. Okay, that's a way to boost usage as well, yeah. And the average range... So they just recently launched this on 13 different major tech companies in China, launches. So Tencent and all the big ones. Yeah, okay. So also WeChat, et cetera. Alibaba, Bytec, Xiaomi, whatever. But wait, did we have now the number of how many... 61%. No, that was the Chinese models on OpenRouter. How many people use OpenClaw in China? Oh, they had within the 24 hours. I just have the Tencent Cloud had 100,000 users alone. Just on OpenCloud. Only on Tencent. And there was such a frenzy about it that around a thousand people gathered in front of the Tencent HQ and waited to get their CloudBots set up. Oh, actually. Their Chinese version of it, yeah. Wow. And the interesting thing is the age range was between nine years old and 70 plus. So everything in between was there. In Europe, you first of all have to find someone that is above, I would say, 60, 70 that would be... That would know about CloudBot and even be interested in it. And resellers... It's too complicated still here to use. Currently, yeah, you have to pay three to 40 euros to get it set up by a reseller. So they're very smart about it. So you go to the store and they set it up for you. So for every people... That's a way to break that age barrier, to be honest. Yes, yes. We should... Yeah. And one of the main reasons for all the people showing up that were interviewed was the fear of being left out or left behind. Fair enough. And that's super interesting because I do feel like there's a lot of people who have that fear here. But they don't... But they're not as proactive as over there. No, definitely not. But I mean, is it proactive? They got an awesome... In a way? ...offer from the Chinese government to... Or not basically... No, but they... Not an offer, but... No, but do you see something here where companies set up OpenClaw for you? No, but... So they... I feel like there are a lot of people... I agree with you on that one, that here are a lot of people that are maybe having the fear of missing out. But the problem is that here's not that offering here yet, I would say. Yeah, but there's not even that consciousness, in my opinion. It's a consciousness, but I guess I think it's mainly about the government just acting proactively. They started doing those subsidies. They, you know, they encouraged the companies to kind of bring it to the market as quickly as possible. 100%, but that's also... And the companies are great in that in China. I mean, look at the electric car market. There's also a crazy history. Sometimes maybe too fast if you then look at some outcomes, but that's also a different episode. True. But yeah. Yeah, I just thought that was very interesting to see. It is. And I'm very eager to see how that plays out in the near future. Yeah, we got to keep an eye on that one. For real, for real, for real, for real. Maybe we can find out how the comparison is actually two different continents for next episode. Yeah. Let's keep that in mind. Yeah. Surely. Yeah, no, but really interesting, really interesting stuff. The one thing I really do want to get into, because I'm personally interested in it, and I just learned about it yesterday, is still Quen. He knows way more than I do, as I said, but I learned what I know is that Quen released the Quen 3.5, so like the next kind of generation of their models. Also a genetic model. That was my thing, yeah, because they at least advertise it again as an agentic native model. I tested it yesterday and today, so not for a long time, but what I saw is it's not the best at tool calls, at least not in small quantitization, so maybe you still do, because they advertise it that it runs on really like even phones, so like that a model can run on really, really, yeah, bad compute or really low quality compute, but I don't know what the performance is then. I mean, first of all, it was really interesting how to see how those Chinese models, model makers, then lay it out again. They're not trying to compete with OpenAI or Anthropic and go for the main model, but rather have the shotgun approach and release, I think they released 12 or 15 different models. I don't even have an overview. That's again, extremely smart, and I feel like they're so strategic about everything. They gain way more market share from that shotgun approach because they have... I mean, look at the 61% on open routers. Yeah, they have now a model that can be used on a phone. They have a model that is used on a MacBook. And Google is trying to tap into that as well. And they're using it already in China. Exactly, and it's actually working. And I mean, you saw with OpenClaw the adoption rate or with OpenRouter, how many Chinese models actually get used here or in general. And that is how they can actually nicely gain more market share without competing with those big labs because Anthropic and OpenAI have 10 times more money than... Yeah, but I don't think they're even competing with them. I just had a thought about it. If you look at OpenAI and Anthropic, they didn't release any models in that sense that are specialized for mobile phones. No, but that is because they are going for the big star HEI. Exactly, but why isn't it, you know, why is it that Google, for example, releases Gamma, like the family of Gamma models, which is, by the way, small models. I think 9 billion parameters. Exactly, that are still smart enough to run tools and actions on your phone, for example. So now there's already, if you want to try, there's an app in the App Store and Play Store called Google Agent Garden, I think. And you can literally run it on your iPhone and set it up to, yeah, do tool calls for you. You can talk to it and say, turn on my flashlight or open this and this app and do this and this in it. And I played around with it and it actually works. And that's, of course, very simple. And it's from Google, not from Apple. And it works on iPhones. But basically, that's the beginning of what the new Siri should feel like, that you can talk to it and it can do multi-step tasks actually on your phone. And Google achieved that before actually Apple even tried to achieve it in a way. And that's, I don't even, I would call it embarrassing, to be honest. Tim, we're coming for you if you don't deliver in summer. I believe so. Mark our words. No, it's actually really interesting. That's the bigger ones that are traffic. Do you know, do you have an idea why they're going for AGI? They're going for AGI and because the first company that achieves AGI wins. But do you think they would even release the news? I don't think so. They would keep it secretly? I mean, now we're going more into the ethics. That's politics as well. Politics is even more technical. But I think that they, first of all, wouldn't release it out of no... Or it depends who invents AGI. If it's entropic, I have a better feeling that they are communicating that openly. If it's open AI, hmm, you know how they are doing for the past few years. But in theory, whoever achieves AGI first can use AGI to make as much money as possible without even anyone else in the world knowing. Because what they say, it has such a big knowledge gap between the humans and then the compute actually or the LLM. So, of course, they are actually going through money like OpenAI in order to achieve that AGI model. But then there are other model makers like then Gwen that are thinking about more strategic ways to gain the market and then the shotgun approach. But there is also the Mistral and all of that. That's really interesting because I recently read a book about it. It's called, I think, Humans in the World of AI or something. But basically, I have to... It's at home. But next episode I can talk about it more. But no, it was also about a small company that figured out how to build an AGI and then it didn't tell anyone. But they used it to basically build a lot of different business streams and like a movie company, which they animated. You told me about this book before. Exactly. And they just, you know, many different business streams, but they didn't tell anyone. And they really hit their paths and they actually on purpose made mistakes in between so that it doesn't look like someone got AGI and made a shit ton of money with it before they even... Yeah, before it came out that it was basically AGI and then it all broke down. But I mean, it is the same if you have a 100% guaranteed trick on how to win against your friend in poker. Then it's a money-making machine. Of course, you don't tell your friend because then he knows and then he spreads it and blip, blip, blip, blip. True, but I would argue that AGI is a little bit bigger than that. Yeah, of course. And I think also the government is... I think it's not just going to be a money-making scheme in that sense. I think the government, if I was the government, I would have huge interest in taking control of that 100%. If it's OpenAI or like a company that would tell the government and then, of course, it would be a regular, like, yeah, different story. I think it's also not a coincidence that it's not a drama with OpenAI and Anthropic and then they... The Pentagon. Yes, the Pentagon banning Anthropic because they said, okay, we don't want our tools to be used for mass surveillance or kill decisions. I mean, it was the automated... I mean, that's a whole other politics topic, but there was... The border for them was automated decisions on weapons, so basically... And mass surveillance, yeah. Exactly. Those are where the two things they don't... They actually said themselves that they think the technology is not ready yet to automatically and autonomously decide if a weapon gets fired or not. And I personally agree on that because, yeah, but of course, then I also don't know which models because the Clouds model that the Pentagon used is a whole different league, also on compute power, than the ones that we get that hallucinate. So, yeah, different topic, but also a big one, definitely. Quickly, also for the people that don't know what HCI is, it's basically in the name... General Artificial Intelligence. Basically, what it means, intelligence that is smarter than us humans. Yeah, and that can also learn kind of for itself and improve itself and therefore is kind of unlimited in the... Kind of an artificial god, so to speak. In a way, yeah, but it needs to learn, so it still needs the data from us from the past whatever, 100 years, 50, whatever, since we're probably, like, you know, getting, gathering data. So, yeah, it's still... True, but that... I would also argue that, like, once it's actually... I mean, it's going to be so much smarter, like, HCI being so much smarter, and it's kind of, like, out of our league, you know, like, what we think makes sense probably can be just twisted by it. It could be. But should we say we're doing a separate episode about HCI, because that is a... 100%. I think there were a lot of topics in here that we kind of started talking about. But something I wanted to touch on, because it's kind of going in the way with Anthropik and so on, do you guys remember that maybe you saw the news with Switzerland ruling out Palantir, the use of Palantir? Germany as well, no? I saw something about on government, like a country-wide level? The Bundeslevel. What is it called in English? I think it's like a... The federal. Federal level, yeah. Yeah, on the federal level, they're planning to ban it. Let's see how the lobbying goes, but, yeah. Interesting enough, they yesterday announced that they're having a partnership now with NVIDIA, and they're going to have the full Palantir stack, so Gotham and all the everything for governments. I think you have to quickly introduce Palantir or what they do. Well, most people say it's a data company. And it's core, it is. Yeah, I would say it's much more than that. Can I give a better... I mean, it's quite broad, so... I would say they use data in a defense space, like they're the biggest player mainly in defense and governmental kind of space. And commercial, yeah. So, yeah, so they're also, yeah, some like them, some don't. They're pretty controversial. But their main kind of products focus on, yeah, data usage for governments and that they can have a better overview if it comes to defense or if it comes to just government. Although their revenue streams are now, the commercial side is even bigger than the governmental side. Oh, is it now? Yeah. Oh, interesting. Okay. Yeah. Yeah, but I heard that also Germany, I think, wants to kind of ban the usage, but it's all not in the books yet. Yeah, but that's exactly what they launched. They launched with NVIDIA a Blackwell Ultra, like their infrastructure from NVIDIA, and the full Palantir stock for governments so that you can... Nothing leaves your system. Also... So like a private kind of... Yes. So exactly what they're scared of. They want to, yeah, I don't know if you want to believe it or not. That's what I'm just saying. Like, I don't think Palantir, because they need the data to... They claim it's a sovereign AI operating system. Okay. And no data leads to infrastructure, and they're clearly aiming at governments and defense. Of course. Especially... Especially European ones, probably. So I just thought it was quite interesting. Interesting topic. Yeah, let's see what the future brings, if it's actually going to be, you know, completely private. Because, as I said, Palantir is quite controversial. But, yeah. Also an interesting topic. Well, do you guys have anything else for this episode? Or do we want to pack the rest in the coming future? Really interesting episodes. I do have one little hot take. Go ahead. The hot take is also one of our categories. We try to bring each episode to you. AI is going to make everyone a founder. Could. That's the hot take. That's the whole... That's it? I think it makes it easier. I think you still have to want it. Like, not everyone wants to be a founder. Not everyone wants to be in that, you know, area of... Yeah. Because it still works. But it makes it a lot easier. I think the old idea of having, you know, like starting a business, it was kind of like about the capital and the team. Of course, it's also the idea. But let's put that aside. And now it's the new barrier. It's more like the quality of the idea. The execution. Yeah, the execution, but also the problem you're solving. I think the difference nowadays is the most important point is you need to have an idea. You need to have an idea that is market ready or you can definitely validate it also with AI. But the steps from idea to actual product or whatever you want to achieve is way shorter. Yeah. It's way easy. You can learn if you want to. You can learn nearly everything that you want to learn. You can avoid a lot of mistakes that previous people did by just informing yourself using AI. So it's not like you... Yeah, you know, it's not... I think there are two big parts to that. First of all, AI made it even easier to gather information. Oh, surely. That is one part. And then also the... At least for me, the biggest part is also you can iterate way faster. You can build one POC, throw it away tomorrow and build three more in the next week. Yeah. And also other technologies improve that even further. If you want a physical product, you get a 3D printer. It is. You can print your first MVP or POC or whatever and it's done in an hour or two. And then you see, oh, this is wrong. If you want to build something, right now is the best time. It was never easier. I recently had that thought when I was on holiday that in humanity's lifetime, basically, in I don't know how many millions of years, there's never been a better kind of time to live if you want to build something, if you want to start something, if you want to do your own thing. Yeah, but that's... Because you can. Also, what I was trying to achieve with my hot take is maybe we're in the shift of actually, you know, not being employed anymore. Yeah. But in five years or so, that it's actually the norm to not be employed and having your own business. Or some type of free, maybe not your own business, but some... I don't think it's... I think we will see a big change in that kind of sense with AI, how that is going to be looking in the future. I think no one knows, but interesting times. Yeah, I think... If not the biggest question that we just asked in the podcast, so I think that's also a different... I think also the bottleneck now shifts from can I build it to is it worth building? Or is the quality enough? It is. Everyone can catch up so quick if the quality is not. It is also important to sometimes, like I see a lot of companies that start out in stealth mode, which basically means that they're not sharing their idea and what they're building until it's really ready to go to market, because otherwise different people, they hear it, they build it, it's done. And then you're done. You can start a new idea. So yeah, even friends recently, we were at the company, they started out in stealth as well, and now they came out and I think it's one of the... I have no idea why I never thought of it. Yeah, shout out. One of the biggest ideas we can actually. They're called Nano and they kind of want to build a digital native back office and really they have everything in there, like from starting a company to bookkeeping to legal issues and I think even more all in one platform. And that whole space didn't get renewed in ages. And it was about time. Probably never. It was about time to renew Xact Online all of these tools that no one really wants to use and not even the legal or bookkeepers that work there, they don't even want to use it. So yeah, about time. Congrats to them. I hope it works out, but it seems like they had a good launch and yeah. For sure. That's my shout out. Nice. But I think that was with a hot take. We shall round it up. Yeah. That wraps it up quite well. We hope you guys enjoyed our first kind of trial episode of our podcast, Digilize Lab. Of course, we aim to improve Korea in the future, obviously. And we have a lot of interesting topics, I think, to go for. Yes. So long story short, be patient and give us some room to grow, I guess. Of course. And feel free to give feedback if you'd like. And keep building. Also a good one. So long story short, be patient and keep building. Also a good one. So long story short, be patient and keep building. So long story short, be patient and keep building. So long story short, be patient and keep building. So long story short, be patient and keep building. So long story short, be patient and keep building. So long story short, be patient and keep building. So long story short, be patient and keep building. So long story short, be patient and keep building. So long story short, be patient and keep building. So long story short, be patient and keep building. So long story short, be patient and keep building.