Fred: Do you want me over here?

Marcel: Yeah, that's perfect. Nice to meet you, Fred.

Fred: Very nice to meet you.

Marcel: Thank you for being here.

Fred: Thanks for having me.

Marcel: Henrik, can you start by giving us a very short rundown: how do you and Fred know each other?

Henrik: So Fred and I have been friends and worked together briefly back in 2006-7. I was working on a documentary about copyright. Fred was working at a non-profit called Creative Commons, the licensing system that underpins things like Wikipedia. And so we were in this group of people that were very idealistic about the internet, and have stayed friends ever since.

Marcel: That's perfect. And you've been hanging around in Copenhagen for a couple of days?

Fred: Yeah, I got here a week ago, I guess. And then I went to Sweden for a bit, but Henrik's given me an incredible tour. We went mushroom hunting on Friday, and that was amazing. And then I'm going to... In which forest was that?

Henrik: Tisvilde.

Fred: Tisvilde, yeah. It was wonderful. We got lots of chanterelles, but what was the Danish name for them? Kantareller. Kantareller, yeah. They were amazing. We cooked them that night. Are they good? Yeah, they were great.

Marcel: Ah, okay, perfect. And do you sleep on Henrik's sofa?

Fred: No, he's got an incredible guest room. It's been a lovely stay.

Henrik: As Fred says here, he's in Copenhagen to visit me. We're old friends. We've been out gathering mushrooms, and he's staying in the guest room in our house.

Marcel: Enough with the hygge and Denmark. You've worked with artificial intelligence for many years, first at Kickstarter, then at Y Combinator, and we'll get back to that in a minute. What have you been trying to accomplish with AI over your career?

Fred: So I took a really interesting class in grad school taught by somebody named Dan Shiffman. He's a great teacher, and I think the classes he taught were called Nature of Code and Programming A to Z. He showed us how to build a spam filter, which sounds really boring, but it's a really interesting way to analyze large amounts of text.

Marcel: So the spam filters we have in our Gmail, which means I don't have to look through hundreds of emails that I don't want to see. Exactly.

Fred: It's a powerful mathematical algorithm, going back to probability theory, that can analyze an email really quickly and say: it's more likely than not that this email is spam. So I learned how to do some of that math. And a couple of years later I found myself at Kickstarter. I was the second employee there, just trying to make myself useful at a startup: take on interesting projects, and do everything from looking at our analytics, to answering queries from the database, to building a little bit of code. And I picked up a, in America we call it a nights-and-weekends project. I don't know if that's a phrase here. It probably is. No, we take the weekends off here.

Henrik: I know. Yeah, we don't really have that here.

Marcel: We have no such concept. Yes.

Henrik: Fred studied at university and wrote code. Among other things, he took a course on how spam filters work and how to analyze text with algorithms and mathematics. He basically learned to write programs that could figure out whether an email was legitimate or not.
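The spam-filter math Fred refers to is classic naive Bayes text classification. A minimal sketch in Python, with made-up training data (an illustration of the idea, not Fred's actual code):

```python
from collections import Counter
import math

class NaiveBayesSpamFilter:
    """Toy naive Bayes spam filter: just the core probability idea."""

    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.message_counts = {"spam": 0, "ham": 0}

    def train(self, text, label):
        self.message_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def is_spam(self, text):
        # Bayes' rule with a naive word-independence assumption and
        # add-one smoothing; compare log-scores to avoid underflow.
        total = sum(self.message_counts.values())
        scores = {}
        for label in ("spam", "ham"):
            vocab = len(self.word_counts[label]) + 1
            n_words = sum(self.word_counts[label].values())
            score = math.log(self.message_counts[label] / total)
            for word in text.lower().split():
                score += math.log((self.word_counts[label][word] + 1) / (n_words + vocab))
            scores[label] = score
        return scores["spam"] > scores["ham"]

spam_filter = NaiveBayesSpamFilter()
spam_filter.train("win free money now", "spam")
spam_filter.train("meeting notes attached", "ham")
print(spam_filter.is_spam("free money"))  # True: "more likely than not spam"
```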

Fred: I took on a nights-and-weekends project at Kickstarter, where I was like: we're getting all these incoming projects, and most of them are good. In the early days we were trying to decide which ones were a Kickstarter project and which ones weren't, and we took that very seriously. It had to be a creative project. You couldn't raise money for your vacation; it had to be for a documentary, say. So people would come up with funny ways to get around that.

Henrik: Alongside his job, Fred had a hobby project where he tried to build an artificial intelligence that could scan the many incoming projects. Kickstarter had suddenly become big and very popular, and they needed a way to automatically sort out which submissions were likely genuine Kickstarter projects and which should be screened out.

Fred: So I had a friend who worked at Y Combinator, and he said: well, we're interested in doing something similar with an algorithm, so that it looks at incoming applications to Y Combinator. Would you be interested in working on that? I was like, yeah. I was always interested in living in California, so I took them up on it and moved out there. That was in 2016. So I worked a little bit on that, but then I ended up being part of the whole Y Combinator process, and that's how I met Sam Altman.

Marcel: Yeah, that was just my cue. Let's talk about Sam Altman. Sure. Your first meeting with Sam Altman, what was that like?

Fred: You know, he interviewed me when I was applying for the job at YC. And I remember we were in a tiny room and he was rolling around on one of those hoverboards. Do you remember those? A hoverboard? Not an actual hoverboard, the one with the wheels. It was just very funny to be doing an interview while he was going around the room. I was impressed, and we stayed friendly while I worked there. And I saw him do what Henrik was talking about, which is get people to think bigger. Because that's the risk with a startup: if you don't think big enough, then suddenly you're just talking about sleeping on couches instead of the hotel industry, which is a lot bigger than sleeping on couches. So my first impression was that his skill was getting people to think creatively about how big a vision could be, and coaxing them to that point.

Henrik: So Fred's first meeting with Sam Altman was in 2016, at his job interview, where Sam Altman showed up on one of those electric boards, hoverboards, and rolled around the room while they did the interview. What struck Fred was that he is incredibly good at thinking big.

Marcel: Try to describe him as a person. What kind of person is he?

Fred: You could tell he was interested in kind of the abstract big ideas and maybe not as interested in the kind of day-to-day.

Marcel: A little bit like Steve Jobs, kind of. Possibly.

Fred: I've never met Steve Jobs. I've heard about him; I've read a book or two. But yeah, I think he was thinking as big as it could be. And I think AI was particularly interesting for that reason, because around that time it looked like, within a couple of years, something really big was within reach with AI. I think that's what attracted him, and it wasn't surprising for me to hear that he was shifting to OpenAI after YC.

Marcel: I'd say Sam is up there with Mark Zuckerberg, Elon Musk, Sundar Pichai from Google, the biggest tech bosses in the world. If you had to compare him to, say, Zuckerberg, how do they differ?

Fred: I mean, I've only met Zuckerberg once, and he was perfectly nice to me. But for whatever reason, I don't think we got along as well as I got along with Sam. And Sam seems acutely interested in listening to the concerns people have. If your audience is interested in reading more, he did a great interview with Ezra Klein of the New York Times, which I think is worth listening to or reading; you can see him and Ezra going back and forth on some of these big-picture concerns. And he had some of those comments in Congress: OK, well, maybe you're right, maybe this is concerning. I think those are genuine. I also think it's in OpenAI's interest to ask some of those questions now that they've ended up dominant. So he's also very smart and knows how to play chess really well, both literally and figuratively.

Henrik: Fred says that Sam Altman perhaps differs most from the other tech bosses in that he actually cares about what people want and what people think. He recently appeared at a congressional hearing on the threat from artificial intelligence and said that he did see clear problems.

Henrik: OpenAI, the folks behind ChatGPT, kicked off as a non-profit aiming to make friendly AI that helps everyone. They didn't just want Google and Facebook to have all the fun with AI.

Marcel: And later they shifted gears and went for what they call for-profit, or capped-profit, which means there's a cap on how much cash the investors can make, and anything extra goes back to the organization's original purpose. And OpenAI says they made the switch because they needed it for server capacity, expensive research, stuff like that, to train those large language models. Doesn't that just prove that at the end of the day, no matter how noble the cause, it's all about the money?

Fred: I think that's putting too fine a point on it. If you look at how much it actually cost to get OpenAI to where they are right now, it's actually a little bit on the smaller side. It's billions of dollars, probably; I think they took over $10 billion from Microsoft. But it's a little unclear where it's all going to shake out in terms of actual profit. Something Sam talks a lot about that I'm skeptical of is the idea that this will just obviously generate wealth in the future. I'm not sure about that. I think it's an interesting tool, I've been enjoying using it, and I think it portends really interesting things for the future of technology. But the idea that it's just going to generate outsized profits and money, the jury is still out on that one for me. There's a lot of interest and a lot of belief that it will generate huge amounts of money, but we'll see.

Henrik: OpenAI started as a non-profit, a company that wasn't allowed to make a profit. When it became clear to them how expensive it was to train these models, to attract the right researchers and employees, and to get Microsoft to knock on the door with a lot of money, OpenAI, which had been financed by Elon Musk among others, changed into a for-profit, but with a profit cap. And Fred asks himself whether the ideas Sam Altman has about how extremely profitable AI will be, and how transformative for society, will actually come true.
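The capped-profit mechanics Marcel and Henrik describe come down to a little arithmetic. The 100x multiple below is the cap reported for OpenAI's earliest investors; treat all the numbers as purely illustrative:

```python
def distribute(returns: float, invested: float, cap_multiple: float = 100.0):
    """Split returns between an investor and the nonprofit under a profit cap."""
    cap = invested * cap_multiple
    to_investor = min(returns, cap)
    to_nonprofit = max(returns - cap, 0.0)
    return to_investor, to_nonprofit

# A $10M investment capped at 100x: anything beyond $1B flows back
# to the organization's original purpose.
print(distribute(returns=5e9, invested=1e7))  # (1e9, 4e9)
```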

Marcel: In Silicon Valley, they talk about the risk that we could all be finished because of AI. They contemplate the end of humanity, and they call it P-Doom. The probability of doom. Exactly. Fred, as a person who, I assume, dines and chats with people in Silicon Valley: can you describe the vibe right now? Are people anxious, optimistic? How is it?

Fred: Well, I think there's a risk for the folks who are saying the sky is falling, because it's kind of hard to prove one way or another. If it's not happening now, we have to project into the future and figure it out. And there's a lot of incentive for people to say: hey, you guys aren't thinking enough about this, pay attention to me, because then they get to ride on the coattails of the AI conversation just by saying the opposite. So you have to figure out: OK, how likely is this? You have to look at the technology and at how people are actually using it. And right now, for the better, everyone is using AI with a human in the loop, and I think that's a good phrase for your audience to understand. What it means is that when I sit down to use ChatGPT, I am directing ChatGPT. I'm saying: hey, help me write this code, or reformat this email, or whatever. We're not at the point where we're letting AI be an agent in the world with its own motivations and its own desires, and I think that's actually where things could go awry. Having worked with a lot of AI systems, I find that more often than not they just make dumb decisions. They're not necessarily malicious, and with my experience of them, it's hard for me to imagine the jump from a stupid or bad decision to a malicious decision. And this is where alignment comes in, which is this code word in the AI world for: does the AI's motivation align with the human's motivation? It goes all the way back to science fiction from the 50s and 60s: will the robots always listen to us? I think OpenAI, to their credit, has spent a lot of time making sure they factor this into the way they're building AI, and there are a lot of smart people thinking about it. Honestly, to answer your question about the vibe: I think a lot of people are really kind of bothered by the Doomers.
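To make "human in the loop" concrete: a minimal sketch, with entirely hypothetical function names (this is not OpenAI's API), in which the model only ever proposes and a person approves before anything is executed:

```python
def model_propose(task: str) -> str:
    """Hypothetical stand-in for a call to a language model."""
    return f"DRAFT: reply for '{task}'"

def execute(action: str) -> None:
    """Hypothetical stand-in for the side effect, e.g. sending the email."""
    print("Executing:", action)

def human_in_the_loop(task: str) -> None:
    proposal = model_propose(task)
    print("Model proposes:", proposal)
    # The human, not the model, decides whether anything actually happens.
    if input("Approve? [y/N] ").strip().lower() == "y":
        execute(proposal)
    else:
        print("Rejected; nothing was executed.")

human_in_the_loop("reformat this email to a customer")
```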

Marcel: And they're just kind of... What do you mean, bothered by the Doomers?

Fred: I think some of the demands from the people who think AI doom is coming, and some of the projections about what we should do in that event, get a little wonky. There was somebody I saw advocating for bombing data centers. Oh my god. Yeah. Is that a movement? It's not a movement, but it was a somewhat serious suggestion, seriously proposed as a way to stave off the incoming AI apocalypse and defend ourselves against the future. I think stuff like that hurts the credibility. There are risks around AI; I'm not pretending there aren't. But talking about it in a hyperbolic Doomer sense doesn't move the conversation forward.

Henrik: And Fred says the mood in his circles is that people are a little tired of those who think the sky is falling, and of some of the completely crazy proposals, where people for example say you should bomb data centers to stop the development.

Henrik: So Fred, these doomers have this naive idea. What are the deeper and more real threats you see? And I think you're quite aligned with Sam Altman, actually, from our conversation, so I will use you as a sort of...

Fred: An amalgam of the two of us? Well, obviously I can't speak for Sam on this stuff. But having confronted AI systems going awry and working well, one of the big things everyone talks about, and this won't be surprising, is bias. And I think OpenAI has done a lot of work to make sure the system is basically inoffensive. What's interesting is that the narrative around AI has shifted: people aren't talking very much right now about what happens if these algorithms are racist, and that's because the main algorithm everyone's working with isn't racist. It's actually quite hard to get it to behave in an antisocial way in that sense. That doesn't mean the training data, and the way the model works under the hood, didn't absorb all of the racism and bad things in society. It did. It's just that OpenAI has put a kind of layer on top of it to make it safe.

Fred: There are a lot of people experimenting with these large language models now, like the open-source one from Facebook, and the way they get released is very conservative. You can ask it to make a joke about a panda, and it'll be like: well, you shouldn't, because pandas are endangered. So people are paying attention to those concerns, and I think it's because of journalists and activists saying: we've got to make sure bias isn't a problem.

Henrik: One of the first concerns is that AI systems have built-in bias. Fred says OpenAI and others have actually handled this really well; it's not something we discuss much anymore. We saw earlier examples of how these language models very quickly ended up racist or antisocial. That has largely been prevented, partly through this very cautious release cycle: you don't release language models that fall into that hole, because there is a clear risk that they carry bias.
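The "layer on top" Fred mentions can be pictured, very crudely, as a filter wrapped around the raw model. A toy sketch with invented names; real systems use trained classifiers and fine-tuning, not word lists:

```python
BLOCKLIST = {"offensiveword1", "offensiveword2"}  # stand-in for a trained classifier

def raw_model(prompt: str) -> str:
    """Stand-in for the underlying, unfiltered language model."""
    return "some generated text"

def flagged(text: str) -> bool:
    """Naive check; the real safety layer is far more sophisticated."""
    return any(word in text.lower().split() for word in BLOCKLIST)

def safe_generate(prompt: str) -> str:
    # Screen both the user's prompt and the model's output.
    if flagged(prompt):
        return "Sorry, I can't help with that."
    output = raw_model(prompt)
    return "Sorry, I can't help with that." if flagged(output) else output

print(safe_generate("tell me a joke about pandas"))
```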

Marcel: What is your P-Doom?

Fred: My likelihood of doom given AI? Oh man, it's so hard to predict the future in general.

Marcel: Just give us a number, Fred.

Fred: I don't know. I'd say there's between 5% and 10% chance of actual harm. Not that human civilization ends, but that something at the top of society goes off the rails within 20 years. I'd say 5% to 10%. We're not going to go extinct from the machines; if we go extinct, it will be by our own hand.

Henrik: So the question is how likely it is that we'll be wiped out by artificial intelligence, and Fred actually puts a percentage on it: he thinks there is a 5-10% risk that something in society goes seriously wrong within 20 years. Maybe not that we're wiped out, we'll manage that ourselves if so, but that something else at a high level in society breaks down entirely because of artificial intelligence.

Marcel: I don't like that.

Fred: I mean, listen, it's a trade-off, right? I'm reading the Oppenheimer biography and I watched the movie, and that technology was designed only to cause destruction, right?

Henrik: And it's a comparison that comes up a lot in these circles: that we once developed a technology meant only for death and destruction, and the fear is that artificial intelligence is heading in that direction too.

Fred: So I think there's this kind of impulse to build something really powerful, and that's where the two projects are similar. And then you look back and you say: wow, did we unleash something terrible, and is it going to destroy society? And nuclear energy has some possibility of saving society, right? I'm a big proponent of nuclear fusion power plants and what they could mean for sustainable energy. But obviously nuclear weapons caused a huge amount of destruction in Japan, and that's a serious consequence, and we live with the threat of nuclear war; it's just a part of our society now. So when Marcel says, oh, I don't like that, I'm like: well, I don't like the fact that America has thousands of nuclear warheads just ready to go all the time. Technology isn't unambiguously good, you know? And I think that's something we've had to learn as all of the world has come online and these systems have become more powerful.

Henrik: The comparison with Oppenheimer and nuclear power is a good one, because we've seen that nuclear technology plays an important role; in many ways it is a good energy source. Fred says he is a big supporter of fusion energy, which could have some incredibly positive consequences. We read an interview yesterday, from back in 2016.

Marcel: And that's seriously frightening, but in a way also funny. Altman says: I try not to think too much about it, but I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Forces, and a big patch of land in Big Sur I can fly to. It really spooks me. Yeah, I mean, the CEO of OpenAI says this.

Fred: I get it. I mean, I think the instinct to prep for the future is maybe best thought of as independent from the technology they're creating, because it's really just an engineering approach to the world. When you build complicated systems, you can get 80% of the way there with a straightforward design. But when things break, it's on the edges; it's the outlier situations. And good engineers, whether they're thinking about the future or about millions of users, plan for those outside possibilities.

Marcel: But is he serious when he says this? Or is it kind of a joke?

Fred: No, I believe he has all that. I think he's being totally sincere. So the question is: is he linking the technology he's working on to the idea that he needs to prep for the apocalypse? No, I think it's just a general paranoia about what's going to happen in the future.

Henrik: And it's very typical of these nerds that they like to prepare for every scenario and solve problems themselves. That's not necessarily tied to artificial intelligence.

Marcel: And let's aim for a hopeful finish here. Make me a believer: convince me that I don't have to brace for Armageddon, and that we're heading into a bright future with AI.

Fred: I can't help you with the doom, man.

Marcel: Not the doom, but the realistic scenario, the optimistic scenario. What would that be?

Fred: Well, I think it's that we continue to have humans in the loop, harnessing the best parts of AI, with the AI ultimately subservient to us. And I think that's possible; everyone who's building these systems is thinking in those terms. Who knows what China's doing, but I don't think there's a villainous CEO out there who just wants to delegate everything to the AI and give it its own motivations and that kind of thing. And the one silver lining of the power being concentrated in these companies is that there's only really a handful of them to regulate. So if we put pressure on OpenAI or Microsoft or Facebook to do the right thing and set up best practices, hopefully the fact that it's concentrated means we can control it more.

Henrik: Fred's optimism about the future of AI is best exemplified by how he works with it himself. He has a small company where he handles customer service, develops the product, does everything himself, and there he uses ChatGPT to delegate tasks so he can get it all done and still act on the occasional crazy idea.

Marcel: Perfect. That was really fun.

Marcel: Fred Benenson, thank you so much for taking the time to talk to us. I've heard you're going to, what, Louisiana? Well, not the state, but the museum. Yeah. He's going to take the Tesla now and disappear. Oh, he's going to take your Tesla. Yeah, it's a good deal. What does your wife say about you having a lodger?

Henrik: That's off-topic, Marcel.

Marcel: Henrik, we're now on Apple Podcasts and Spotify, and we must of course say that we're also on DR Lyd. But more interestingly, we've also received our first review on Apple Podcasts. Yes, and there's one listener who has given us one star. Just so that's on the record.

Henrik: I've seen it, and I think it was... I'm sure it was someone who thought you spoke too much English. Yes, I think so too.

Marcel: But I think it's fair enough; he thinks it's a bad show because of that. But I want to read a little bit from our first review. It's written by a person called... Molgatti. It sounds a bit like a CIA agent from the 70s or a legendary mortadella chef. We get five stars, and the review reads: I apologize, but I do not have specific information about the DR podcast Prompt, as my knowledge only goes until September 2021.

Henrik: I think maybe you should read it properly, Marcel, because the headline says: As an AI language model, I cannot evaluate this podcast. And then the text says: I'm sorry, but I don't have specific information about this podcast, as my knowledge only goes as far as September 2021. It is possible that it is a newer podcast, or one that has not gained widespread attention within this time frame. I recommend checking their official website or other reliable sources for more detailed and updated information about this podcast.

Marcel: Why did you write a review of our show in those words?

Henrik: It was actually a practical joke. A few hours after I saw that we'd gone live, I wanted to see whether Apple would accept a review written with ChatGPT, so I wrote a prompt to ChatGPT: review this podcast, evaluate it. Then I got this answer, and the headline I actually wrote myself.

Henrik: It's a meme, a thing people have had fun with on the internet for a long time: people who have used ChatGPT sometimes forget to remove a sentence that in English reads, "As an AI language model, I cannot...", followed by an explanation of why it can't answer the question. People have found examples of this everywhere, in user reviews of things like podcasts, but also on Amazon and on Yelp in the USA. I have also found it on some Danish websites. It has become a whole thing on Twitter, where bots post fake tweets, often with profile pictures of scantily clad women, and finding them has become a sport. Even specialized scientific articles have this sentence in them. It is a way to see that ChatGPT has been used and, for some reason, refused to answer the question.
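The detection trick Henrik describes is, at bottom, just a text search. A trivial sketch (the phrase list is only an example):

```python
TELLTALE_PHRASES = [
    "as an ai language model",
    "i'm sorry, but i don't have specific information",
]

def looks_like_unedited_chatgpt(text: str) -> bool:
    """Flag text containing boilerplate refusals that authors forgot to delete."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in TELLTALE_PHRASES)

review = "As an AI language model, I cannot evaluate this podcast."
print(looks_like_unedited_chatgpt(review))  # True
```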

Marcel: But it's super interesting that we're heading toward an internet flooded with all this AI-generated crap. Reviews are a pretty important resource; they're how we as consumers orient ourselves: what is there to say about this product, apart from what the producer writes about it themselves? And the credibility of reviews is about to be destroyed, in a way. I mean, it's probably already broken. It has been broken for a long time, Marcel, I'm sorry to say. But what's the purpose of all these posts? Except for yours, of course, which was a troll review. What's the purpose of all these fake reviews?

Henrik: Well, I found, for example, a website that was kind of a link farm, like in the old days, where you put a lot of text about some topic on a page to pull in traffic, and then you serve some ads. Can I see it? Yes, let me find it here. It's called tv- og internet.dk: everything you need to know about Danish homepages and TV and internet packages. Can I see it? Here it comes. And I found it by googling the sentence "as an AI language model". Yes. There's a picture of a remote control, and then pretty much everything you need to know about TV and the internet.

Henrik: And it has a whole bunch of sub-pages: TV and internet in Fjerritslev, TV and internet in Brabrand, and so on. For every location in Denmark, they've just automated it and generated a sub-page, so when someone wants to find an internet connection in their local area, they land on this site. And then there's a whole lot of text that looks reasonable at first glance but is completely ridiculous. The site has a section called "How do Danish homepages work?", and it starts: "As an AI language model, I can tell that Danish homepages work in Danish, by using Danish as the primary language on the website. This includes everything from navigation menus, texts and buttons to contact forms and payment flows. In addition, Danish homepages often adapt to Danish culture and shopping habits, for example with the Danish currency as the default currency", and it just continues like that. It's a robot homepage. And then I looked at an even more local page, which talked about "technology that is able to send Wi-Fi signals directly to devices" as an alternative to some kind of signal, and then it dissolved into pure gibberish about streaming Netflix. It went completely off the rails. Pretty wild, yes, I understand. And I think it's simply a combination of machine-translated, machine-generated text that nobody checked afterwards at all.
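The sub-page scheme Henrik describes is simple templating: one text template, one list of place names, hundreds of near-identical pages. A sketch with invented names and URLs:

```python
TEMPLATE = "Everything about TV and internet in {town}. Compare providers in {town}..."
TOWNS = ["Fjerritslev", "Brabrand", "Tisvilde"]  # in reality: every town in Denmark

# Generate one URL and one block of boilerplate text per town.
pages = {f"/tv-og-internet-{town.lower()}": TEMPLATE.format(town=town) for town in TOWNS}

for url, text in pages.items():
    print(url, "->", text[:50])
```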

Marcel: It's fascinating that we're heading toward a robot world, where robots read the robots' homepages and robots listen to the robots' music.

Henrik: This is a robot trying to create a website that will appear at the top of the Google search results, so that you go in and click a link to sign up for an internet connection with Norlys, or Stofa, or QuickNet.

Marcel: And then there's some guy in a teenage bedroom making money on me clicking that link.

Henrik: And then they get a kickback, a small commission, every time someone signs up for an internet connection.

Marcel: How can it be so difficult to discover whether something is written by an AI or a human? That is of course the dilemma that school teachers and university researchers sit with when assignments written with ChatGPT come through the door. It may be easy to suspect that something was written by ChatGPT, but it's hard to prove that it actually was; it's not like plagiarism detection. There was also a story recently, in The Verge among a lot of other media, about how OpenAI was developing a tool to detect what is made by a human and what is made by a robot. But they have, without making a big deal out of it, quietly shut this tool down. And the reason was simply too low precision: the tool was simply too bad, it could not distinguish between text made by humans and by AI. Why is it technically so difficult to make such a tool?

Henrik: Well, what a chatbot basically does is draw on everything it has read in the huge pile of text it was served as training data. Each time, it statistically computes the most likely answer to your question, so it will often generate unique content. The less specific your question, the less data it has to answer from, and the more different the answers will be each time. And the reason you can search for the sentence "as an AI language model" is that it only produces that recognizable sentence when it refuses to answer for some reason, for example an ethical one, and it doesn't do that all the time. So you get a different text every time, and it makes things up; it invents everything. A researcher has called generative AI "bullshit machines": they can talk and talk without really knowing what they're saying. That's why it's so difficult to detect, because they're so good at making it sound very, very real, even when it isn't. Does that make sense?
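Henrik's point, that answers are assembled by repeatedly sampling a statistically likely next word, can be illustrated with a toy sampler. The vocabulary and probabilities below are invented for the example:

```python
import random

# Each word maps to possible next words with made-up probabilities.
NEXT_WORD_PROBS = {
    "the": [("cat", 0.4), ("dog", 0.35), ("internet", 0.25)],
    "cat": [("sat", 0.6), ("ran", 0.4)],
    "dog": [("barked", 0.7), ("slept", 0.3)],
    "internet": [("works", 1.0)],
}

def sample_next(word: str) -> str:
    candidates, weights = zip(*NEXT_WORD_PROBS[word])
    return random.choices(candidates, weights=weights)[0]

def generate(start: str, length: int = 2) -> str:
    words = [start]
    for _ in range(length):
        if words[-1] not in NEXT_WORD_PROBS:
            break
        words.append(sample_next(words[-1]))
    return " ".join(words)

for _ in range(3):
    print(generate("the"))  # the same prompt can yield a different text each run
```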

Marcel: Yes, it makes sense. It reminds me a bit of a job interview with someone who just comes in and talks a whole lot of bullshit: you'd put the applicant through some tests, maybe, that could reveal that it's bullshit.

Henrik: And that's exactly what OpenAI did with this classifier tool. They took some texts they knew were written by ChatGPT, and some texts they knew were not, and I think it could only detect the generated ones 29% of the time. The results were simply too poor, and so they chose to shelve the project. And I think that's actually very sensible.
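The evaluation Henrik sketches, running a detector over texts of known origin and counting hits and false alarms, looks roughly like this. The detector here is a deliberately dumb stand-in:

```python
def detector(text: str) -> bool:
    """Dummy stand-in for a real AI-text classifier."""
    return "as an ai language model" in text.lower()

ai_texts = ["As an AI language model, I cannot...", "The sky is blue today."]
human_texts = ["I loved this podcast!", "Mushroom season is here."]

caught = sum(detector(t) for t in ai_texts)
false_alarms = sum(detector(t) for t in human_texts)

print(f"detection rate: {caught / len(ai_texts):.0%}")                # 50%
print(f"false positive rate: {false_alarms / len(human_texts):.0%}")  # 0%
```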

Marcel: Yes, of course there's nothing wrong with that decision; it's more that it's perhaps quite worrying that the company that is furthest ahead with generative AI right now cannot build a tool that can reveal whether something its own robot produced is AI-generated or not. So it's pretty scary, in a way. Yes, and interesting too. By the way, how much do your racing bikes actually cost?

Henrik: I haven't bought new ones recently, so I can't quote any very, very high prices. Okay. What about the Tesla? That one I've leased.

Marcel: You won't get me this time. Well, I can inform you that the FTC, the Federal Trade Commission in the United States, is going to crack down on what they call dishonest review practices, fake reviews for example; that can also be read about on The Verge. The FTC specifically mentions the rise of AI chatbots as something that makes it easier to produce fake reviews, and the fines can go up to $50,000. That's a little more than my bike. Yes, but not more than your monthly salary.

Henrik: Yes, it is. But I think it could have a real effect, because reviews, here in Denmark too but to an even greater extent in the USA, have been completely impossible to trust for a very, very long time. What you'd call organic reviews, reviews written by real, honest people, have been largely useless for a long time.

Marcel: I'm looking forward to seeing whether you get fined for this. I'll be happy if it happens. We like real reviews. So if you like the show, go to Spotify or Apple Podcasts and give us one. That was episode 2 of Prompt. I think you did a great job, Henrik. Likewise. At least I got smarter. We're out every Thursday, and you can listen from the morning in DR Lyd. I was wondering if we could meet this weekend and listen to the rest of your hard-drive collection.

Henrik: I don't think we need that, Marcel.