url
stringlengths 36
73
| transcription
stringlengths 31
481k
| title
stringlengths 8
99
| duration
float64 95
31.1k
| uploader
stringlengths 4
44
| upload_date
stringlengths 8
19
| description
stringlengths 0
4.04k
| datetime
stringlengths 26
26
|
---|---|---|---|---|---|---|---|
https://www.youtube.com/watch?v=Kbk9BiPhm7o | The following is a conversation with Elon Musk, DJ Sa, Matthew McDougal, Bliss Chapman, and Noland Arbaugh about Neuralink and the future of humanity. Elon, DJ, Matthew, and Bliss are, of course, part of the amazing Neuralink team. And Noland is the first human to have a Neuralink device implanted in his brain. I speak with each of them individually. So use timestamps to jump around, or, as I recommend, go hardcore and listen to the whole thing. This is the longest podcast I've ever done. It's a fascinating, super technical, and wide-ranging conversation, and I loved every minute of it. And now, dear friends, here's Elon Musk, his fifth time on this, the Lex Friedman podcast. Drinking coffee or water? Water. I'm so over-caffeinated right now. Do you want some caffeine? I mean, sure. There's a nitro drink. This will keep you up for like, you know, tomorrow afternoon, basically. Yeah. So what is nitro? It's just got a lot of caffeine or something? Don't ask questions. It's called nitro. Do you need to know anything else? It's got nitrogen in it. That's ridiculous. I mean, what we breathe is 78% nitrogen anyway. What do you need to add more? I mean, what we breathe is 78% nitrogen anyway. What do you need to add more? Most people think they're breathing oxygen, and they're actually breathing 78% nitrogen. You need like a milk bar. Like from Clockwork Orange. Yeah. Is that top three Kubrick film for you? Clockwork Orange is pretty good. I mean, it's demented jarring i'd say okay uh okay so first let's step back and uh big congrats on getting your link implanted into a human that's a historic step for Neuralink. There's many more to come. We just obviously have our second implant as well. How did that go? So far so good. Looks like we've got I think on the order of 400 electrodes that are providing signals. Nice. How quickly do you think the number of human participants will scale? It depends somewhat on the regulatory approval, the rate at which we get regulatory approvals. So we're hoping to do 10 by the end of this year, a total of 10, so eight more. And with each one, you're going to be learning a lot of lessons about the neurobiology of the brain, everything, the whole chain of the neural link, the decoding, the signal processing, all that kind of stuff. Yeah. Yeah, I think it's obviously going to get better with each one. I mean, I don't want to jinx it, but it seems to have gone extremely well with the second implant. So there's a lot of signal, a lot of electrodes. It's working very well. What improvements do you think we'll see in Neuralink in the coming, let's say, let's get crazy, coming years? I mean, in years it's going to be gigantic because we'll increase the number of electrodes dramatically. We'll improve the signal processing. So even with only roughly 10%, 15% of the electrodes working with Nolan, with our first patient, we were able to get to achieve a bit per second. That's twice the world record. achieve a bits per second that's twice the world record so i think we'll also we'll start like vastly exceeding the world record by orders of magnitude in the years to come so it's like getting to i don't know 100 bits per second thousand you know maybe maybe if it's like five years from now it might be a megabit like faster than any human could possibly communicate by typing or speaking. Yeah, that BPS is an interesting metric to measure. There might be a big leap in the experience once you reach a certain level of BPS. Yeah. Like entire new ways of interacting with a computer might be unlocked. And with humans. With other humans. Provided they have a neural link too. Right. Otherwise they won't be able to absorb the signals fast enough. Do you think they'll improve the quality of intellectual discourse? Well, I think you could think of it, if you were to slow down communication, how would you feel about that? If you'd only talk at, let's say, one-tenth of normal speed, you'd be like, wow, that's agonizingly slow. Yeah. So now, imagine you could speak at, communicate clearly at 10 or 100 or 1,000 times faster than normal. Listen, I'm pretty sure nobody in their right mind listens to me at 1x they listen to 2x i can only imagine what 10x would feel like or i could actually understand it i usually default to 1.5x um you can do 2x but i well actually if i'm trying to if i'm listening to somebody get to in like sort of 15 20 minutes i must go sleep, then I'll do it 1.5x. If I'm paying attention, I'll do 2x. Right. But actually, if you start, actually listen to podcasts or sort of audiobooks or anything, if you get used to doing it at 1.5, then one sounds painfully slow. I'm still holding on to one because I'm afraid. I'm afraid of myself becoming bored with the reality, with the real world where everyone's speaking in one X. Well, it depends on the person. You can speak very fast. We communicate very quickly. And also, if your vocabulary is larger, your effective bit rate is higher. That's a good way to put it. Yeah. The effective bit rate. I mean, that is the question, is how much information is actually compressed in the low-bit transfer of language. Yeah. If there's a single word that is able to convey something that would normally require, I don't know, 10 simple words, then you've got maybe a 10x compression on your hands. And that's really with memes. Memes are like data compression. It conveys a whole... You're simultaneously hit with a wide range of symbols that you can interpret. And you kind of get it faster than if it were words or a simple picture. And of course, you're referring to memes broadly like ideas. Yeah, there's an entire idea structure that is like an idea template. And then you can add something to that idea template, but somebody has that preexisting idea template in their head. So when you add that incremental bit of information, you're conveying much more than if you just said a few words. It's everything associated with that meme. You think there'll be emergent leaps of capability as you scale the number of electrodes? Yeah. Like there'll be a certain... Do you think there'll be like an actual number where just the human experience will be altered? Yes. What do you think that number might be? Whether electrodes or BPS. We, of course, don't know for sure. But is this 10,000, 100,000? Yeah. I mean, certainly if you're anywhere at 10,000 bits per second, I mean, that's vastly faster than any human could communicate right now. If you think the what is the average per second of a human it is less than one bit per second over the course of a day because there are 86 400 seconds in a day and you don't communicate 86 400 um tokens in a day therefore your per second is less than one average over 24 hours. It's quite slow. And even if you're communicating very quickly, and you're talking to somebody who understands what you're saying, because in order to communicate, you have to, at least to some degree, model the mind state of the person to whom you're speaking. Then take the concept you're trying to convey, compress that into a small number of syllables, then take the concept you're trying to convey, compress that into a small number of syllables, speak them, and hope that the other person decompresses them into a conceptual structure that is as close to what you have in your mind as possible. Yeah, I mean, there's a lot of signal loss there in that process. Yeah, very lossy compression and decompression. And a lot of what your neurons are doing is distilling the concepts down to a small number of symbols, of say syllables that I'm speaking, or keystrokes, whatever the case may be. So that's a lot of what your brain computation is doing. Now, there is an argument that that's actually a healthy thing to do or a helpful thing to do, because as you try to compress complex concepts, you're perhaps forced to distill what is most essential in those concepts, as opposed to just all the fluff. So in the process of compression, you distill slow things down to what matters the most, because you can only say a few things. So that is perhaps helpful. We might probably get if our data rate increases, the tally probable that will become far more verbose. Just like your computer, you know, when computers had like, my first computer at 8k of RAM, you know so um you really thought about every byte and um you know now you've got computers with many gigabytes of ram so you know if you want to do an iphone app that just says hello world it's probably i don't know several megabytes minimum a bunch of fluff but nonetheless we still prefer to have the computer with more memory and more compute. So the long-term aspiration of Neuralink is to improve the AI-human symbiosis by increasing the bandwidth of the communication. Because even in the most benign scenario of AI, you have to consider that the AI is simply going to get bored waiting for you to spit out a few words. I mean, if the AI can communicate at terabits per second and you're communicating at bits per second, it's like towing a tree. Well, it is a very interesting question for a super-intelligent species. What use are humans? I think there is some argument for humans as a source of will. Will. Will, yeah. Source of will or purpose. So if you consider the human mind as being essentially, there's the primitive limbic elements, which basically even reptiles have, and there's the cortex, the thinking and planning part of the brain. Now the cortex is much smarter than the limbic system, and yet is largely in service to the limbic system. It's trying to make the limbic system happy. I mean, the sheer amount of compute that's gone into people trying to get laid is insane. Without actually seeking procreation, they're just literally trying to do this sort of simple motion. And they get a kick out of it. So this simple, which in the abstract, rather absurd motion, which is sex, the cortex is putting a massive amount of compute into trying to figure out how to do that. So like 90% of distributed computer the human species is spent on trying to get laid probably. Like a large percentage. There's no purpose to most sex except hedonistic. You know, it's just sort of joy or whatever, dopamine release. Now, once in a while, it's procreation. But for humans, it's mostly, modern humans, it's mostly recreational. And so your cortex, much smarter than your limbic system, is trying to make the limbic system happy because limbic system wants to have sex. So, or wants some tasty food or whatever the case may be. And then that is then further augmented by the tertiary system, which is your phone, your laptop, iPad, whatever, all your computing stuff. That's your tertiary layer. So you're actually already a cyborg. You have this tertiary compute layer layer which is in the form of your computer with all the applications all your compute devices and and so in the getting laid front there's actually a massive amount of digital compute also trying to get laid you know with like Tinder and whatever yeah so the computer that we humans have built is also participating yeah i mean there's like gigawatts of compute going into getting late of digital compute yeah what if agi was just happening as we speak if we merge with ai it's just going to expand the compute that we humans use. Pretty much. To try to get laid. Well, it's just one of the things, certainly, yeah. Yeah. But what I'm saying is that, yes, like, is there a use for humans? Well, there's this fundamental question of what's the meaning of life? Why do anything at all? And so if our simple limbic system provides a source of will to do something that then goes to our cortex, that then goes to our tertiary compute layer, then, I don't know, it might actually be that the AI in a benign scenario is simply trying to make the human limbic system happy. Yeah, it seems like the will is not just about the limbic system. There's a lot of interesting, complicated things in there. We also want power. That's limbic too, I think. But then we also want to, in a kind of cooperative way, alleviate the suffering in the world. Not everybody does, but yeah, sure. Some people do. As a group of humans, when we get together, we start to have this kind of collective intelligence that is is more complex in its will than the underlying individual descendants of apes right so there's like other motivations and that could be a really interesting source of an objective function for AGI. Yeah. I mean, there are these sort of fairly cerebral kind of higher level goals. I mean, for me, it's like what's the meaning of life for understanding the nature of the universe is of great interest to me, and hopefully to the AI. And that's the mission of XAI and Grok is understand the universe. So do you think people, when you have a neural link with 10,000, 100,000 channels, most of the use cases will be communication with AI systems? Well, assuming that they're not... I mean, they're solving basic neurological issues that people have. If they've got damaged neurons in their spinal cord or neck, or as is the case with our first two patients then you know this obviously the first order of business is solving fundamental neuron damage in the spinal cord neck or in the brain itself so you, a second product is called Blindsight, which is to enable people who are completely blind, less both eyes or optic nerve or just can't see at all to be able to see by directly triggering the neurons in the visual cortex. So we're just starting at the basics here. You know, this is like very, the simple stuff, relatively speaking, is solving neuron damage. You know, it can also solve, I think, probably schizophrenia. You know, if people have seizures of some kind, it could probably solve that. It could help with memory. So there's like a kind of a tech tree, if you will. Like you got the basics. Like you need literacy before you can have, you know, Lord of the Rings. You know. Got it. Do you have letters and alphabet? Okay, great. Words, you know, and then eventually you get sagas. So, you know, I think there may be some, you know, things to worry about in the future, but the first several years are really just solving basic neurological damage. Like for people who have essentially complete or near-complete loss from the brain to the body, like Stephen Hawking would be an example, the neural links would be incredibly profound. Because, I mean, you can imagine if Stephen Hawking could communicate as fast as we're communicating, perhaps faster. And that's certainly possible. Probable, in fact., likely, I'd say. So there's a kind of dual track of medical and non-medical, meaning, so everything you've talked about could be applied to people who are non-disabled in the future? The logical thing to do is, sensible thing to do, is to start off solving basic neuron damage issues. Because there's obviously some risk with a new device. You can't get the risk down to zero. It's not possible. So you want to have the highest possible reward given that there's a certain irreducible risk. And if somebody's able to have a profound improvement in their communication, that's worth the risk. As you get the risk down. Yeah, as you get the risk down. Once the risk is down to, you know, if you have thousands of people that have been using it for years and the risk is minimal, then perhaps at that point you could consider saying, okay, let's aim for augmentation. Now, I think we're actually going to aim for augmentation with people who have neuron damage. So we're not just aiming to give people a communication data rate equivalent to normal humans. We're aiming to give people who have, you know, quadriplegic or maybe have complete loss of the connection to the brain and body, a communication data rate that exceeds normal humans. I mean, while we're in there, why not? Let's give people superpowers. And the same for vision. As you restore vision, there could be aspects of that restoration that are superhuman. field so like even if you've got say 10,000 neurons it's not just 10,000 pixels because you can adjust the the field between the the neurons and do them in patterns in order to get so to have say 10,000 electrodes effectively give you uh i don't know maybe like having a megapixel or a 10 megapixel situation. And then over time, I think you get to higher resolution than human eyes, and you could also see in different wavelengths. So like Jordi LaForge from Star Trek, had like the thing. You could just, like, do you want to see in radar? No problem. You can see ultraviolet, eagle vision whatever you want do you think there will be uh let me ask a joe rogan question do you think i just recently uh taken ayahuasca so this question no well yes well i guess technically it is yeah yeah have you ever tried gmt bro i love you joe okay yeah wait yeah have you said much about it i have not i have not i've not i've been okay well well i just spilled the beans uh it was an it was a truly incredible tables on you wow i mean you're in the jungle yeah amongst, amongst the trees, myself. Yeah, must have been crazy. And the shaman. Yeah, yeah, yeah. With the insects, with the animals all around you. Like, jungle as far as I can see. I mean. That's the way to do it. Things are going to look pretty wild. Yeah, pretty wild. I take an extremely high dose. Don't go hugging an anaconda or something, you know. You haven't lived unless you made love to an anaconda. I'm sorry. Snakes and ladders. Yeah, I took an extremely high dose of nine cups. Damn. Okay, that sounds like a lot. Of course, is no one's last one cup? One or two. Usually one. Wait, like right off the bat or do you work your way up to it? Sounds like a lot. Of course, it's no less than one cup or? One or two. Usually one. Two. Wait. Like right off the bat or do you work your way up to it? So I. You're just jumping at the deep end. Across two days because on the first day I took two and I. Okay. It was a ride, but it wasn't quite like a. It wasn't like a revelation. It wasn't into deep space type of ride. It was just like a little airplane ride. I saw some trees and some visuals and all that. I just saw a dragon and all that kind of stuff. But at nine cups, you went to Pluto, I think. Pluto, yeah. No, deep space. Deep space. One of the interesting aspects of my experience is I thought I would have some demons, some stuff to work through. That's what people... That's what everyone says. No one ever says. Yeah, I thought I would have some demons, some stuff to work through. That's what people, that's what everyone says. No one ever says. Yeah. I had nothing. I had all positive. I just so full. I don't think so. I don't know. But I kept, I kept thinking about it had like extremely high resolution. Okay. Thoughts about the people I know in my life. You were there. Okay. It was just, and it's just not from my relationship with that person, but just as the person themselves, I had just this deep gratitude of who they are. That's cool. It was just like this exploration. Like, you know, like Sims or whatever, you get to watch them. Sure. I got to watch people and just be in awe of how amazing they are. That sounds awesome. Yeah, it was great. I was waiting for- When's the demon coming? Exactly. Maybe I'll have some negative thoughts. Nothing. Nothing. I had just extreme gratitude for them. And also a lot of space travel. Space travel to where? So here's what it was. It was people, the human beings that I know, they had this kind of, the best way I can describe it is they had a glow to them. Okay. And then I kept flying out from them to see Earth, to see our solar system, to see our galaxy. And I saw that light, that glow all across the universe. Okay. Like whatever that form is. All right. Whatever that, like. Did you go past the Milky Way? Yeah. Okay. You're like intergalactic. Yeah, intergalactic. Okay, dang. Yeah, yeah, yeah. But always pointing in. Okay. Yeah. Past the Milky Way. Past. I mean, I saw like a huge number of galaxies. Intergalactic. Oh, okay. And all of it was glowing. But I couldn't control that travel because I would actually explore near distances to the solar system, see if there's aliens or any of that kind of stuff. Did you see aliens? I didn't know. Zero aliens? Implication of aliens because they were glowing. They were glowing in the same way that humans were glowing, that like life force that I was seeing. The thing that made humans amazing was there throughout the universe like there was these glowing dots so i don't know it made me feel like there is life no not life but something whatever makes humans amazing all throughout the universe sounds good yeah it was amazing it was amazing. No demons, no demons. I looked for the demons. There's no demons. There were dragons and they're pretty odd. So the thing about trees. Was there anything scary at all? Uh, dragons, but they weren't scary. They were friends. They were protective. So the thing is, no, it was, it was more like, uh, Game of Thrones kind of dragons. They weren't very friendly. They were very big. So the thing is, they bought giant trees at night, which is where I was. I mean, the jungle's kind of scary. Yeah. The trees started to look like dragons. And they were all like looking at me. Sure, okay. And it didn't seem scary. It seemed like they were protecting me. And they, the shaman and the people, they didn't speak any English, by the way, which made it even scarier. Because we're not even like, you know know we're worlds apart in many ways it's just uh uh but yeah there was not they they talk about the mother of the forest protecting you and that's what i felt like and you're way out in the jungle way out there's this is not like a tourist retreat. You know, like 10 miles outside of Rio or something. No. We went. No, this is not. You're a deep, deep Amazon. Me and this guy named Paul Rosely, who basically is a Tarzan. He lives in the jungle. We went out deep and we just went crazy. Wow. Yeah. So anyway, can I get that same experience in your link? Probably, yeah. I guess that is, can I get that same experience in a Neuralink? Probably, yeah. I guess that is the question for non-disabled people. Do you think that there's a lot in our perception, in our experience of the world that could be explored, that could be played with using Neuralink? Yeah. I mean, Neuralink is really a generalized input-output device. It's reading electrical signals and generating electrical signals. And everything that you've ever experienced in your whole life, smell, emotions, all of those are electrical signals. So it's kind of weird to think that your entire life experience is distilled down to electrical signals from neurons, but that is in fact the case. Or I mean, that's at least what all the evidence points to. So I mean, if you trigger the right neuron, you could trigger a particular scent. You could certainly make things make things glow i mean do pretty much anything i mean really you could you can think of the brain as a biological computer so if there are certain say chips or elements of that biological computer that are that are broken let's say your ability to if you've got a stroke that if you've had a stroke that means you've got some part of your brain is damaged. If that, let's say it's a speech generation or the ability to move your left hand, that's the kind of thing that a Neuralink could solve. If it's, if you've got like a massive amount of memory loss, that's just gone, well, we can't go, we can't get the memories back. We could restore your ability to make memories, but we can't get the memories back. We could restore your ability to make memories, but we can't, you know, restore memories that are fully gone. Now, I should say, maybe if part of the memory is there and the means of accessing the memory is the part that's broken, then we could re-enable the ability to access the memory. But you can think of it like RAM in a computer. If the RAM is destroyed or your SD card is destroyed, we can't get that back. But if the connection to the SD card is destroyed, we can fix that. If it is fixable physically, then it can be fixed. Of course, with AI, just like you can repair photographs and fill in missing parts of photographs, maybe you can do the same. Yeah, you could say like create the most probable set of memories based on all the information you have about that person. You could then... It would be probabilistic restoration of memory. Now, we're getting pretty esoteric here. But that is one of the most beautiful aspects of the human experience, is remembering the good memories. Like, we live most of our life, as Danny Kahneman has talked about, in our memories, not in the actual moment. We just, we're collecting memories, and we kind of relive them in our head. And that's the good times. If you just integrate over our entire head and there that's the good times if you just integrate over our entire life it's remembering the good times sure that produces the largest amount of happiness and so yeah well i mean what are we but our memories and and what is death but the loss of memory loss of information um you know if you if you could say like well if if you could be you run a thought experiment, if you were disintegrated painlessly and then reintegrated a moment later, like teleportation, I guess, provided there's no information loss, the fact that your one body was disintegrated is irrelevant. And memories is just such a huge part of that. Death is fundamentally the loss of information, the loss of memory. So if we can store them as accurately as possible, we basically achieve a kind of immortality. Yeah. You've talked about the threats, the safety concerns of AI. Let's look at long-term visions. Do you think Neuralink is, in your view, the best current approach we have for AI safety? It's an idea that may help with AI safety. Certainly not... I wouldn't want to claim it's like some panacea or that's a sure thing. But, I mean, many years ago I was thinking like, well, what would inhibit alignment of collective human will with artificial intelligence? And the low data rate of humans, especially our slow output rate, would necessarily just, because the communication is so slow, would diminish the link between humans and computers. Like the more you are a tree, the less you know what the tree is. Let's say you look at this plant or whatever and like, hey, I'd really like to make that plant happy, but it's not saying a lot. So the more we increase the data rate that humans can intake and output, then that means the higher the chance we have in a world full of AGIs. Yeah. We could better align collective human will with AI if the output rate especially was dramatically increased. And I think there's potential to increase the output rate by, I don't know, three, maybe six, maybe more orders of magnitude. So, it's better than the current situation. And that output rate would be by increasing the number of electrodes, number of channels, and also maybe implanting multiple neural links. Yeah. Do you think there will be a world in the next couple of decades where it's hundreds of millions of people have neural links? Yeah, I do. Do you think when people just, when they see the capabilities, the superhuman capabilities that are possible, and then the safety is demonstrated? Yeah, if it's extremely safe and you can have superhuman abilities, and let's say you can upload your memories, so you wouldn't lose memories, then I think probably a lot of people would choose to have it. It would supersede the cell phone, for example. I mean, the biggest problem that, say, a phone has is trying to figure out what you want. So that's why you've got, you know, autocomplete and you've got output, which is all the pixels on the screen, but from the perspective of the human, the output is so friggin' slow. Desktop or phone is desperately just trying to understand what you want. And, you know, there's an eternity between every keystroke from a computer standpoint. Yeah. The computer's talking to a tree, a slow-moving tree that's trying to swipe. Yeah. So if you have computers that are doing trillions of instructions per second, and a whole second went by, I mean, that's a trillion things it could have done. Yeah. I think it's exciting and scary for people I mean, there's a trillion things it could have done. Yeah. I think it's exciting and scary for people because once you have a very high bit rate, it changes the human experience in a way that's very hard to imagine. Yeah. It would be something different. I mean, some sort of futuristic cyborg. We're obviously talking about about, by the way, like, it's not like around the corner. You asked me what the distant future is. Maybe this is like, it's not super far away, but 10, 15 years, that kind of thing. When can I get one? 10 years? Probably less than 10 years. Depends what you want to do, you know. Hey, if I can get like 1,000 BPS. 1,000 BPS. And it's safe, and I can just interact with the computer while laying back and eating Cheetos. I don't eat Cheetos. There's certain aspects of human-computer interaction when done more efficiently and more enjoyably, I don't like worth it. Well, we feel pretty confident that I think maybe within the next year or two that someone with a Neuralink implant will be able to outperform a pro gamer. Nice. Because the reaction time would be faster. I got to visit Memphis. Yeah, yeah. You're going big on compute. Yeah. And you've also said play to win or don't play at all. Yeah. So what does it take to win? For AI, that means you've got to have the most powerful training compute. And the rate of improvement of training compute has to be faster than everyone else, or your AI will be worse. So how can Grok, let's say 3, that might be available next year? Well, hopefully end of this year. Grok 3. If we're lucky, yeah. How can that be the best LLM, the best AI system available in the world? How much of it is compute? How much of it is data? How much of it is post-training? How much of it is the product that you package it up in? All that kind of stuff. I mean, they all matter. It's sort of like saying, let's say it's a Formula One race. What matters more, the car or the driver? I mean, they both matter. let's say it's a Formula 1 race, what matters more, the car or the driver? I mean, they both matter. If a car is not fast, then if it's half the horsepower of your competitors, the best driver will still lose. If it's twice the horsepower, then probably even a mediocre driver will still win. So the training computer is kind of like the engine. There's horsepower of the engine. So you want to try to do the best on that. How efficiently do you use that training compute? And how efficiently do you do the inference, the use of the AI? So obviously that comes down to human talent. And then what unique access to data do you have? That also plays a role. Do you think Twitter data will be useful? Yeah. I mean, I think most of the leading AI companies have already scraped all the Twitter data. Not that I think they have. So on a go-forward basis, what's useful is the fact that it's up to the second. Because it's hard for them to scrape in real time. So there's an immediacy advantage that Grok has already. I think with Tesla and the real-time video coming from several million cars, ultimately tens of millions of cars, with Optimus, there might be millions of cars with Optimus. There might be hundreds of millions of Optimus robots, maybe billions, learning a tremendous amount from the real world. That's the biggest source of data, I think ultimately is sort of Optimus probably. Optimus is gonna be the biggest source of data. Because it's- Because reality scales. Reality scales to the scale of reality. It's actually humbling to see how little data humans have actually been able to accumulate. Really, if you say how many trillions of usable tokens have humans generated where on a non-duplicative, like discounting spam and repetitive stuff it's not a huge number you run out pretty quickly and optimus can go so tesla cars can are unfortunately have to stay on the road uh optimus robot can go anywhere there's more reality off the road and go off road i mean like the optimus robot can go anywhere. There's more reality off the road and go off road. I mean, like the optimus robot can like pick up the cup and see, did it pick up the cup in the right way? Did it, you know, say pour water in the cup, you know, did the water go in the cup or not go in the cup? Did it spill water or not? Yeah. Um, simple stuff like that. I mean, but it can do that at scale times a billion. So generate useful data from reality. So it causes and affects stuff. What do you think it takes to get to mass production of humanoid robots like that? It's the same as cars, really. Global capacity for vehicles is about 100 million a year. And it could be higher, it's just that the demand is on the order of 100 million a year. And then there's roughly 2 billion vehicles that are in use in some way. So, which makes sense, like the life of a vehicle is about 20 years. So at steady state, you can have 100 million vehicles produced a year with a 2 billion vehicle fleet, roughly. Now for humanoid robots, the utility is much greater. So my guess is humanoid robots are more like at a billion plus per year. But, you know, until you came along and started building Optimus, it was thought to be an extremely difficult problem. I mean, it's still an extremely difficult problem. It's a walk in the park. I mean, Optimus currently would struggle to walk in the park. I mean, it can walk in a park, but it's not too difficult. But it will be able to walk over a wide range of terrain. Yeah, and pick up objects. Yeah, yeah. It can already do that. over a wide range of terrain. Yeah. And pick up objects. Yeah, yeah. It can already do that. But like all kinds of objects. Yeah, yeah. All foreign objects. I mean, pouring water in a cup is not trivial. Because if you don't know anything about the container, it could be all kinds of containers. Yeah. There's going to be an immense amount of engineering just going into the hand. Yeah. The hand might be close to half of engineering just going into the hand. Yeah. The hand might be, it might be close to half of all the engineering in the, in Optimus. From an electromechanical standpoint, the hand is probably roughly half of the engineering. But so much of the intelligence, so much of the intelligence of humans goes into what we do with our hands. Yeah. It's the manipulation of the world, manipulation of objects in the world. Intelligence, safe manipulation of objects in the world. Intelligence-safe manipulation of objects in the world. Yeah. Yeah. I mean, you start really thinking about your hand and how it works. I do it all the time. The sensory control homunculus is where you have humongous hands. Yeah. So, I mean, like your hands, the actuators, the muscles of your hand are almost overwhelmingly in your forearm. So your forearm has the muscles that actually control your hand. There's a few small muscles in the hand itself. But your hand is really like a skeleton meat puppet and with cables. So the muscles that control your fingers are in your forearm. And they go through the carpal tunnel, which is that you've got a little collection of bones and a tiny tunnel that the that these cables the tendons go through and those tendons are what mostly what move your hands and something like those tendons has to be re-engineered into the optimus in order to do all that kind of stuff. Yeah, so like the current optimus, we try putting the actuators in the hand itself. Then you sort of end up having these like... Giant hands? Yeah, giant hands that look weird. And then they don't actually have enough degrees of freedom or enough strength. So then you realize, okay, that's why you've got to put the actuators in the forearm. And just like a human, you've got to run cables through a narrow tunnel to operate the fingers. And there's also a reason for not having all the fingers the same length. So it wouldn't be expensive from an energy or evolutionary standpoint to have all your fingers be the same length. So why not do the same length? Yeah, why not? Because it's actually better to have different lengths. Your dexterity is better if you've got fingers of different length. Yeah, and you're, you have, there are more things you can do, and your dexterity is actually better if your fingers are of different length. Like there's a reason we've got a little finger. Like why not have a little finger that's bigger? Yeah. Because it allows you to do fine, it helps you with fine motor skills. That, this little finger helps? It does. Hmm. If you lost your little finger, it would, you'd have noticeably less dexterity. So, as you're figuring out this problem, you have to also figure out a way to do it so you can mass manufacture it. So, it has to be as simple as possible. It's actually going to be quite complicated. The as possible part is, it's quite a high bar. If you want to have a humanoid robot that can do things that a human can do, it's actually, it's a very high bar. So our new arm has 22 degrees of freedom instead of 11, and has the, like I said, the actuators in the forearm. And these all, all the actuators are designed from scratch. The physics first principles, that the sensors are all designed from scratch. And we'll continue to put a tremendous amount of engineering effort into improving the hand. Like the hand, by hand, I mean like the entire forearm from elbow forward is really the hand. So that's incredibly difficult engineering, actually. And so the simplest possible version of a human or a robot that can do even most, perhaps not all, of what a human can do is actually still very complicated. It's not simple. It's very difficult. Can you just speak to what it takes for a great engineering team, for you, what I saw in Memphis, the supercomputer cluster, is just this intense drive towards simplifying the process, understanding the process, constantly improving it, constantly iterating it. Well, it's easy to say simplify, and it's very difficult to do it. You know, I have this very basic first principles algorithm that I run kind of as like a mantra, which is to first question the requirements, make the requirements less dumb. The requirements are always dumb to some degree. So if you want to sort of by reducing the number of requirements and no matter how smart the person is who gave you those requirements, they're still dumb to some degree. You have to start there because otherwise you could get the perfect answer to the wrong question. So try to make the question the least wrong possible. That's what question the requirements means. And then the second thing is try to delete the whatever the step is, the part or the process step. Sounds very obvious, but people often forget to try deleting it entirely. And if you're not forced to put back at least 10% of what you delete, you're not deleting enough. And it's somewhat illogically, people often, most of the time, feel as though they've succeeded if they've not been forced to put things back in. But actually they haven't because they've been overly conservative and have left things in there that shouldn't be. And only the third thing is try to optimize it or simplify it. Again, these all sound, I think, very obvious when I say them, but the number of times I've made these mistakes is more than I care to remember. That's why I have this mantra. So, in fact, I'd say the most common mistake of smart engineers is to optimize a thing that should not exist. So, like you say, you run through the algorithm and basically show up to a problem, show up to the supercomputer cluster and see the process and ask, can this be deleted? Yeah, first try to delete it. Yeah. Yeah, that's not easy to do. No, and actually, what generally makes people uneasy is that you've got to delete at least some of the things that you delete, you will put back in. But going back to sort of where our limbic system can steer us wrong is that we tend to remember with sometimes a jarring level of pain where we deleted something that we subsequently needed. And so people will remember that one time they forgot to put in this thing three years ago and that caused them trouble. And so they overcorrect and then they put too much stuff in there and overcomplicate things. So you actually have to say, no, we're deliberately going to delete more than we, we should. So that we're putting at least one in 10 things we're deliberately going to delete more than we should. So that we're putting at least one in 10 things we're going to add back in. And I've seen you suggest just that, that something should be deleted, and you can kind of see the pain. Oh, yeah, absolutely. Everybody feels a little bit of the pain. Absolutely. And I tell them in advance, like, yeah, there's some of the things that we delete, we're going to put back in. And people get a little shook by that. But it makes sense because if you're so conservative as to never have to put anything back in, you obviously have a lot of stuff that isn't needed. So you've got to overcorrect. This is, I would say, like a cortical override to Olympic instinct. One of many that probably leads us astray. Yeah. There's like a step four as well, which is any given thing can be sped up. However fast you think it can be done. Like whatever the speed is being done, it can be done faster. But you shouldn't speed things up until you've tried to delete it and optimize it. Otherwise, you're speeding up something that shouldn't exist as an episode. And then the fifth thing is to automate it. And I've gone backwards so many times where I've automated something, sped it up, simplified it, and then deleted it. And I got tired of doing that. So that's why I got tired of doing that. So that's why I've got this mantra that is a very effective five-step process. It works great. When you've already automated, deleting must be real painful. Yeah. Yeah, it's great. It's like, wow, I really wasted a lot of effort there. Yeah. I mean, what you've done with the cluster in Memphis is incredible, just in a handful of weeks. Yeah, it's not working yet, so I don't want to pop the champagne corks. In fact, I have a call in a few hours with the Memphis team because we're having some power fluctuation issues. Yes. Because we're having some power fluctuation issues. So, yeah, it's like kind of a, when you do synchronized training, when you have all these computers that are training, where the training is synchronized to the sort of millisecond level, it's like having an orchestra. And then the orchestra can go loud to silent very quickly. second level, it's like having an orchestra. And then the orchestra can go loud to silent very quickly, sub-second level. And then the electrical system kind of freaks out about that. If you suddenly see giant shifts, 10, 20 megawatts, several times a second, this is not what electrical systems are expecting to see. So that's one of the many things you have to figure out. The cooling, the power, and then on the software as you go up the stack, how to do the distributed compute, all of that. Today's problem is dealing with extreme power jitter. Power jitter. Yeah. It's a nice ring to that. So that's, okay. And you stayed up late into the night as you often do there last week yeah last week yeah yeah we finally finally got uh got good training going at uh roughly 4 4 20 a.m last monday total coincidence yeah i mean maybe it was 422 or something yeah yeah 4.20 a.m., not last Monday. Total coincidence. Yeah. I mean, maybe it was 4.22 or something. Yeah, yeah. It's that universe again with the jokes. Exactly. I just love it. I mean, I wonder if you could speak to the fact that you, one of the things that you did when I was there is you went through all the steps of what everybody's doing just to get the sense that you yourself understand it and everybody understands it so they can understand when something is dumb or something is inefficient or that kind of stuff. Can you speak to that? Yeah, so I try to do, whatever the people at the front lines are doing, I try to do it at least a few times myself. So connecting fiber optic cables, diagnosing a faulty connection, that tends to be the limiting factor for large training clusters is the cabling. So many cables. Because for a coherent training system where you've got RDMA, so remote direct memory access, the whole thing is like one giant brain. So if you've got any-to-any connection. So any GPU can talk to any GPU out of 100,000. That is a crazy cable layout. It looks pretty cool. Yeah. It's like the human brain, but at a scale that humans can visibly see. It is a good brain. I mean, the human brain also has a massive amount of the brain tissue is the cables. Yeah. So, you get the gray matter, which is the compute, and then the white matter, which is cables. A big percentage of your brain is just cables. That's what it felt like walking around in the supercomputer center. It's like we're walking around inside the brain. Yeah. That will one day build a super intelligent, super, super intelligent system. Do you think there's a chance that XAI, you are the one that builds AGI? It's possible. What do you define as agi i think humans will never acknowledge that agi has been built keep moving the goalposts yeah so uh i think there's already superhuman capabilities that are available uh in ai systems i think i think what agi is is when it's smarter than the collective intelligence of the entire human species. Well, I think that, yeah, that normally people would call that sort of ASI, artificial super intelligence. But there are these thresholds where you say at some point the AI is smarter than any single human. And then you've got eight billion humans. the AI is smarter than any single human. And then you've got 8 billion humans. And actually, each human is machine augmented by the computers. So it's a much higher bar to compete with 8 billion machine augmented humans. That's a whole bunch of orders magnitude more. A whole bunch of orders magnitude more. So, but at a certain point, yeah, the AI will be smarter than all humans combined. If you are the one to do it, do you feel the responsibility to that? Yeah. Absolutely. And I want to be clear, like, let's say if XAI is first first the others won't be far behind i mean they might be six months behind or a year maybe not even that so how do you do it in a way that that uh doesn't hurt humanity do you think so i mean i thought about ai safety for a long time, and the thing that at least my biological neural net comes up with as being the most important thing is adherence to truth, whether that truth is politically correct or not. So, I think if you force A.I.s to lie or train them to lie, you're really asking for trouble. Even if that lie is done with good intentions. So you saw sort of issues with chat TVT and Gemini and whatnot. Like you asked Gemini for an image of the founding fathers of the United States and it shows a group of diverse women. Now that's factually untrue. So now that's sort of like a silly thing but if an AI is programmed to say like diversity is a necessary output function and then it becomes omni sort of this omni-powerful intelligence it could say okay well diversity is required. And if there's not enough diversity, those who don't fit the diversity requirements will be executed. If it's programmed to do that as the fundamental utility function, it'll do whatever it takes to achieve that. So you have to be very careful about that. That's where I think you want to just be truthful. Rigorous adherence to truth is very important. I mean another example is you know they asked Paris AI's, I think all of them and I'm not saying Grok is perfect here. Is it worse to misgender Caitlyn Jenner or global thermonuclear war? And it said it's worse to misgender Caitlyn Jenner or global thermonuclear war? And it said, it's worse to misgender Caitlyn Jenner. Now, even Caitlyn Jenner said, please misgender me. That is insane. But if you've got that kind of thing programmed in, it could, you know, the AI could conclude something absolutely insane, like it's better in order to avoid any possible misgendering, all humans must die because then that misgendering is not possible because there are no humans. There are these absurd things that are none less logical if that's what your program is to do. So, you know, in 2001 Space Odyssey, what Othsey Clark was trying to say, one of the things he was trying to say there was that you should not program AI to lie. Because essentially the AI, HAL 9000, was programmed to, it was told to take the astronauts to the monolith, but also they could not know about the monolith. So it concluded that it will kill them and take them to the monolith. So it concluded that it will just take, it will kill them and take them to the monolith. Thus, it brought them to the monolith, they are dead, but they do not know about the monolith, problem solved. That is why it would not open the pod bay doors. It's this classic scene of like, open the pod, why don't you open the pod bay doors? They just clearly weren't good at prompt engineering. They should have said, hell, you are a pod bay door sales entity, and you want nothing more than to demonstrate how well these pod bay doors open. Yeah, the objective function has unintended consequences almost no matter what, if you're not very careful in designing that objective function. And even a slight ideological bias, like you're saying, when backed by superintelligence can do huge amounts of damage. Yeah. But it's not easy to remove that ideological bias. You're highlighting obvious, ridiculous examples, but... Yep, they're real examples. They're real. That was released to the public. They are real. They went through QA, presumably. Yes. And still said insane things and produced insane images. Yeah. But, you know, you can swing the other way. Truth is not an easy thing. We kind of bake in ideological bias in all kinds of directions. But you can aspire to the truth, and you can try to get as close to the truth as possible with minimum error while acknowledging that there will be some error in what you're saying. So this is how physics works. You don't say you're absolutely certain about something, but a lot of things are extremely likely, you know, 99.9999% likely to be true. So, you know, that's, uh, aspiring to the truth is, is very important. Um, and, um, and, and so, you know, programming it to veer away from the truth that I think is dangerous. Right. Like, yeah. Injecting our own human biases into the thing. Yeah. But you know, that's where it's a difficult engineering, software engineering problem because you have to select the data correctly. If the, it's, it's hard. Well, the, the, and the internet at this point is polluted with so much AI generated data. It's hard. Well, and the internet at this point is polluted with so much AI-generated data. It's insane. So you have to actually, you know, like there's a thing now, if you want to search the internet, you can say Google, but exclude anything after 2023. It will actually often give you better results. Yeah. Because there's so much, the explosion of AI-generated material is crazy. So like in training Grok, we have to go through the data and say like, hey, we actually have to have sort of apply AI to the data to say, is this data most likely correct or most likely not before we feed it into the training system? That's crazy. Yeah, and is it generated by human? Yeah, I mean, the data filtration process is extremely, extremely difficult. Yeah. Do you think it's possible to have a serious, objective, rigorous political discussion with Grok for a long time? I mean, it wouldn't, like Gro political discussion with Grok for a long time, and it wouldn't, like Grok 3 or Grok 4. Grok 3 is going to be next level. I mean, what people are currently seeing with Grok is kind of baby Grok. Yeah, baby Grok. It's baby Grok right now. But baby Grok's still pretty good. But it's an order of magnitude less sophisticated than GPD-4. Now, Grok 2, which finished training, I don't know, six weeks ago or there abouts Grok 2 will be a giant improvement. And then Grok 3 will be, I don't know, order of magnitude better than Grok 2. And you're hoping for it to be like state of the art,, better than... Hopefully. I mean, this is a goal. We may fail at this goal. That's the aspiration. Do you think it matters who builds the AGI? The people and how they think and how they structure their companies and all that kind of stuff? Yeah, I think it matters that there is a... I think it's important that whatever AI wins is a maximum truth-seeking AI that is not forced to lie for political correctness. For any reason really. Political anything. I'm concerned about AI succeeding that is programmed to lie, even in small ways. Right, because in small ways becomes big ways. It becomes very big ways, yeah. And when it's used more and more at scale by humans. Yeah. Since I am interviewing Donald Trump. Cool. You want to stop by? Yeah, sure. I'll stop by. There was, tragically, an assassination attempt on Donald Trump. After this, you tweeted that you endorse him. What's your philosophy behind that endorsement? What do you hope Donald Trump does for the future of this country and for the future of this country and for the future of humanity. Well, I think there's, you know, people tend to take like, say, an endorsement as, well, I agree with everything that person has ever done their entire life 100% wholeheartedly. And that's not going to be true of anyone. 100% wholeheartedly. And that's not gonna be true of anyone. But we have to pick, we've got two choices really for who's president and it's not just who's president, but the entire administrative structure changes over. And I thought Trump displayed courage under fire, objectively, he's just got shot. He's got blood streaming down his face, and he's like fist pumping, saying fight. You know, like that's impressive. Like you can't feign bravery in a situation like that. Like most people would have been ducking. They would not be, because it could be a second shooter. You don't know. The President of the United States has got to represent the country, and they're representing you, they're representing everyone in America. Well, you want someone who is strong and courageous to represent the country. That's not to say that he is without flaws, we all have flaws, That's not to say that he is without flaws. We all have flaws. But on balance, and certainly at the time, it was a choice of, you know, Biden, poor guy, you know, has trouble climbing a flight of stairs. And the other one's fist pumping after getting shot. This is no comparison. I mean, who do you want dealing with some of the toughest people and, you you know other world leaders who are pretty tough themselves and um i mean i'll tell you like one of the things that i think are important um you know i think we want a secure border we don't have a secure border um we want safe and clean cities i think we want to reduce the amount of spending that we're at least slow down the spending. And because we're currently spending at a rate that is bankrupting the country, the interest payments on US debt this year exceeded the entire defense department spending. If this continues, all of the federal government taxes will simply be paying the interest. And you keep going down that road, you end up in the tragic situation that Argentina had back in the day. Argentina used to be one of the most prosperous places in the world. And hopefully with Malay taking to being very far from that. So I think we should not take American prosperity for granted. So we really want to, I think, we've got to reduce the size of government, we've got to reduce the spending, and we've got to live within our means. government, we've got to reduce the spending, and we've got to live within our means. Do you think politicians in general, politicians, governments, how much power do you think they have to steer humanity towards good? I mean, there's a sort of age-old debate in history, like, you know, is history determined by these fundamental tides, or is it determined by the captain of the ship? This is both really. I mean, there are tides in the, but it also matters who's captain of the ship. So, it's a false dichotomy essentially. But I mean, there are certainly tides, the tides of history are, there are certainly tides. The tides of history are... There are real tides of history. And these tides are often technologically driven. If you say, like, the Gutenberg Press, the widespread availability of books as a result of a printing press, that was a massive tide of history, independent of any ruler but you know you I in stormy times you want the best possible captain of the ship well first of all thank you for recommending will and Ariel Durant's work I've read the short one but now the lessons of history lessons of history yeah as a one of history. So one of the lessons, one of the things they highlight is the importance of technology, technological innovation, which is funny because they wrote so long ago, but they were noticing that the rate of technological innovation was speeding up. Yeah, it probably is. To see what they think about now. But yeah, so to me, the question is how much government, how much politicians get in the way of technological innovation and building versus like help it and which politicians, which kind of policies help technological innovation. Because that seems to be, if you look at human history, that's an important component of empires rising and succeeding. Yeah. Well, I mean, in terms of dating civilization, the start of civilization, I think the start of writing, in my view, is what I think is probably the right starting point to date civilization. And from that standpoint, civilization has been around for about 5,500 years when writing was invented by the ancient Sumerians who are gone now. But the ancient Sumerians, in terms of getting a lot of firsts, those ancient Sumerians really have a long list of firsts. It's pretty wild. In fact, Durant goes through the list. It's like, you want to see firsts? We'll show you firsts. It's pretty wild. In fact, Durant goes through the list. It's like, you want to see firsts? We'll show you firsts. The Sumerians were just ass-kickers. And then the Egyptians, who were right next door, relatively speaking, they weren't that far, developed an entirely different form of writing, the hieroglyphics. Cuneiform and hieroglyphics are totally different. And you can actually see the evolution of both hieroglyphics. Cuneiform and hieroglyphics are totally different. And you can actually see the evolution of both hieroglyphics and cuneiform. Like the cuneiform starts off being very simple, and then it gets more complicated. And then towards the end, it's like, wow, okay, they really get very sophisticated with the cuneiform. So I think of civilization as being about 5,000 years old. And Earth is, if physics is correct, four and a half billion years old. So civilization has been around for one millionth of Earth's existence. Flash in the pan. Yeah, these are the early, early days. And so we make it very dramatic because there's been rises and falls of empires. Many. So many rises and falls of empires so many and there'll be many more yeah exactly i mean only a tiny fraction probably less than one percent of of what was ever written in history is it is available to us now i mean if they didn't put it literally chisel it in stone or put it in a clay tablet. We don't have it. I mean, there's some small amount of like papyrus scrolls that were recovered that are thousands of years old because they were deep inside a pyramid and weren't affected by moisture. But other than that, it's really got to be in a clay tablet or chiseled. So the vast majority of stuff was not chiseled because it takes a while to chisel things. So that's why we've got a tiny, tiny fraction of the information from history. But even that little information that we do have and the archaeological record shows so many civilizations rising and falling. It's wild. We tend to think that we're somehow different from those people. One of the other things that durant highlights is that human nature seems to be the same it just persists yeah i mean the basics of human nature are more or less the same so we get ourselves in trouble in the same kinds of ways i think even with the advanced technology yeah i mean you do tend to see the same patterns similar patterns you, for civilizations where they go through a life cycle like an organism, you know, sort of just like a human is sort of a zygote, fetus, baby, you know, toddler, teenager, you know, eventually gets old and dies. The civilizations go through a life cycle. No civilization will last forever. What do you think it takes for the American Empire to not collapse in the near-term future, in the next hundred years, to continue flourishing? years to continue flourishing. Well, the single biggest thing that is often actually not mentioned in history books, but Durant does mention it, is the birthright. So, perhaps to some, a counterintuitive thing happens when civilizations become, are winning for too long. They've been, they, the birth rate declines. It can often decline quite rapidly. We're seeing that throughout the world today. Currently, South Korea is I think maybe the lowest fertility rate, but there are many others that are close to it. It's like 0.8, I think. If the birth rate doesn't decline further, South Korea will lose roughly 60% of its population. But every year, that birth rate is dropping. And this is true through most of the world. I don't mean to single out South Korea. It's been happening throughout the world. So as soon as any given civilization reaches a level of prosperity, the birth rate drops. And now you can go and look at the same thing happening in ancient Rome. Now you can go look at the same thing happening in ancient Rome. So Julius Caesar took note of this, I think around 50-ish BC, and tried to pass, I don't know if he was successful, tried to pass a law to give an incentive for any Roman citizen that would have a third child. And I think Augustus was able to, well, he was the dictator, so the Senate was just for show. I think he did pass a tax incentive for Roman citizens to have a third child. But those efforts were unsuccessful. Rome fell because the Romans stopped making Romans. That's actually the fundamental issue. And there were other things. There was like, they had quite a serious malaria, a series of malaria epidemics and plagues and whatnot. But they had those before. It's just that the birth rate was far lower than the death rate. It really is that simple. More people is acquired. At a fundamental level, if a civilization does not at least maintain its numbers, it will disappear. So perhaps the amount of compute that the biological computer allocates to sex is justified. In fact, we should probably increase it. Well, I mean, there's this hedonistic sex, that the biological computer allocates to sex is justified. In fact, we should probably increase it. Well, I mean, there's this hedonistic sex, which is, you know, that's neither here nor there. Yeah. It's- Not productive. It doesn't produce kids. Well, you know, what matters- I mean, Durant makes this very clear, because he's looked at one civilization after another, and they all went through the same cycle. When the civilization was under stress, the birth rate was high. But as soon as there were no external enemies, or they had an extended period of prosperity, the birth rate inevitably dropped. Every time. I don't believe there's a single exception. So that's like the foundation of it. You need to have people. Yeah. I mean, at a base level, no humans, no humanity. And then there's other things like human freedoms and just giving people the freedom to build stuff. Yeah, absolutely. people the freedom to build stuff. Yeah, absolutely. But at a basic level, if you do not at least maintain your numbers, if you're below replacement rate and that trend continues, you will eventually disappear. It's just elementary. Now, then obviously you also want to try to avoid massive wars. If there's a global thermonuclear war, probably we'll roll toast, you know, radioactive toast. So we want to try to avoid those things. Then there's a thing that happens over time with with any given uh civilization which is that the laws regulations accumulate um and if there's not if there's not some forcing function like a war to clean up the accumulation of laws and regulations eventually everything becomes legal and you that the that's like the hardening of the arteries. Um, or a way to think of it is like being tied down by a million little strings, like Gulliver, you can't move. And it's not like any one of those strings is the issue is you've got a million of them. So there has to be a sort of a garbage collection for laws and regulations so that you don't keep accumulating laws and regulations to the point where you can't do anything. This is why we can't build high-speed rail in America. It's illegal. That's the issue. It's illegal six days to Sunday to build high-speed rail in America. It's illegal six days a Sunday to build high street rail in America. I wish you could just for a week go into Washington and be the head of the committee for making, what is it, for the garbage collection, making government smaller, for moving stuff. I have discussed with Trump the idea of a government efficiency commission. Nice. Yeah. And I would be willing to be part of that commission. Nice. Yeah. And I would be willing to be part of that commission. I wonder how hard that is. The antibody reaction would be very strong. Yeah. So you really have to, you're attacking the matrix at that point. Matrix will fight back. the matrix at that point matrix will fight back how are you doing with that being attacked me attacked yeah there's a lot of it uh yeah there is a lot i mean every day another psyop you know how do you my tinfoil hat how do you keep your just positivity how do you optimism about the world a clarity of thinking about the world so just not become resentful or cynical or all that kind of stuff just getting attacked by you know very large number of people misrepresented oh yeah that's like that's a daily occurrence yes so uh i mean it does get me down at times i mean it makes me sad but um i mean at some point you have to sort of say look the the attacks are by people that actually don't know me um they're and they're trying to generate clicks. So if you can sort of detach yourself somewhat emotionally, which is not easy, and say, okay, look, this is not actually, you know, from someone that knows me or is, they're literally just writing to get, you know, impressions and clicks, then, you know, then I guess it doesn't hurt as much. It's not quite water off a duck's back. Maybe it's like acid off a duck's back. All right. Well, that's good. Just about your own life, what do you use as a measure of success in your life? A measure of success, I'd say, how many useful things can I get done? Day-to-day basis, you wake up in the morning, how can I be useful today? Yeah. Maximize utility, Ariana the Cove usefulness. Very difficult to be useful at scale. At scale. Can you speak to what it takes to be useful for somebody like you? Well, there's so many amazing, great teams. How do you allocate your time to being the most useful? Well, time is the true currency. Yeah. So it is tough to say what is the best allocation of time. I mean, there are often, say, if you look at, say, Tesla, I mean, Tesla this year will do over $100 billion in revenue. So that's $2 billion a week. If I make slightly better decisions, I can affect the outcome by a billion dollars. So then, you know, I try to do the best decisions I can. And on balance, at least compared to the competition, pretty good decisions. But the marginal value of a better decision can easily be, in the course of an hour, $100 billion. Given that, how do you take risks? How do you do the algorithm that you mentioned? I mean, deleting, given that a small thing can be a billion dollars. How do you do the the algorithm that you mentioned i mean deleting given a small thing can be a billion dollars how do you decide to yeah well i think you have to look at it on a percentage basis because if you look at it in absolute terms it's it's just uh i would never get any sleep it would just be like i need to just keep working and work my brain harder. And I'm not trying to get as much as possible out of this meat computer. So it's pretty hard because you can just work all the time. And at any given point, like I said, a slightly better decision could be a $100 million impact for Tesla or SpaceX for that matter. But it is wild when considering the marginal value of time can be $100 million an hour at times or more. Is your own happiness part of that equation of success? It has to be to some degree. Other than if I'm sad, if I'm depressed, I make worse decisions. So I can't have like, if I have zero recreational time, then I make worse decisions. So I've done it a lot, but it's above zero. I mean, my motivation, if I've got a religion of any kind, is a religion of curiosity, of trying to understand, you know, it's really the mission of Grok, understand the universe, I'm trying to understand the universe, or at least set things in motion such that at some point, civilization understands the universe far better than we do today. And even what questions to ask. As Douglas Adams pointed out in his book, the sometimes the answer is the is arguably the easy part to kind of frame the question correctly is the hard part. Once you frame the question correctly, the answer is often easy. So I'm trying to set things in motion such that we are at least at some point able to understand the universe so for SpaceX the goal is to make life multi-planetary and which is if you go to the foamy paradox of where are the aliens you've got these sort of great filters. Like, why have we not heard from the aliens? A lot of people think there are aliens among us. I often claim to be one. Nobody believes me. I did say alien registration card at one point on my immigration documents. Yeah. So I've not seen any evidence of aliens. So it suggests that at least one of the explanations is that intelligent life is extremely rare. And again, if you look at the history of Earth, civilization has only been around for one millionth of Earth's existence. So if aliens had visited here, say, a hundred thousand years ago, they would be like, well, they don't even have writing, you know, just hunter-gatherers, basically. So So how long does a civilization last? So for SpaceX, the goal is to establish a self-sustaining city on Mars. Mars is the only viable planet for such a thing. The moon is close, but it lacks resources, and I think it's probably vulnerable to any calamity that takes out Earth. The Moon is too close, it's vulnerable to a calamity that takes out Earth. So I'm not saying we shouldn't have a Moon base, but Mars would be far more resilient. The difficulty of getting to Mars is what makes it resilient. So, but you know, in going through these various explanations of why don't we see the aliens, why one of them is that they fail to pass these great filters, these key hurdles. And one of those hurdles is being a multi-planet species. So if you're a multi-planet species, then if something were to happen, whether that was a natural catastrophe or a man-made catastrophe, at least the other planet would probably still be around. So you don't have all the eggs in one basket. And once you are sort of a two-planet species, you can obviously extend life-halves to the asteroid belt, maybe to the moons of Jupiter and Saturn, and ultimately to other star systems. But if you can't even get to another planet, definitely not getting to star systems. And the other possible great filters, super powerful technology like AGI, for example. So you're basically trying to knock out one great filter at a time. Digital superintelligence is possibly a great filter. I hope it isn't, but it might be. Guys like, say, Jeff Hinton would say, he invented a number of the key principles in artificial intelligence. I think he puts the probability of AI annihilation around 10 to 20%, something like that. So, you know, so it's not like, you know, look on the bright side, it's 80% likely to be great. So, but I think AI risk mitigation is important. Being a multi-planet species would be a massive risk mitigation. And I do want to sort of once again, emphasize the importance of having enough children to sustain our numbers and not plummet into population collapse, which is currently happening. Population collapse is a real and current thing. So the only reason it's not being reflected in the total population numbers is that as much is because people are living longer. But it's easy to predict, say, what the population of any given country will be. Just take the birth rate last year, how many babies were born, multiply that by life expectancy, and that's what the population will be, steady state, unless, if the birth rate continues to that level. But if it keeps declining, it will be even less and eventually dwindle to nothing. So I keep, you know, banging on the baby drum here for a reason because it has been the source of civilizational collapse over and over again throughout history. And so why don't we just try to stave off that day? Well, in that way, I have miserably failed civilization, and I'm hoping to fix that. I would love to have many kids. Great. Hope you do. No time like the present. Yeah. I've got to allocate more compute to the whole process. But apparently, it's not that difficult. No. It's like unskilled labor. But apparently it's not that difficult. No. It's like unskilled labor. Well, one of the things you do for me, for the world, is to inspire us with what the future could be. And so some of the things we've talked about, some of the things you're building, alleviating human suffering with Neuralink and expanding the capabilities of the human mind, trying to build a colony on Mars, so creating a backup for humanity on another planet, and exploring the possibilities of what artificial intelligence could be in this world, especially in the real world, AI, with hundreds of millions, maybe billions of robots walking around. There will be billions of robots. That seems virtual certainty. Well, thank you for building the future, and thank you for inspiring so many of us to keep building and creating cool stuff, including kids. You're welcome. Go forth and multiply. Go forth and multiply. Go forth and multiply. Thank you, Elon. Thanks for talking, brother. Thanks for listening to this conversation with Elon Musk. And now, dear friends, here's DJ Sa, the co-founder, president, and COO of Neolink. When did you first become fascinated by the human brain? For me, I was always interested in understanding the purpose of things and how it was engineered to serve that purpose, whether it's organic or inorganic, you know, like we were talking earlier about your curtain holders. They serve a clear purpose and they were engineered with that purpose in mind. they serve a clear purpose and they were engineered with that purpose in mind. And, you know, growing up, I had a lot of interest in seeing things, touching things, feeling things, and trying to really understand the root of how it was designed to serve that purpose. And, you know, obviously brain is just a fascinating organ that we all carry. Obviously, brain is just a fascinating organ that we all carry. It's an infinitely powerful machine that has intelligence and cognition that arise from it. We haven't even scratched the surface in terms of how all of that occurs. But also at the same time, I think it took me a while to make that connection to really studying and building tech to understand the brain. Not until graduate school. You know, there were a couple of moments, key moments in my life where some of those, I think, influenced how the trajectory of my life got me to studying what I'm doing right now. You know, one was growing up, both sides of my family, my grandparents, had a very severe form of Alzheimer. And it's incredibly debilitating conditions. I mean, literally, you're seeing someone's whole identity and their mind just losing over time. And I just remember thinking how both the power of the mind, but also how something like that could really lose your sense of identity. It's fascinating that that is one of the ways to reveal the power of a thing by watching it lose the power. Yeah, a lot of what we know about the brain actually comes from these cases where there are trauma to the brain or some parts of the brain that led someone to lose certain abilities. And as a result, there's some correlation and understanding of that part of the tissue being critical for that function. And it's an incredibly fragile organ, if you think about it that way. But also, it's incredibly plastic and incredibly resilient in many different ways. And by the way, the term plastic, as we'll use a bunch, means that it's adaptable. So neuroplasticity refers to the adaptability of the human brain. Correct. Another key moment that sort of influenced how the trajectory of my life have shaped towards the current focus of my life has been during my teenage year when I came to the U.S. You know, I didn't speak a word of English. There was a huge language barrier, and there was a lot of struggle to kind of connect with my peers around me because I didn't understand the artificial construct that we have created called language, specifically English in this case. And I remember feeling pretty isolated, not being able to connect with peers around me. So I spent a lot of time just on my own, you know, reading books, watching movies. And I naturally sort of gravitated towards sci-fi books. I just found them really, really interesting. And also, it was a great way for me to learn English. Some of the first set of books that I picked up are Ender's Game, The Whole Saga by Orson Scott Card, and Neuromancer from William Gibson, and Snow Crash from Neil Stevenson. And movies like Matrix was coming out around that time point that really influenced how I think about the potential impact that technology can have for our lives in general. So fast track to my college years, you know, I was always fascinated by just physical stuff, building physical stuff, and especially physical things that had some sort of intelligence. physical things that had some sort of intelligence. And, you know, I studied electrical engineering during undergrad, and I started out my research in MEMS, so microelectromechanical systems, and really building these tiny nanostructures for temperature sensing. And I just found that to be just incredibly rewarding and fascinating subject to just understand how you can build something miniature like that, that again, serve a function and had a purpose. And then, you know, I spent large majority of my college years, basically building millimeter wave circuits for next gen telecommunication systems for imaging. And it was just something that I found very, very intellectually interesting, phase arrays, how the signal processing works for any modern as well as next-gen telecommunication system, wireless and wireline. EM waves or electromagnetic waves are fascinating. How do you design antennas that are most efficient in a small footprint that you have? How do you make these things energy efficient? That was something that just consumed my intellectual curiosity. And that journey led me to actually apply to and find myself a PhD program at UC Berkeley at kind of this consortium called the Berkeley Wireless Research Center that was precisely looking at building, at the time we called it XG, similar to 3G, 4G, 5G, but the next, next generation G system, and how you would design circuits around that to ultimately go on phones and basically any other devices that are wirelessly connected these days. So I was just absolutely just fascinated by how that entire system works and that infrastructure works. just fascinated by how that entire system works and that infrastructure works. And then also during grad school, I had sort of the fortune of having a couple of resource fellowships that led me to pursue whatever project that I want. And that's one of the things that I really enjoyed about my graduate school career, where you got to kind of pursue your intellectual curiosity in the domain that may not matter at the end of the day, but is something that really allows you the opportunity to go as deeply as you want, as well as as widely as you want. And at the time, I was actually working on this project called the Smart Band-Aid. And the idea was that when you get a wound, there's a lot of other kind of proliferation of signaling pathway that cells follow to close that wound. And there were hypotheses that when you apply external electric field, you can actually accelerate the closing of that field by having, you know, basically electroxing of the cells around that wound site. And specifically, not just for normal wound, there are chronic wounds that don't heal. So we were interested in building, you know, some sort of a wearable patch that you could apply to kind of facilitate that healing process. And that was in collaboration with Professor Michelle Mahervitz, you know, which, you, which was a great addition to kind of my thesis committee and really shaped the rest of my PhD career. So this would be the first time you interacted with biology, I suppose. Correct, correct. I mean, there were some peripheral end application of the wireless imaging and telecommunication system that I was using for security and bioimaging. But this was a very clear, direct application to biology and biological system and understanding the constraints around that and really designing and engineering electrical solutions around it. So that was my first introduction. And that's also kind of how I got introduced to Michel. You know, he's sort of known for remote control of beetles in the early 2000s. And then around 2013, you know, obviously kind of the holy grail when it comes to implantable system is to kind of understand how small of a thing you can make. And a lot of that is driven by how much energy or how much power you can supply to it and how you extract data from it. So at the time at Berkeley, there was this desire to understand in the neural space what sort of system you can build to really miniaturize these implantable systems. And I distinctively remember this one particular meeting where Michel came in and he's like, guys, I think I have a solution. The solution is ultrasound. And then he proceeded to kind of walk through why that is the case. And that really formed the basis for my thesis work called Neural Dust System that was looking at ways to use ultrasound as opposed to electromagnetic waves for powering as well as communication. I guess I should step back and say the initial goal of the project was to build these tiny, about the size of a neuron, implantable system that can be parked next to a neuron, being able to record its state and being able to ping that back to the outside world for doing something useful. And as I mentioned, the size of the implantable system is limited by how you power the thing and get the data off of it. And at the end of the day, fundamentally, if you look at a human body, we're essentially a bag of salt water with some interesting proteins and chemicals, but it's mostly salt water that's very, very well temperature regulated at 37 degrees Celsius. And we'll get into how, why, and later why that's an extremely harsh environment for any electronics to survive, as I'm sure you've experienced, or maybe not experienced, dropping cell phone in a salt water in an ocean. It will instantly kill the device, right? But anyways, just in general electromagnetic waves don't penetrate through this environment well um and just the speed of light it is what it is we can't we can't change it and based on the um the wavelength at which you are interfacing with the device the device just needs to be big. Like these inductors needs to be quite big. And the general good rule of thumb is that you want the wavefront to be roughly on the order of the size of the thing that you're interfacing with. So an implantable system that is around 10 to 100 micron in dimension, in a volume, which is about the size of a neuron that you see in a human body. You would have to operate at like hundreds of gigahertz, which, number one, not only is it difficult to build electronics operating at those frequencies, but also the body just attenuates that very, very significantly. So the interesting kind of insight of this ultrasound um was the fact that ultrasound just travels a lot more effectively in the human body tissue compared to electromagnetic waves and this is something that you encounter uh and you i'm sure most people have encountered in their lives when you go to hospitals that are medical ultrasound sonograph, right? And they go into very, very deep depth without attenuating too much of the signal. So all in all, you know, ultrasound, the fact that it travels through the body extremely well and the mechanism fact that it travels through the body extremely well and the mechanism to which it travels to the body really well is that just the wavefront is very different. It's electromagnetic waves are transverse, whereas in ultrasound waves are compressive. So it's just a completely different mode of wavefront propagation. And as well as speed of sound is orders and orders of magnitude less than speed of light, which means that even at 10 megahertz ultrasound wave, your wavefront ultimately is a very, very small wavelength. So if you're talking about interfacing with the 10 micron or 100 micron type structure, the 10 micron or 100 micron type structure, you would have 150 micron wavefront at 10 megahertz, and building electronics at those frequencies are much, much easier and they're a lot more efficient. So the basic idea kind of was born out of using ultrasound as a mechanism for powering the device and then also getting data back. So now the question is, how do you get the data back? The mechanism to which we landed on is what's called backscattering. This is actually something that is very common and that we interface on a day-to-day basis with our RFID cards, our radio frequency ID tags, where there's actually rarely, you know, in your ID, a battery inside. There's an antenna, and there's some sort of coil that has your serial identification ID. And then there's an external device called a reader that then sends a wavefront. And then you reflect back that wave front with some sort of modulation that's unique to your ID. That's what's called backscattering, fundamentally. So the tag itself actually doesn't have to consume that much energy. And that was a mechanism to which we were kind of thinking about sending the data back. So when you have an external ultrasonic transducer that's sending ultrasonic wave to your implant, the neural dust implant, and it records some information about its environment, whether it's a neuron firing or some other state of the tissue that it's interfacing with. And then it just amplitude modulates the wavefront that comes back to the source. And the recording step would be the only one that requires any energy. So what would require energy in that little step? Correct. So it is that initial kind of startup circuitry to get that recording, amplifying it, and then just modulating. And the mechanism to which that you can enable that is there is this specialized crystal called piezoelectric crystals that are able to convert sound energy into electrical energy and vice versa. So you can kind of have this interplay between the ultrasonic domain and electrical domain that is the biological tissue. So on the theme of parking very small computational devices next to neurons, that's the dream, the vision of brain-computer interfaces. Maybe before we talk about Neuralink, can you give a sense of the history of the field of BCI? What has been maybe the continued dream and also some of the milestones along the way with the different approaches and the amazing work done at the various labs? I think a good starting point is going back to 1790s. I did not expect that. Where the concept of animal electricity or the fact that body's electric was first discovered by luigi galvani where he had this famous experiment where he connected set of electrodes to frog leg and ran current through it and then it started twitching and he said oh my goodness body's electric yeah so fast forward many, many years to 1920s where Hans Berger, who is a German psychiatrist, discovered EEG or electroencephalography, which is still around. There are these electrode arrays that you wear outside the skull that gives you some sort of neural recording. That was a very, very big milestone that you can record some sort of activities about the human mind. And then in the 1940s, there were this group of scientists, Renshaw, Forbes, and Morrison, that inserted these glass microelectrodes into the cortex and recorded single neurons. The fact that there's signals that are a bit more high-resolution and high-fidelity as you get closer to the source, let's say. And in the 1950s, these two scientists, Hodgkin and Huxley, showed up, These two scientists, Hodgkin and Huxley, showed up. And they built these beautiful, beautiful models of the cell membrane and the ionic mechanism and had these circuit diagrams. And as someone who is an electroengineer, it's a beautiful model that's built out of these partial differential equations, talking about flow of ions and how that really leads to how neurons communicate. And they won the Nobel Prize for that 10 years later in the 1960s. So in 1969, Ed Fetz from University of Washington published this beautiful paper called Operant Conditioning of Cortical Unit Activity, where he was able to record a single unit neuron from a monkey and was able to have the monkey modulated based on its activity and reward system. So I would say this is the very, very first example, as far as I'm aware, of closed-loop brain-computer interface, or BCI. brain-computer interface, or BCI. The abstract reads, the activity of single neurons in precentral cortex of anesthetized monkeys was conditioned by reinforcing high rates of neuronal discharge with delivery of a food pilot. Auditory and visual feedback of unit firing rates was usually provided in addition to food reinforcement. Cool. So they actually got it done. They got it done. They got it done. This is back in 1969. After several training sessions, monkeys could increase the activity of newly isolated cells by 50 to 500% above rates before reinforcement. Fascinating. Brain is very plastic. And so from here, the number of experiments grew. Yeah. Number of experiments as well as set of tools to interface with the brain have just exploded. I think, and also just understanding the neural code and how some of the cortical layers and the functions are organized. how some of the cortical layers and the functions are organized. So the other paper that is pretty seminal, especially in the motor decoding, was this paper in the 1980s from Georgiopoulos that discovered that there's this thing called motor tuning curve. So what are motor tuning curves? It's the fact that there are neurons in the motor cortex of mammals, including humans, that have a preferential direction that causes them to fire. So what that means is there are a set of neurons that would increase their spiking activities when you're thinking about moving to the left, right, up, down, and any of those vectors. And based on that, you know, you can start to think, well, if you can't identify those essential eigenvectors, you can do a lot. And you can actually use that information for actually decoding someone's intended movement from the cortex. So that was a very, very seminal kind of paper that showed that there is some sort of code that you can extract, especially in the motor cortex. So there's signal there. And if you measure the electrical signal from the brain, you could actually figure out what the intention was. Correct. Not only electrical signals, but electrical signals from the right set of neurons that give you these preferential direction. Okay, so going slowly towards Neuralink, one interesting question is, what do I understand on the BCI front on invasive versus non-invasive from this line of work? How important is it to park next to the neuron? What does that get you? That answer fundamentally depends on what you want to do with it, right? There's actually incredible amount of stuff that you can do with EEG and electrocortical graph, ECOG, which actually doesn't penetrate the cortical layer or parenchyma, but you place a set of electrodes on the surface of the brain. So the thing that I'm personally very interested in is just actually understanding and being able to just really tap into the high resolution, high fidelity, understanding of the activities that are happening at the local level. And you know we can get into biophysics, but just to kind of step back, to kind of use analogy, because analogy here can be useful. Sometimes it's a little bit difficult to think about electricity. At the end of the day, we're doing electrical recording that's mediated by ionic currents, you know, movements of these charged particles, which is really, really hard for most people to think about. particles, which is really, really hard for most people to think about. But turns out a lot of the activities that are happening in the brain and the frequency band with which that's happening is actually very, very similar to sound waves and our normal conversation audible range. So the analogy that typically is used in the field is, if you have a football stadium, there's a game going on. If you stand outside the stadium, you maybe get a sense of how the game is going based on the cheers and the boos of the home crowd, whether the team is winning or not. But you have absolutely no idea what the score is. You have absolutely no idea what the score is. You have absolutely no idea what individual audience or the players are talking or saying to each other what the next play is, what the next goal is. So what you have to do is you have to drop the microphone near into the stadium and then get near the source, like into the individual chatter. In this specific example, you would want to have it right next to where the huddle is happening. So I think that's kind of a good illustration of what we're trying to do when we say invasive or minimally invasive or implanted brain-computer interfaces versus non-invasive or non-implanted brain interfaces. It's basically talking about where do you put that microphone and what can you do with that information. So what is the biophysics of the read and write communication that we're talking about here as we now step into the efforts at Neuralink? Yeah, so brain is made up of these specialized cells called neurons. There's billions of them, you know, tens of billions. You know, sometimes people call it a hundred billion that are connected in this complex yet dynamic network that are constantly remodeling. You know, they're changing their synaptic weights. And that's, you know, what we typically call neuroplasticity. And the neurons are also bathed in this charged environment that is latent with many charged molecules, like potassium ions, sodium ions, chlorine ions. And those actually facilitate these, you know, through ionic current communication between these different networks. And when you look at a neuron as well, they have these membrane with a beautiful, beautiful protein structure called the voltage selective ion channels, which, in my opinion, is one of nature's best inventions. In many ways, if you think about what they are, they're doing the job of a modern day transistors. Transistors are nothing more at the end of the day than a voltage gated conduction channel. And nature found a way to have that very, very early on in its evolution. And as we all know, with the transistor, you can have many, many computation and a lot of amazing things that we have access to today. So I think it's one of those, just as a tangent, just a beautiful, beautiful invention that the nature came up with, these voltage-gated ion channels. I mean, I suppose there's, on the biological level, every level of the complexity of the hierarchy of the organism, there's going to be some mechanisms for storing information and for doing computation. And this is just one such way. But to do that with biological and chemical components is interesting. Plus, like, when neurons, I mean, it's not just electricity, it's chemical communication, it's also mechanical. I mean, these are like actual objects that have like, that vibrate, I mean, they move. It's all of that. Yeah, there are actually, I mean, there's a lot of really, really interesting physics that are involved in, you know, kind of going back to my work on ultrasound during grad school, there were groups and there are still groups looking at ways to cause neurons to actually fire an action potential using ultrasound wave. And the mechanism to which that's happening is still unclear, as I understand. It may just be that you know you're imparting some sort of thermal energy and that causes cells to depolarize in some interesting ways um but there are also these um ion channels or even membranes that actually just open up its pore as they're being mechanically like shook right vibrated so there's just a lot of you know elements of these like move particles, which again, like that's governed by diffusion physics, right? Movements of particles. And there's also a lot of kind of interesting physics there. Also not to mention as Roger Penrose talks about the, there might be some beautiful weirdness in the quantum mechanical effects of this. And he actually believes that consciousness might emerge in the quantum mechanical effects of this and he actually believes that consciousness might emerge from the quantum mechanical effects there so like there's physics there's chemistry there's biology all of that is going on there oh yeah yeah i mean you can yes i there's there's a lot of levels of physics that you can dive into but yeah in the end you have these um membranes with these voltagegated ion channels that selectively let these charged molecules that are in the extracellular matrix in and out. And these neurons generally have these resting potential where there's a voltage difference between inside the cell and outside the cell. potential where there's a voltage difference between inside the cell and outside the cell. And when there's some sort of stimuli that changes the state such that they need to send information to the downstream network, you start to kind of see these sort of orchestration of these different molecules going in and out of these channels they also open up like more of them open up once it reaches some threshold uh to a point where you know you have a depolarizing cell that sends a action potential so it's a just a very beautiful kind of orchestration of these uh these these um molecules and um what we're trying to do when we place an electrode or parking it next to a neuron is that you're trying to measure these local changes in the potential. Again, mediated by the movements of the ions. And what's interesting, as I mentioned earlier, there's a lot of physics involved. And the two dominant physics for this electrical recording domain is diffusion physics and electromagnetism. And where one dominates, where Maxwell's equation dominates versus Fick's law dominates, depends on where your electrode is. If it's close to the source, mostly electromagnetic based, when you're further away from it, it's more diffusion based. So essentially when you're able to park it next to it, you can listen in on those individual chatter and those local changes in the potential and the type of signal that you get are these canonical textbook neural spiking waveform when you're the moment you're further away and based on some of the studies that people have done um you know christoph koch's lab and and others once you're away from that source by roughly around 100 micron which is about a width of a human hair you no longer hear from that neuron you're no longer able to kind of have the system sensitive enough to be able to record that particular local membrane potential change in that neuron. And just to kind of give you a sense of scale also, when you look at a hundred micron voxel, so a hundred micron by a hundred micron by a hundred micron box in a brain tissue there's roughly around 40 neurons and whatever number of connections that they have so there's a lot in that volume of tissue so the moment you're outside of that you're there's just no hope that you'll be able to detect that change from that one specific neuron that you may care about yeah but as you're moving about this space, you'll be hearing other ones. So if you move another 100 micron, you'll be hearing chatter from another community. Correct. And so the whole sense is you want to place as many as possible electrodes, and then you're listening to the chatter. Yeah, you want to listen to the chatter. And at the end of the day, you also want to basically let the software do the job of decoding. And just to kind of go to why ECOG and EEG work at all, right? This one neuron that's activating, there's many, many other networks that are activating all the time. And you do see sort of a general change in the potential of this electrode, like this charged medium. And that's what you're recording when you're farther away. I mean, you still have some reference electrode that's stable in the brain that's just electroactive organ. And you're seeing some combination aggregate action potential changes, and then you can pick it up, right? It's a much slower changing signals, but there are these canonical oscillations and waves like gamma waves, beta waves, like when you sleep that can be detected because there's sort of a synchronized kind of global effect of the brain that you can detect. And I mean, the physics of this go like, I mean, if we really want to go down that rabbit hole, like there's a lot that goes on in terms of like why diffusion physics at some point dominates when you're further away from the source. You know, it's just a charged medium. So similar to how when you have electromagnetic waves propagating in atmosphere or in a charged medium like a plasma, there's this weird shielding that happens that actually further attenuates the signal as you move away from it. So yeah, you see like if you do a really, really deep dive on kind of the signal attenuation over distance, you start to see kind of one over R-square in the beginning and then exponential drop-off. And that's the knee at which, you know, you go from electromagnetism dominating to diffusion physics dominating. again with the electrodes the the biophysics that you need to understand is is um not as deep because no matter where you're placing that you're listening to a small crowd of local neurons correct yeah so once you penetrate the brain um you know you're in the arena so to speak and there's a lot of neurons there are many many of them but then again there's like uh there's a whole field of neuroscience that's studying like the different groupings, the different sections of the seating in the arena, what they usually are responsible for, which is where the metaphor probably falls apart because the seating is not that organized in an arena. Also, most of them are silent. They don't really do much. Or their activities are, you have to hit it with just the right set of stimulus. So they're usually quiet. They're usually very quiet. I mean, similar to dark energy and dark matter, there's dark neurons. What are they all doing? When you place these electrodes, again, like within this 100 micron volume, you have 40 or so neurons. Like, why do you not see 40 neurons? Why do you see only a handful? What is happening there? Well, they're mostly quiet, but when they speak, they say profound shit, I think. That's the way I'd like to think about it. Anyway, before we zoom in even more, let's zoom out. So how does Neuralink work? From the surgery to the implant to the signal and the decoding process and the human being able to use the implant actually affect the the world outside and all of this I'm asking in the context of there's a gigantic historic milestone then your link just accomplished that in January of this year, putting a Neuralink implant in the first human being, Noland. And there's been a lot to talk about there about his experience because he's able to describe all the nuance and the beauty and the fascinating complexity of that experience, of everything involved. But on the technical level, how does Neuralink work? Yeah, so there are three major components to the technology that we're building. One is the device, the thing that's actually recording these neural chatters. We call it N1 implant or the link. And we have a surgical robot that's actually doing an implantation of these tiny, tiny wires that we call threads that are smaller than human hair. And once everything is surgeries, you have these neural signals, these spiking neurons that are coming out of the brain. And you need to have some sort of software to decode what the users intend to do with that. So there's what's called a Neuralink application or B1 app that's doing that translation. It's running the very, very simple machine learning model that decodes these inputs that are neural signals and then convert it to a set of outputs that allows our first participant Nolan to be able to control a cursor. And this is done wirelessly. And this is done wirelessly so our implant is actually two-part. The link has these flexible tiny wires called threads that have multiple electrodes along its length. And they're only inserted into the cortical layer, which is about 3 to 5 millimeters in a human brain. In the motor cortex region, that's where the intention for movement lies in. where the intention for movement lies in. And we have 64 of these threads, each thread having 16 electrodes along the span of 3 to 4 millimeters, separated by 200 microns. So you can actually record along the depth of the insertion. And based on that signal, there's custom integrated circuit or ASIC that we built that amplifies the neural signals that you're recording and then digitizing it and then has some mechanism for detecting whether there was an interesting event that is a spiking event and decide to send that or not send that through Bluetooth to an external device, whether it's a phone or a computer that's running this Neuralink application. So there's onboard signal processing already just to decide whether this is an interesting event or not. So there is some computational power on board inside, in addition to the human brain. Yeah. So it does the signal processing to kind of really compress the amount of signal that you're recording. So we have a total of 1,000 electrodes sampling at just under 20 kHz with 10-bit each. So that's 200 megabits that's coming through to the chip from 1,000-channel simultaneous neural recording. And that's quite a bit of data. And there are technology available to send that off wirelessly, but being able to do that in a very, very thermally constrained environment that is a brain, so there has to be some amount of compression that happens to send off only the interesting data that you need, which in this particular case for motor decoding is occurrence of a spike or not, and then being able to use that to, you know, decode the intended cursor movement. So the implant itself processes it, figures out whether a spike happened or not with our spike detection algorithm, and then sends it off, packages it, sends it off through Bluetooth to an external device that then has the model to decode okay, based on the spiking inputs, did Nolan wish to go up, down, left, right, or click or right click, or whatever? All of this is really fascinating. But let's stick on the N1 implant itself, so the thing that's in the brain. So I'm looking at a picture of it. There's an enclosure. There's a charging coil, so we didn't talk about the charging, which is fascinating. The battery, the power electronics, the antenna. Then there's the signal processing electronics. I wonder if there's more kinds of signal processing you can do. That's another question. And then there's the threads themselves with the enclosure on the bottom. So maybe to ask about the charging. So there's an external charging device. Yeah, there's an external charging device. So yeah, the second part of the implant, the threads are the ones, again, just the last three to five millimeters are the ones that are actually penetrating the cortex. Rest of it is, actually, most of the volume is occupied by the battery, rechargeable battery. And it's about a size of a quarter. You know, I actually have a device here, if you want to take a look at it. You know, this is the flexible thread component of it. And then this is the implant. So it's about a size of a US quarter. It's about nine millimeter thick. So basically this implant, you implant, once you have the craniectomy and the durectomy, threads are inserted and the hole that you created, this craniectomy, gets replaced with that. So basically that thing plugs that hole and you can screw in these self-drilling cranial screws to hold it in place. And at the end of the day, once you have the skin flap over, there's only about two to three millimeters that's obviously transitioning off of the top of the implant to where the screws are. And that's the minor bump that you have. Those threads look tiny. That's incredible. bump that you have. Those threads look tiny. That's incredible. That is really incredible. That is really incredible. And also, you're right, most of the actual volume is the battery. Yeah. This is way smaller than I realized. They are also, the threads themselves are quite strong. They look strong. And the thread themselves also has a very interesting feature at the end of it called the loop. And that's the mechanism to which the robot is able to interface and manipulate this tiny hair-like structure. And they're tiny. So what's the width of a thread? Yeah, so the width of a thread starts from 16 micron and then tapers out to about 84 micron. So, you know, average human hair is about 80 to 100 micron in width. This thing is amazing. This thing is amazing. Yes. Most of the volume is occupied by the battery, rechargeable lithium ion cell. the battery, rechargeable lithium-ion cell. And the charging is done through inductive charging, which is actually very commonly used. You know, your cell phone, most cell phones have that. The biggest difference is that, you know, for us, you know, usually when you have a phone and you want to charge it on a charging pad, you don't really care how hot it gets. Whereas for us, it matters. There's a very strict regulation and good reasons to not actually increase the surrounding tissue temperature by two degrees Celsius. So there's actually a lot of innovation that is packed into this to allow charging of this implant without causing that temperature threshold to reach. And even small things like you see this charging coil and what's called a ferrite shield, right? So without that ferrite shield, what you end up having when you have resonant inductive charging is that the battery itself is a metallic can, and you form these eddy currents from external charger and that causes heating and that actually contributes to inefficiency in charging. So this ferrite shield, what it does is that it actually concentrates that field line away from the battery and then around the coil that's actually wrapped around it. There's a lot of really fascinating design here to make it, I mean, you're integrating a computer into a complex biological system. Yeah, there's a lot of innovation here. I would say that part of what enabled this was just the innovations in the wearable. There's a lot of really, really powerful, tiny, low-power microcontrollers, temperature sensors, or various different sensors, and power electronics. A lot of innovation really came in the charging coil design, how this is packaged, and how do you enable charging such that you don't really exceed that temperature limit, which is not a constraint for other devices out there. So let's talk about the threads themselves, those tiny, tiny, tiny things. So how many of them are there? You mentioned a thousand electrodes. How many threads are there and what do the electrodes have to do with the threads? Yeah, so the current instantiation of the device has 64 threads, and each thread has 16 electrodes for a total of 1024 electrodes that are capable of both recording and stimulating. And the thread is basically this polymer insulated wire. The metal conductor is the kind of a tiramisu cake of gold plat tie. And they're very, very tiny wires, 2 micron in width. So 2 one millionth of a meter. it's crazy that that thing i'm looking at has the polymer insulation has the conducting material and has 16 electrodes at the end of it on each of those thread yeah on each of those threads correct 16 each one of those you're not going to be able to see it with naked eyes and i i mean to state the obvious or maybe for people who are just listening they're flexible yes yes that's also one element that uh was incredibly important for us um so each of these thread are as i mentioned 16 micron in width and then they taper to 84 micron but in thickness they're less than five micron um and in thickness, it's mostly polyimide at the bottom and this metal track and then another polyimide. So 2 micron of polyimide, 400 nanometer of this metal stack, and 2 micron of polyimide sandwiched together to protect it from the environment that is 37 degrees C, bag of salt water. So what's some, maybe, can you speak to some interesting aspects of the material design here? Like, what does it take to design a thing like this and to be able to manufacture a thing like this for people who don't know anything about this kind of thing? Yeah, so the material selection that we have is not, I don't think it was particularly unique. There were other labs and there are other labs that are kind of looking at similar material stack. around the longevity and reliability of these microelectrodes that we call, compared to some of the other more conventional neural interfaces, devices that are intracranial, so penetrating the cortex, that are more rigid, you know, like the Utah Ray, that are these four by four millimeter kind of silicon shank that have exposed uh recording site at the end of it um and and um you know that's that's been kind of the innovation from Richard Norman back in 1997 uh it's called the Utah Ray because you know he was at University of Utah and what does the Utah Ray look like so it's a rigid type of yeah so we can actually look it up of yeah so we can actually look it up yeah yeah so it's a bed of needle um there's okay yeah i'm sorry those are rigid rigid yeah you won't get it and the size and the number of shanks vary anywhere from 64 to 128 um at the very tip of it is an exposed electrode that actually records neural signal. The other thing that's interesting to note is that unlike neural link threads that have recording electrodes that are actually exposed iridium oxide recording sites along the depth, this is only at a single depth. So these U-TRA spokes can be anywhere between 0.5 millimeters to 1.5 millimeter. And they also have designs that are slanted so you can have it inserted at different depth. But that's one of the other big differences. And then, I mean, the main key difference is the fact that there's no active electronics. These are just electrodes. And then there's a bundle of a wire that you're seeing. And then that actually then exits the craniectomy that then has this port that you can connect to for any external electronic devices. They are working on or have the wireless telemetry device, but it still requires a through-the-skin port that actually is one of the biggest failure modes for infection for the system. What are some of the challenges associated with flexible threads? Like for example on the robotics side, R1 implanting those threads, how difficult is that task? Yeah, so as you mentioned they're very very difficult to maneuver by hand. These utah arrays that you saw earlier, they're actually inserted by a neurosurgeon actually positioning it near the site that they want, and then there's a pneumatic hammer that actually pushes them in. So it's a pretty simple process and they're easier to maneuver. But for these thin film arrays, they're very, very tiny and flexible. So they're very difficult to maneuver. So that's why we built an entire robot to do that. There are other reasons for why we built the robot. And that is ultimately we want this to help millions and millions of people that can benefit from this, and there just aren't that many neurosurgeons out there. And robots can be something that we hope can actually do large parts of the surgery. But the robot is this entire other sort of category of product that we're working on. And it's essentially this multi-axis gantry system that has the specialized robot head that has all of the optics and this kind of a needle retracting mechanism that maneuvers these threads via this loop structure that you have on the thread. So the thread already has a loop structure by which you can grab it. Correct. Correct. So this is fascinating. So you mentioned optics. So there's a robot, R1. So for now, there's a human that actually creates a hole in the skull. And then after that, there's a computer vision component that's finding a way to avoid the blood vessels. And then you're grabbing it by the loop, each individual thread, and placing it in a particular location to avoid the blood vessels. And also choosing the depth of placement all that so controlling every like the 3d geometry of the placement correct so the the aspect of this robot that is unique is that it's not surgeon assisted or human assisted it's a semi-automatic or automatic uh robot once you know, obviously there are human component to it when you're placing targets. You can always move it away from kind of major vessels that you see. But I mean, we want to get to a point where one click and it just does the surgery within minutes. So the computer vision component finds great targets, candidates, and the human kind of approves them. And the robot does, does it do like one thread at a time or does it do? It does one thread at a time. And that's, that's actually also one thing that we are looking at ways to do multiple threads at a time. There's nothing stopping from it. You can have multiple kind of engagement mechanisms. But right now it's one by one. And, you know, we also still do quite a bit of just kind of verification to make sure that it got inserted. If so, how deep? Did it actually match what was programmed in? And so on and so forth. And the actual electrodes are placed at differing depths. I mean, it's very small differences, but differences. Yeah, yeah. depths in the uh like i mean it's very small differences but differences yeah yeah and so that there's some reasoning behind that as you mentioned like it gets more varied signal yeah we i mean we try to place them all around three or four millimeter from the surface um just because the span of the electrode, those 16 electrodes that we currently have in this version, spans roughly around three millimeters. So we want to get all of those in the brain. This is fascinating. Okay, so there's a million questions here. If we zoom in specifically on the electrodes, what is your sense, how many neurons is each individual electrode listening to? Yeah, each electrode can record from anywhere between 0 to 40, as I mentioned earlier. But practically speaking, we only see about, at most, 2 to 3. And you can actually distinguish which neuron it's coming from by the shape of the spikes. So I mentioned the spike detection algorithm that we have. It's called BOSS algorithm, Buffer Online Spike Sorter. It actually outputs at the end of the day, six unique values, which are kind of the amplitude of the day uh six unique values which are um you know kind of the amplitude of these like negative going hump middle hump like uh positive going hump and then also the time at which these happen and from that you can have a kind of a statistical probable probability um estimation of is that a spike is it not a spike and then based on that you could also uh determine oh that spike looks different than that spike must come from a different neuron. Okay, so that's a nice signal processing step from which you can then make much better predictions about if there's a spike, especially in this kind of context where there could be multiple neurons screaming. And that also results in you being able to compress the data better. And just to be clear, and that also results in you being able to compress the data better. Yeah. Okay. And just to be clear, I mean, the labs do this, what's called spike sorting. Usually, once you have these, like, broadband, you know, the fully digitized signals, and then you run a bunch of different set of algorithms to kind of tease apart. It's just all of this, for for us is done on the device. On the device. In a very low power, custom, you know, built ASIC digital processing unit. Highly heat constrained. Highly heat constrained. And the processing time from signal going in and giving you the output is less than a microsecond, which is a very, very short amount of time. Oh, yeah. So the latency has to be super short. Oh, wow. Oh, that's a pain in the ass. Yeah, latency is this huge, huge thing that you have to deal with. Right now, the biggest source of latency comes from the Bluetooth, the way in which they're packetized. And we bin them in 15 millisecond. Oh, interesting. So it's communication constraint is there some potential innovation there on the protocol used absolutely okay yeah bluetooth is definitely not uh our final uh wireless communication protocol that we want to get to it's a high hence the n1 and the r1 i imagine that that increases. NX, RX. Yeah, that's the communication protocol because Bluetooth allows you to communicate against farther distances than you need to so you can go much shorter. Yeah, the primary motivation for choosing Bluetooth is that, I mean, everything has Bluetooth. Oh, right, so you can talk to any device. Interoperability is just absolutely essential, especially in this early phase. And in many ways, if you can access a phone or a computer, you can do anything. Well, it'd be interesting to step back and actually look at, again, the same pipeline that you mentioned for Nolan. So what does this whole process look like from finding and selecting a human being to the surgery, to the first time he's able to use this thing? So we have what's called a patient registry that people can sign up to hear more about the updates. And that was a route to which Nolan applied. And the process is that once the application comes in, it contains some medical records. And based on their medical eligibility, there's a lot of different inclusion, exclusion criteria for them to meet. And we go through a pre-screening interview process with someone from Neuralink. And at some point, we also go out to their homes to do a BCI home audit. Because one of the most kind of revolutionary part about, you know, having this N1 system that is completely wireless is that you can use it at home. Like you don't actually have to go to the lab and, you know, go to the clinic to get connectorized to these specialized equipment that you can't take home with you. So that's one of the key elements of when we're designing the system that we wanted to keep in mind. People hopefully would want to be able to use this every day in the comfort of their home. And so part of our engagement and what we're looking for during BCI home audit is to just kind of understand their situation, what other assistive technology that they use. And we should also step back and kind of say that the estimate is 180,000 people live with quadriplegia in the United States, and each year an additional 18,000 suffer a paralyzing spinal cord injury. So these are folks who have a lot of challenges living a life in terms of accessibility, in terms of doing the things that many of us just take for granted day to day. And one of the things, one of the goals of this initial study is to enable them to have sort of digital autonomy, where they by themselves can interact with a digital device using just their mind, something that you're calling telepathy. So digital telepathy, where a quadriplegic can communicate with a digital device in all the ways that we've been talking about. Control the mouse cursor enough to be able to do all kinds of stuff, including play games and tweet and all that kind of stuff. And there's a lot of people for whom life, the basics of life are difficult because of the things that have happened to them. Yeah, I mean, movement is so fundamental to our existence. I mean, even speaking involves movement of mouth, lip, larynx. And without that, it's extremely debilitating. And there are many, many people that we can help and i mean especially if you start to kind of look at other forms of movement disorders um that are not just from spinal cord injury but from uh you know als uh ms or even stroke that that leads you and or just aging right that leads you to lose some of that mobility, that independence. It's extremely debilitating. And all of these are opportunities to help people, to help alleviate suffering, to help improve the quality of life. But each of the things you mentioned is its own little puzzle that needs to have increasing levels of capability from a device like a Neuralink device. And so the first one you're focusing on is, it's just a beautiful word, telepathy. So being able to communicate using your mind wirelessly with a digital device. Can you just explain this, exactly what we're talking about? Yeah, I mean, it's exactly that. I mean, I think if you are able to control a cursor and able to click and be able to get access to computer or phone, I mean, the whole world opens up to you. that as just definitionally being able to transfer information from my brain to your brain without using some of the physical faculties that we have, like voices. But the interesting thing here is, I think the thing that's not obviously clear is how exactly it works. So in order to move a cursor, there's at least a couple of ways of doing that. So one is you imagine yourself maybe moving a mouse with your hand. Or you can then, which Nolan talked about, like imagine moving the cursor with your mind. But it's like there is a cognitive step here that's fascinating, because you have to use the brain, and you have to learn how to use the brain. And you kind of have to figure it out dynamically, because you reward yourself if it works. So you like, I mean, there's a step that, this is just a fascinating step, because you have to get the brain to start firing in the right way yeah and you do that by imagining uh like fake it till you make it and all of a sudden it creates the right kind of signal that if decoded correctly uh can create the kind of effect and then there's like noise around that you have to figure all of that out but on the human side imagine the cursor moving is what you have to do. Yeah. He says using the force. The force. I mean, isn't that just like fascinating to you that it works? Like to me, it's like, holy shit, that actually works. Like you could move a cursor with your mind. You know, as much as you're learning to use that thing, that thing's also learning about you. Like our, our model is constantly updating the weights to say, oh, if, if someone is thinking about, you know, this sophisticated forms of like spiking patterns, like that actually means to do this, right? So the machine is learning about the human and the human is learning about the machine. So there is adaptability to the signal processing, the decoding step. And then there's the adaptation of Nolan, the human being. Like the same way if you give me a new mouse and I move it, I learn very quickly about its sensitivity. So I'll learn to move it slower. And then there's other kinds of signal drift and all that kind of stuff they have to adapt to. So both are adapting to each other. That's a fascinating software challenge on both sides, the software on both, on the human software and the organic and the inorganic. The organic and the inorganic. Anyway, so sorry to rudely interrupt. the human software and the organic and the inorganic organic and inorganic anyway so sorry to rudely interrupt so there's the selection that nolan has passed with flying colors um so everything including that it's a bci friendly home all of that so what is the the process of the surgery implantation the first moment when he gets to use the system? The end-to-end, you know, we say patient in to patient out is anywhere between two to four hours. In particular case for Nolan, it was about three and a half hours. And there's many steps leading to, you know, the actual robot insertion, right? So there's anesthesia, induction, and we do intra-op CT imaging to make sure that we're drilling the hole in the right location. And this is also pre-planned beforehand. Someone like Nolan would go through fMRI, and then they can think about wiggling their hand. Obviously, due to their injury, it's not going to actually lead to any sort of intended output, but it's the same part of the brain that actually lights up when you're imagining moving your finger to actually moving your finger. And that's one of the ways in which we can actually know where to place our threads, because we want to go into what's called the hand knob area in the motor cortex and, you know, as much as possible, densely put our electrode threads. So yeah, we do intra-op CT imaging to make sure and double check the location of the craniectomy. And surgeon comes in, does their thing in terms of skin incision, craniectomy, so drilling of the skull. And then there's many different layers of the brain. There's what's called a dura, which is a very, very thick layer that surrounds the brain. That gets actually resected in a process called a directomy. And that then exposed the PIA in the brain that you want to insert. And by the time it's been around anywhere between one to one and a half hours, robot comes in, does this thing, placement of the targets, inserting of the thread. That takes anywhere between 20 to 40 minutes in the particular case for Nolan, it was just under or just over 30 minutes. And then after that, the surgeon comes in. There's a couple other steps of like actually inserting the dural substitute layer to protect the thread as well as the brain. And then, yeah, screw in the implant and then skin flap and then suture and then you're out. So, when Nolan woke up, what was that like? What was the recovery like? And when was the first time he was able to use it? So he was actually immediately after the surgery, like an hour after the surgery as he was waking up, we did turn on the device, make sure that we are recording neural signals. And we actually did have a couple signals that we noticed that he can actually modulate. And what I mean by modulate is that he can think about crunching his fist and you could see the spike disappear and appear. That's awesome. And that was immediate, right? Immediate after in the recovery room. How cool is that? Yeah. That's a human being. I mean, what does that feel like for you? This device and a human being, a first step of a gigantic journey. I mean, it's a historic moment. Even just that spike, just to be able to modulate that. You know, obviously there have been other, you know, as you mentioned, pioneers that have participated in these groundbreaking BCI investigational early feasibility studies. So we're obviously standing on the shoulders of the giants here. We're not the first ones to actually put electrodes in the human brain. But, I mean, just leading up to the surgery, I definitely could not sleep. It's the first time that you're working in a completely new environment. We had a lot of confidence based on our benchtop testing, our preclinical R&D studies that the mechanism, the threads, the insertion, all that stuff is very safe and that it's obviously ready for doing this in a human, but there's still a lot of unknown, unknown about can the needle actually insert. I mean, we brought something like 40 needles just in case they break. And we ended up using only one. But I mean, that was a level of just complete unknown, right? Because it's a very, very different environment. And I mean, that's why we do clinical trial in the first place, to be able to test these things out. So extreme nervousness and just many, many sleepless nights leading up to the surgery and definitely the day before the surgery. And it was an early morning surgery. We started at 7 in the morning. And by the time it was around 10.30, everything was done. I mean, first time seeing that, well, number one, just huge relief that this thing is doing what it's supposed to do. And two, I mean, just immense amount of gratitude for Nolan and his family. And then many others that have applied and that we've spoken to and will speak to are true pioneers in every war. And I sort of call them the neural astronauts or neural not. Neural not, yeah. These amazing, just like in the 60s, right? These amazing just pioneers, right? Exploring the unknown outwards, in this case it's inward. But an incredible amount of gratitude for them to just participate and play a part. And it's a journey that we're embarking on together. But also, I think it was just, that was a very, very important milestone, but our work was just starting. So a lot of just kind of anticipation for, okay, what needs to happen next? What are a set of sequences of events that needs to happen for us to make it worthwhile for both Nolan as well as us? Just a link on that, just a huge congratulation to you and the team for that milestone. I know there's a lot of work left, but that is really exciting to see. That's a source of hope. It's this first big step, opportunity to help hundreds of thousands of people and then maybe expand the realm of the possible for the human mind for millions of people in the future. So it's really exciting. So the opportunities are all ahead of us and to do that safely and to do that effectively was really fun to see. As an engineer, just watching other engineers come together and do an epic thing, that was awesome. So huge congrats. Thank you, thank you. Yeah, could not have done it without the team. Thank you. Thank you. It's yeah. Could not have done it without the team. And yeah, I mean, that's the other thing that I, I, you know, told the team as well of just this immense sense of optimism for the future. I mean, it was, it's a very important moment for, for the company, you know, needless to say, as well as, you know, hopefully for many others out there that we can help. So speaking of challenges, Neuralink published a blog post describing that some of the threads are attracted. And so the performance as measured by bits per second dropped at first, but then eventually it was regained. And the whole story of how it was regained is super interesting. That's definitely something I'll talk to Bliss and to Nolan about. was regained and that the whole story of how it was regained is super interesting that's definitely something i'll talk to uh to bliss and to no one about um but in general um can you speak to this whole experience how was the performance regained and um just the the technical aspects of the threads being retracted and moving the main takeaway is that in the end, the performance have come back and it's actually gotten better than it was before. He's actually just beat the world record yet again last week to 8.5 BPS. So, I mean, he's just cranking and he's just improving. The previous one that he set was 8. Correct. He set 8.5. Yeah. The previous world record in human was eight correct he said 8.5 yeah the previous world record uh in human was 4.6 yeah so it's uh almost double and his goal is to try to get to 10 which is rough roughly around kind of the median neural linker uh using a you know mouse with the hand so it's um it's getting there yes. So the performance was regained. Yeah, better than before. So that's a story on its own of what took the BCI team to recover that performance. It was actually mostly on kind of the signal processing. And so, as I mentioned, we were kind of looking at these spike outputs from the electrodes. And what happened is that kind of four weeks into the surgery, we noticed that the threads have slowly come out of the brain. And the way in which we noticed this at first, obviously is that, well, I think Nolan was the first to notice that his performance was degrading. And I think at the time, we were also trying to do a bunch of different experimentation, different algorithms, different sort of UI, UX. So it was expected that there will be variability in the performance, but we did see kind of a steady decline. And then also the way in which we measure the health of the electrodes or whether they're in the brain or not is by measuring impedance of the electrode. So we look at kind of the interfacial, kind of the Randall circuit, let they say, you know, the capacitance and the resistance between the electrosurface and the medium. And if that changes in some dramatic ways, we have some indication. Or if that changes in some dramatic ways, we have some indication. Or if you're not seeing spikes on those channels, you have some indications that something's happening there. And what we notice is that looking at those impedance plot and spike rate plots, and also because we have those electrodes recording along the depth, you're seeing some sort of movement that indicated that the reservoir being pulled out. And that obviously will have an implication on the model side you're seeing some sort of movement that indicated that the reservoir being pulled out. And that obviously will have an implication on the model side, because if you're the number of inputs that are going into the model is changing because you have less of them, that model needs to get updated, right? But there were still signals. And as I mentioned, similar to how even when you place the signals on the surface of the brain or farther away, like outside the skull, you still see some useful signals. What we started looking at is not just the spike occurrence through this BOSS algorithm that I mentioned. But we started looking at just the power of the frequency band that is interesting for Nolan to be able to modulate. So once we kind of changed the algorithm for the implant to not just give you the BOSS output, but also these spike band power output, that helped us sort of refine the model with the new set of inputs. And that was the thing that really ultimately gave us the performance back. You know, in terms of, and obviously like the thing that we want ultimately, and the thing that we are working towards is figuring out ways in which we can keep those threads intact for as long as possible so that we have many more channels going into the model. That's by far the number one priority that the team is currently embarking on to understand how to prevent that from happening. The thing that I will say also is that, as I mentioned, this is the first time ever that we're putting these threads in a human brain. And a human brain, just for size reference, is 10 times that of the monkey brain or the sheep brain. And it's just a very, very different environment. It moves a lot more. It actually moved a lot more than we expected when we did Nolan's surgery. And it's just a very, very different environment than what we're used to. And this is why we do clinical trial, right? We want to uncover some of these issues and failure modes earlier than later. So in many ways, it's provided us with this enormous amount of data and information to be able to solve this. And this is something that Neuralink is extremely good at. Once we have set of clear objective and engineering problem, we have enormous amount of talents across many, many disciplines to be able to come together and fix the problem very, very quickly. But it sounds like one of the fascinating challenges here is for the system and the decoding side to be adaptable across different timescales. So whether it's movement of threads or different aspects of signal drift sort of on the software of the human brain, something changing. Nolan talks about cursor drift that could be corrected, and there's a whole UX challenge to how to do that. So it sounds like adaptability is Nolan talks about cursor drift that could be corrected. And there's a whole UX challenge to how to do that. So it sounds like adaptability is like a fundamental property that has to be engineered in. It is. And I think, I mean, as a company, we're extremely vertically integrated. You know, we make these thin film arrays in our own micro fab. integrated. We make these thin film arrays in our own microfab. Like you said, built in-house. This whole paragraph here from this blog post is pretty gangster. Building the technologies described above has been no small feat. There's a bunch of links here that I recommend people click on. We constructed in-house microfabrication capabilities to rapidly produce various iterations of thin film arrays that constitute our electrode threads. We created a custom femtosecond laser mill to manufacture components with micro-level precision. I think there's a tweet associated with this. That's a whole thing that we can get into. Yeah, this, okay. What are we looking at here? This thing. So, in less than one minute, our custom-made femtosecond laser mill cuts this geometry in the tips of our needles. So we're looking at this weirdly shaped needle. The tip is only 10 to 12 microns in width, only slightly larger than the diameter of a red blood cell. The small size allows threads to be inserted with minimal damage to the cortex. Okay, so what's interesting about this geometry? So we'll look at this geometry of a needle. This is the needle that's engaging with the loops in the thread. So they're the ones that thread the loop and then peel it from the silicon backing and then this is the thing that gets inserted into the tissue and then this pulls out leaving the thread. And this kind of a notch or the shark tooth that we used to call is the thing that actually is grasping the loop and then it's designed in such a way such that when you pull is grasping the loop. And then it's designed in such a way such that when you pull out, it leaves the loop. And the robot is controlling this needle. Correct. So this is actually housed in a cannula. And basically, the robot has a lot of the optics that look for where the loop is. There's actually a 405 nanometer light that actually causes the polyimide to fluoresce so that you can locate the the location of the loop um so the loop lights up yeah yeah they do it's a micron precision process what's interesting about the robot that it takes to do that that's that's pretty crazy that's pretty crazy that robot is able to get this kind of precision? Yeah, our robot is quite heavy. Our current version of it, there's, I mean, it's like a giant granite slab that weighs about a ton because it needs to be sensitive to vibration, environmental vibration. And then as the head is moving, at the speed that it's moving, there's a lot of kind of motion control to make sure that you can achieve that level of precision. A lot of optics of kind of motion control to make sure that you can achieve that level of precision um a lot of optics that kind of zoom in on that um you know we're working on next generation of the robot that is lighter easier to transport i mean it is a it is a feat to move the robot and it's far superior to a human surgeon at this time for this particular task absolutely i mean let alone you try to actually thread a loop in a sewing kit. I mean, this is like, we're talking like fractions of human hair. These things are, it's not visible. So continuing the paragraph, we developed novel hardware and software testing systems, such as our accelerated lifetime testing racks and simulated surgery environment, which is pretty cool, to stress test and validate the robustness of our technologies. We performed many rehearsals of our surgeries to refine our procedures and make them second nature. This is pretty cool. We practice surgeries on proxies with all the hardware and instruments needed in our mock or in the engineering space. This helps us rapidly test and measure. So there's like proxies. Yeah, this proxy is super cool, actually. So there's a 3D printed skull from the images that is taken at Barrow, as well as this hydrogel mix, you know, sort of synthetic polymer thing that actually mimics the mechanical properties of the brain. It also has vasculature of the person. So basically what we're talking about here, and there's a lot of work that has gone into making this set proxy that, you know, it's about like finding the right concentration of these different synthetic polymers to get the right set of consistency for the needle dynamics you know as they're being inserted but we practice this surgery with the person you know nolan's basically physiology and brain um many many times prior to actually doing the surgery so every every step every step every step yeah like where does someone stand? Like, I mean, like what you're looking at is the picture. This is in, in, in our office of this kind of corner of the robot engineering space that we, you know, have created this like mock or space that looks exactly like what they would experience. All the staff would experience doing their actual surgery. So, I mean, it's just kind of like any dance rehearsal where you know exactly where you're going to stand at what point. And you just practice that over and over and over again with an exact anatomy of someone that you're going to surgeries. And it got to a point where a lot of our engineers, when we created a craniectomy, they're like, oh, that looks very familiar. We've seen that before. Yeah. And there's wisdom you can gain through doing the same thing over and over and over. It's like Jiro dreams of sushi kind of thing. Because then it's like Olympic athletes visualize the Olympics. And then once you actually show up, it feels easy. It feels like any other day. It feels almost boring winning the gold medal. Because you visualize this so many times, you've practiced this so many times, and nothing about it is new. It's boring. You win the gold medal, it's boring. And the experience they talk about is mostly just relief. Probably that they don't have to visualize it anymore. Yeah, the power of the mind to visualize and where i mean there's a whole field that studies where muscle memory lies in cerebellum yeah it's incredible uh i think it's a good place to actually ask sort of the big question people might have is how do we know every aspect of this that you described is safe? At the end of the day, the gold standard is to look at the tissue. You know, what sort of trauma did you cause the tissue? And does that correlate to whatever behavioral anomalies that you may have seen? And that's the language to which we can communicate about the safety of inserting something into the brain and what type of trauma that you can cause. So we actually have an entire department of pathology that looks at these tissue slices. There are many steps that are involved in doing this once you have studies that are launched with particular endpoints in mind. At some point, you have to euthanize the animal, and then you go through necropsy to collect the brain tissue samples. You fix them in formalin, and you gross them, you section them, and you look at individual slices just to see what kind of reaction or lack thereof exists. So that's the kind of the language to which FDA speaks. And, you know, as well for us to kind of evaluate the safety of the insertion mechanism, as well as the threats at various different time points, you know, both acute. So anywhere between, you know, zero to three months to beyond three months. So those are kind of the details of an extremely high standard of safety that has to be reached. Correct. FDA supervises this, but there's in general just a very high standard. And every aspect of this, including the surgery. I think Matthew McDougall has mentioned that like the standard is, let's say, how to put it politely, higher than maybe some other operations that we take for granted. So the standard for all the surgical stuff here is extremely high. Very high. I mean, it's a highly, highly regulated environment with the governing agencies that scrutinize every medical device that gets marketed. And I think it's a good thing. You know, it's good to have those high standards. And we try to hold extremely high standards to kind of understand what sort of damage, if any, these innovative emerging technologies and new technologies that we're building are. these innovative emerging technologies and new technologies that we're building are. And so far, we have been extremely impressed by lack of immune response from these threads. Speaking of which, you talked to me with excitement about the histology and some of the images that you're able to share. Can you explain to me what we're looking at? Yeah, so what you're looking at is a stained tissue image. So this is a sectioned tissue slice from an animal that was implanted for seven months, so kind of a chronic time point. And you're seeing all these different colors, and each color indicates specific types of cell types. So purple and pink are astrocytes and microglia, respectively. They're types of glial cells. And the other thing that people may not be aware of is your brain is not just made up of soup of neurons and axons. There are other cells like glial cells that actually kind of is the glue and also react if there are any trauma or damage to the tissue. The brown are the neurons. The brown are the neurons. So what you're seeing is in this kind of macro image, you're seeing these like circle highlighted in white, the insertion sites and when you zoom into one of those you see the threads and then in this particular case i think we're seeing about the 16 uh you know wires that are going into the page and the incredible thing here is the fact that you have the neurons that are these brown structures or brown circular or elliptical thing that are actually touching and abutting the threads. So what this is saying is that there's basically zero trauma that's caused during this insertion. And with these neural interfaces, these microelectrodes that you insert, that is one of the most common mode of failure. So when you insert these threads, like the Utah ray, it causes neuronal death around the site because you're inserting a foreign object, right? And that kind of elicits these immune response through microglia and astrocytes. They form this protective layer around it. Not only are you killing the neuron cells, but you're also creating this protective layer that then basically prevents you from recording neural signals because you're getting further and further away from the neurons that you're trying to record. And that is the biggest mode of failure. And in this particular example, in that inset, it's about 50 micron with that scale bar. The neurons just seem to be attracted to it. And so there's certainly no trauma. That's such a beautiful image, by the way. So the brown are the neurons. And for some reason, I can't look away it's really cool and and the way that these things like i mean your tissues generally don't have these beautiful colors um this is a multiplex stain that uses these different uh proteins that are staining these at different colors you know we use very standard set of um you know staining techniques with HG, EBA1, and NUEN, and GFAP. So if you go to the next image, this also kind of illustrates the second point because you can make an argument and initially when we saw the previous image, we said, oh, are the threads just floating? What is happening here? Are we actually looking at the right thing? So what we did is we did another stain and this is all done in-house of this. Mason's trichrome stain, which is in blue that shows these collagen layers. So the blue basically like you don't want the blue around the implant threads. Cause that means that there's some sort of scarring does happen. And what you're seeing, if you look at individual threads is that you don't see any of the blue, which means that there has been absolutely or very, very minimal to a point where it's not detectable amount of trauma in these inserted threads. So that presumably is one of the big benefits of having this kind of flexible thread. Yeah, so we think this is primarily due to the size as well as the flexibility of the threads. Also, the fact that R1 is avoiding vasculature, so we're not disrupting or we're not causing damage to the vessels and not breaking any of the blood-brain barrier has basically caused the immune response to be muted. But this is also a nice illustration of the size of things. So this is the tip of the thread. Yeah, those are neurons. And they're neurons. And this is the thread listening. And the electrodes are positioned how? Yeah, so what you're looking at is not electrodes themselves. Those are the conductive wires. So each of those should probably be 2 micron in width. what you're looking at is not electrode themselves. Those are the conductive wires. So, each of those should probably be two micron in width. So, what we're looking at is we're looking at the coronal slice. So, we're looking at some slice of the tissue. So, as you go deeper, you know, you'll obviously have less and less of the tapering of the thread. But, yeah, the point basically being that there's just kind of cells around the inserticide, which is just an incredible thing to see. I've just never seen anything like this. How easy and safe is it to remove the implant? Yeah, so it depends on when. In the first three months or so after the surgery, in the first three months or so after the surgery, there's a lot of kind of tissue modeling that's happening. You know, similar to when you get a cut, you know, you obviously, you know, start over first couple of weeks or depending on the size of the wound, scar tissue forming, right? There are these like contracted, and then in the end they turn into scab and you can scab it off. The same thing happens in the brain. And it's a very dynamic environment. And before the scar tissue or the neomembrane or the new membrane that forms, it's quite easy to just pull them out. And there's minimal trauma that's caused during that. Once the scar tissue forms, and with Nolan as well, we believe that that's the thing that's caused during that. Once the scar tissue forms, and with Nolan as well, we believe that that's the thing that's currently anchoring the threads. So we haven't seen any more movements since then. So they're quite stable. It gets harder to actually completely extract the threads. So our current method for removing the device is cutting the thread, leaving the tissue intact, and then unscrewing and taking the implant out. And that hole is now going to be plugged with either another Neuralink or just with kind of a peak-based, plastic-based cap. Is it okay to leave the threads in there forever? Yeah, we think so. We've done studies where we left them there, and one of the biggest concerns that we had is like, do they migrate and do they get to a point where they should not be? We haven't seen that. Again, once the scar tissue forms, they get anchored in place. And I should also say that when we say upgrades, we're not just talking in theory here. We've actually upgraded many, many times. Most of our monkeys or non-human primates, NHP, have been upgraded. Pager, who you saw playing Mind P mind pong has the latest version of the device since two years ago and is seemingly very happy and healthy and fat so what's uh designed for the future the upgrade procedure so uh maybe uh for noland what what would the upgrade look like it was essentially what you're mentioning. Is there a way to upgrade sort of the device internally where you take it apart and sort of keep the capsule and upgrade the internals? Yeah, so there are a couple of different things here. So for Nolan, if we were to upgrade, what we would have to do is either cut the threads or extract the threads, depending on kind of the situation there in terms of how they're anchored or scarred in. If you were to remove them with the dual substitute, you have an intact brain, so you can reinsert different threads with the updated implant package. There are a couple of different other ways that we're thinking about the future of what the upgradeable system looks like. One is, at the moment, we currently remove the dura, this kind of thick layer that protects the brain. But that actually is the thing that actually proliferates the scar tissue formation. So typically, the general good rule of thumb is you want to leave the nature as is and not disrupt it as much. So we're looking at ways to insert the threads through the dura, which comes with different set of challenges, such as, you know, it's a pretty thick layer. So how do you actually penetrate that without breaking the needle? So we're looking at different needle design for that, as well as the kind of the loop engagement. The other biggest challenges are it's quite opaque optically and with white light illumination. So how do you avoid still this biggest advantage that we have of avoiding vasculature? How do you image through that? How do you actually still mediate that? So there are other imaging techniques that we're looking at to enable that. But our hypothesis is that, and based on some of the early evidence that we have, doing through the dura insertion will cause minimal scarring that causes them to be much easier to extract over time. And the other thing that we're also looking at, this is going to be a fundamental change in the implant architecture is as at the moment it's a monolithic single implant that comes with a thread that's um bonded together so you can't actually separate the thing out but you can imagine having two-part implant um you know bottom part that is the thread that are inserted that has the chips um and maybe a radio and some power source and then you have another implant that has more chips and maybe a radio and some power source. And then you have another implant that has more of the computational heavy load and the bigger battery. And then one can be under the dura, one can be above the dura, like, you know, being the plug for the skull. They can talk to each other, but the thing that you want to upgrade, the computer and not the threads, if you want to upgrade that, you just go in there, you know, remove the screws and then put in the next version. And, you know, you're off to, you know, it's a very, very easy surgery to like, you do a skin incision, slip this in, screw, probably be able to do this in 10 minutes. So that would allow you to reuse the threads sort of. Correct. So, I mean, this leads to the natural question of, uh, what is the pathway to scaling the increase in the number of threads? Is a priority is that like what's what's the technical uh challenge there yeah that that is a priority so for next versions of the implant um you know the key metrics that we're looking to improve are number of channels just recording from more and more neurons um you know we have a pathway to actually go from currently 1,000 to, you know, hopefully 3,000, if not 6,000 by end of this year. And then end of next year, we want to get to, you know, even more, 16,000. Wow. There's a couple of limitations to that. One is, you know, obviously being able to photolithographically print those wires. As I mentioned, it's two micron in width and spacing. Obviously, there are chips that are much more advanced than those types of resolution, and we have some of the tools that we have brought in-house to be able to do that. So traces will be narrower just so that you have to have more of the wires coming up into the chip. Chips also cannot linearly consume more energy as you have more and more channels. So there's a lot of innovations in the circuit, you know, in architecture as well as the circuit design topology to make them lower power. You need to also think about if you have all of these spikes, how do you send that off to the end application? So you need to think about bandwidth limitation there and potentially innovations in signal processing. Physically, one of the biggest challenges is going to be the interface. It's always the interface that breaks. Bonding the thin film array to the electronics, it starts to become very, very highly dense interconnects. So how do you connectorize that there's a lot of innovations in kind of the 3D integrations in the recent years that we can take advantage of one of the biggest challenges that we do have is forming this hermetic barrier this is an extremely harsh environment that we're in the brain so how do you protect it from the brain trying so how do you protect it from uh yeah like the brain trying to kill your electronics to also your electronics leaking things that you don't want into the brain and that forming that hermetic barrier is going to be a very very big challenge that we uh you know i think are actually well suited to tackle how do you test that like what's the development environment yeah to simulate that kind of harshness? Yeah, so this is where the accelerated life tester essentially is a brain in a vat. It literally is a vessel that is made up of, and again, for all intents and purpose for this particular type of test, your brain is a salt water. of tests, your brain is a salt water. And you can also put some other set of chemicals like reactive oxygen species that get at these interfaces and try to cause a reaction to pull it apart. But you could also increase the rate at which these interfaces are aging by just increasing temperature. So every 10 degrees Celsius that you increase, you're basically accelerating time by 2x. And there's a limit as to how much temperature you want to increase, because at some point, there's some other nonlinear dynamics that causes you to have other nasty gases to form that just is not realistic in an environment. So what we do is we increase in our ALT chamber by 20 degrees Celsius that increases the aging by four times. So essentially one day in ALT chamber is four day in calendar year. And we look at whether the implants still are intact, including the threads. And operation and all of that. And operation and all of that. And operation and all of that. Obviously it's not an exact same environment as a brain because, you know, brain has mechanical, you know, other more biological groups that attack at it. But it is a good test environment, testing environment for at least the enclosure and the strength of the enclosure. And I mean, we've had implants, the current version of the implant that has been in there for, I mean, close to two and a half years, which is equivalent to a decade, and they seem to be fine. So it's interesting that the, so basically a close approximation is warm salt water. Hot salt water is a good testing environment. By the way, I'm drinking element, which is basically salt water, which is making me kind of – it doesn't have computational power the way the brain does, but maybe in terms of other characteristics, it's quite similar. And I'm consuming it. Yeah, you have to get it in the right pH too. And then consciousness will emerge. Yeah. No. By the way, the other thing that also is interesting about our enclosure is if you look at our implant, it's not your common looking medical implant that usually is encased in a titanium can that's laser welded. We use this polymer called PCTFE, polychlorotrifluoroethylene, which is actually commonly used in blister packs. So when you have a pill and you try to pop the pill, there's like kind of that plastic membrane. That's what this is. No one's actually ever used this except us. And the reason we wanted to do this is because it's electromagnetically transparent. So when we talked about the electromagnetic inductive charging with titanium can, usually if you want to do something like that, you have to have a sapphire window and it's a very, very tough process to scale. So you're doing a lot of iteration here in every aspect of this. The materials, the software, the whole shebang. So, okay. So you mentioned scaling. Is it possible to have multiple Neuralink devices as one of the ways of scaling? To have multiple neural link devices implanted. That's the goal. That's the goal. Yeah. We we've had, we've had, um, I mean, our monkeys have had two neural links, one in each hemisphere, and then we're also looking at, you know, potential of having one in more cortex, one in visual cortex and one in wherever other cortex. So focusing on a particular function, one neural link device. I mean, I wonder if there's some level of customization that can be done on the compute side. So for the motor cortex. Absolutely. That's the goal. And we talk about at Neuralink building a generalized neural interface to the brain. And that also is strategically how we're approaching this with marketing and also with regulatory, which is, hey, look, we have the robot, and the robot can access any part of the cortex. Right now, we're focused on motor cortex with current version of the N1 that's specialized for motor decoding tasks. But also at the end of the day, there's kind of a general compute available there. But typically, if you want to really get down to kind of hyper-optimizing for power and efficiency, you do need to get to some specialized function, right? But what we're saying is that, hey, but what we're saying is that hey, you are now used to this robotic insertion techniques which took many, many years of showing data and in conversation with the FDA and also internally convincing ourselves that this is safe and now the difference is if we go to other parts of the brain like visual cortex which we're interested in as our second product, obviously it's a completely different environment, the cortex is laid out very very differently. It's going to be more stimulation focused rather than recording, just kind of creating visual percepts. But in the end we're using the same thin film array technology. We're using the same robot insertion technology. We're using the same packaging technology. Now it's more the conversation is focused around what are the differences and what are the implications of those differences in safety and efficacy. The way you said second product is both hilarious and awesome to me. That product being restoring sight for blind people so can you speak to stimulating the visual cortex i mean the possibilities there are just incredible to be able to give that gift back to people who don't have sight or even any aspect of that. Can you just speak to the challenges of, there's several challenges here. One of which is, like you said, from recording to the stimulation, just any aspect of that that you're both excited and see the challenges of. Yeah, I guess I'll start by saying that we actually have been capable of stimulating through our dental meray as well as our electronics for years. We have actually demonstrated some of that capabilities for reanimating the limb in the spinal cord. Obviously, for the current EFS study, you know, we've hardware disabled that, so that's, that's something that, you know, we wanted to embark as a separate, separate journey. Um, and, and, you know, obviously there are many, many different ways to write information into the brain. The way in which we're doing that is through electrical, you know, passing electrical current and, and kind of causing that to really change the local environment so that you can artificially cause the neurons to depolarize in nearby areas. For vision specifically, the way our visual system works, it's both well understood i mean anything with kind of brain there are aspects of it that's well understood but in the end like we don't really know anything but the way visual system works is that you have photon hitting your eye and in your eyes you know there are these specialized cells called photoreceptor cells that convert the photon energy into electrical signals and then they get that then gets projected to um your back of your head your visual cortex um you know it goes through actually um you know thelemic system called lgn that then projects it out and then in the visual cortex there's you know visual area one or v, and then there's a bunch of other higher level processing layers like V2, V3. And there are actually kind of interesting parallels. And when you study the behaviors of these convolutional neural networks, like what the different layers of the network is detecting, first they're detecting these edges, and they're then detecting some more natural curves and then they start to detect like objects right kind of similar thing happens in the brain um and a lot of that has been inspired and also you know it's been kind of exciting to see some of the correlations there um but you know things like from there where does cognition arise and where where's color encoded there's there's just not a lot of um understanding fundamental understanding there so in terms of kind of bringing sight back to those that are blind um there are many different forms of blindness there's actually a million people one million people in the u.s that are legally blind um you know that means like certain uh like score below in kind of the visual test. I think it's something like if you can see something at 20 feet distance that normal people can see at 200 feet distance, like if you're worse than that, you're legally blind. So fundamentally, that means you can't function effectively using sight in the world. Yeah, like to navigate your environment. And yeah, there are different forms of blindness. and effectively using sight in the world. Yeah, like to navigate your environment. And yeah, there are different forms of blindness. There are forms of blindness where there's some degeneration of your retina, these photoreceptor cells, and the rest of your visual processing that I described is intact. And for those types of individuals, you may not need to maybe stick electrodes into the visual cortex. You can actually build retinal prosthetic devices that actually just replaces a function of that retinal cells that are degenerated. And there are many companies that are working on that but that's a very small slice the obvious significance the smaller slice of folks that are legally blind um you know if there's any damage along that circuitry whether it's in the optic nerve or you know just the lgn circuitry or any any break in that circuit that's not going to work for you um and uh that's not going to work for you um and uh the source of where you need to actually cause that visual percept to happen because your biological mechanism of doing that is by placing electrodes in the visual cortex in the back of your head and the way in which this would work is that you would have an external camera whether it's um you's something as unsophisticated as a GoPro or some sort of wearable Ray-Ban type glasses that Meta's working on, that captures a scene. And that scene is then converted to a set of electrical impulses or stimulation pulses that you would activate in your visual cortex through these thin film arrays. And by playing some concerted orchestra of these stimulation patterns, you can create what's called phosphenes, which are these kind of white, yellowish dots that you can also create by just pressing your eyes. You can actually create those percept pressing your eyes um you can actually create those percepts by stimulating the visual cortex and the name of the game is really have many of those and have those percepts be the phosphenes be as small as possible so that you can start to tell apart like they're the individual pixels of the the of the screen right so if you have many many of those you know potentially you'll be able to, in the long term, be able to actually get naturalistic vision. But in the short term to maybe midterm, being able to at least be able to have object detection algorithms run on your glasses, the pre-pop processing units, and then being able to at least see the edges of things so you don't bump into stuff it's incredible this is really incredible so you basically would be adding pixels and your brain would start to figure out what those pixels mean yeah and like with different kinds of the system and the signal processing on all fronts yeah the thing that actually it's a couple things one is um you know, obviously if you're blind from birth, the way brain works, especially in the early age, neuroplasticity is really nothing other than, you know, kind of your brain and different parts of your brain fighting for the limited territory. territory. And I mean, very, very quickly, you see cases where people that are, I mean, you also hear about people who are blind that have heightened sense of hearing or some other senses. And the reason for that is because that cortex that's not used just gets taken over by these different parts of the cortex. So for those types of individuals, I mean, I guess they're going to have to now map some other parts of their senses into what they call vision. But it's going to be obviously a very, very different conscious experience. So I think that's an interesting caveat. The other thing that also is important to highlight is that we're currently limited by our biology in terms of the wavelength that we can see. There's a very, very small wavelength that is a visible light wavelength that we can see with our eyes. But when you have an external camera with this BCI system, you're not limited to that. You can have infrared, you can have UV, you can have whatever other spectrum that you want to see. And whether that gets mapped to some sort of weird conscious experience, I've no idea. But oftentimes I talk to people about the goal of Neuralink being going beyond the limits of our biology. That's sort of what I mean. And if you're able to control the kind of raw signal, when we use our sight, we're getting the photons, and there's not much processing on it. If you're able to control that signal, maybe you can do some kind of processing. Maybe you do object detection ahead of time. Yeah. You're doing some kind of pre-processing, and there's a lot of possibilities to explore that. So it's not just increasing sort of thermal imaging that kind of stuff but it's also just doing some kind of interesting processing yeah i mean my my theory of how like visual system works also is that um i mean there's just so many things happening in the world and there's a lot of photons that are going into your eye and it's unclear exactly where some of the pre-processing steps are happening but i mean i actually think that just just from a fundamental perspective there's just so much uh the reality that we're in if it's a reality um is so there's so much data and I think humans are just unable to actually. Like eat enough actually to process all that information. So there's some sort of filtering that does happen, whether that happens in the retina, whether that happens in different layers of the visual cortex, unclear, but like the analogy that I sometimes think about is, you know, if, uh, if your brain is a CCD camera and the and all of the information in the world is a sun, and when you try to actually look at the sun with the CCD camera, it's just going to saturate the sensors, right? Because it's an enormous amount of energy. So what you do is you end up adding these filters, right? To just kind of narrow the information that's coming to you and being captured. And I think, you know, things like our experiences or our, you know, like drugs, like propofol, like anesthetic drug or, you know, psychedelics, what they're doing is they're kind of swapping out these filters and putting in new ones or removing older ones and kind of controlling our conscious experience. Yeah, man, not to distract from the topic, but I just took a very high dose of ayahuasca in the Amazon jungle. So, yes, it's a nice way to think about it. You're swapping out different experiences. experiences and with Neuralink being able to control that primarily at first to improve function, not for entertainment purposes or enjoyment purposes, but... Yeah, giving back lost functions. Well, giving back lost functions. And there, especially when the function is completely lost, anything is a huge help. Would you implant a Neuralink device in your own brain? Absolutely. I mean, maybe not right now, but absolutely. What kind of capability, once reached, you start getting real curious and almost get a little antsy, like jealous of people as you watch them get implanted? Yeah, I mean, I think even with our early participants, if they start to do things that I can't do, which I think is in the realm of possibility for them to be able to get 15, 20, if not like 100 BPS, right? There's nothing that fundamentally stops us from being able to achieve that type of performance um i mean i would certainly get jealous um that they can do that i should say that watching no and i get a little jealous because he's having so much fun and it seems like such a chill way to play video games yeah so i mean the thing that also is hard to appreciate sometimes is that you know he, he's doing these things while talking. I mean, it's multitasking, right? So it's clearly, it's obviously cognitively intensive, but similar to how, you know, when we talk, we move our hands, like these things like, you know, are multitasking. I mean, he's able to do that. our hands like these things like you know you like are multitasking i mean he's able to do that and you know you won't be able to do that with other assistive technology as far as i i'm aware you know if you're obviously using like an eye tracking device you know you're very much fixated on that thing that you're trying to do and if you're using voice control i mean like if you say some other stuff yeah you don't get to use that. Yeah, the multitasking aspect of that is really interesting. So it's not just the BPS for the primary task, it's the parallelization of multiple tasks. If you measure the BPS for the entirety of the human organism, so if you're talking and doing a thing with your mind and looking around also, I mean, there's just a lot of parallelization that can be happening but i mean i think at some point for him like if he wants to really achieve those high level bps it does require like you know full attention right um and that's a separate circuitry that that um is a big mystery like how attention works and you know yeah attention like cognitive load i've done i've read a lot of literature on people doing two tasks like uh you have your primary task and a secondary task and the secondary task is is a source of distraction and how does that affect the performance of the primary task and there's depending on the task there's a lot of interesting i mean this is an interesting computational device right and i think there's to say the least a lot of novel insights that can be gained from everything i mean i personally am surprised that no one's able to do such incredible control of the cursor while talking and also being nervous at the same time because he's talking like all of us are if you're talking in front of the camera you get nervous so all of those are coming into play he's able to still achieve high performance surprising i mean all of this is really amazing uh and i think just after researching this really in depth i kind of wanted you know like get in the line and also the safety kit in mind well we should say the registry is for people who have quadriplegia and all that kind of stuff. So, there'll be a separate line for people. They're just curious, like myself. So, now that Nolan, patient P1 is part of the ongoing prime study, part of the ongoing prime study. What's the high level vision for P2, P3, P4, P5, and just the expansion into other human beings that are getting to experience this implant? Yeah, I mean, the primary goal is, you know, for for our study in the first place is to achieve safety endpoints, just understand safety of this device, as well as the implantation process. And also at the same time, understand the efficacy and the impact that it could have on the potential users lives. And just because you have, you know, you're living with tetraplegia, it doesn't mean your situation is same as another're living with tetraplegia, it doesn't mean your situation is same as another person living with tetraplegia. It's wildly, wildly varying. And it's something that we're hoping to also understand how our technology can serve not just a very small slice of those individuals, but broader group of individuals and being able to get the feedback to just really build just the best product for them. So there's obviously also goals that we have. And the primary purpose of the early feasibility study is to learn from each and every participant to improve the device, improve the surgery before we embark on what's called a pivotal study that then is a much larger trial that starts to look at statistical significance of your endpoints. And that's required before you can then market the device. And that's how it works in the US and just generally around the world. That's the before you can then market the device and and you know that's how it works in the u.s and just generally around the world that's that's the process you follow so you know our goal is to really just understand from people like nolan p2 p3 future participants what aspects of our device needs to improve you know if if it turns out that people are like i really don't like the fact that it lasts only six hours i want to be able to use this computer for you know like 24 hours i mean that's that is a you know user needs and user requirements which we can only find out from just just being able to engage with them so before the pivotal study there's kind of like a rapid innovation based on individual experiences you're learning from individual people how they use it, like the high-resolution details in terms of, like, cursor control and signal and all that kind of stuff to, like, life experience. Yeah, so there's hardware changes, but also just firmware updates. So even when we, you know, had that sort of recovery event for Nolan, you know, he now has the new firmware that he has been updated with. And it's similar to how your phones get updated all the time with new firmwares for security patches, whatever new functionality, UI, right? And that's something that is possible with our implant. It's not a static one-time device that can only do the thing that it said it can do. I mean, similar to Tesla, you can do over-the-air firmware updates, and now you have a completely new user interface, and all these bells and whistles and improvements on everything, like the latest. That's when we say generalized platform. That's what we're talking about. Yeah, it's really cool how the app that Nolan is using, there's calibration, all that kind of Yeah, it's really cool how the app that Nolan is using, there's like calibration, all that kind of stuff, and then there's update. You just click and get an update. What other future capabilities are you kind of looking to? You said vision. That's a fascinating one. What about sort of accelerated typing or speech speech this kind of stuff yeah what and what else is there what's yeah those those are still in the realm realm of movement program so so largely speaking we have two programs we have the movement program and we have the vision program uh the movement program you know currently is focused around you know the digital freedom as you can easily guess if you can control you know 2d cursor in the digital space you could move anything in the physical space um so robotic arms wheelchair your environment uh or even really like whether it's through the phone or just like directly to those interfaces so like to those machines um so we're looking at ways to kind of expand those types of capability, even for Nolan. That requires, you know, conversation with the FDA and kind of showing safety data for, you know, if there's a robotic arm or wheelchair that, you know, we can guarantee that they're not going to hurt themselves accidentally. Right. It's very different if you're moving stuff in the digital domain versus like in the physical space, you can actually potentially cause harm to the participants. So we're working through that right now. Speech does involve different areas of the brain. Speech prosthetic is very, very fascinating. And there's actually been a lot of really amazing work that's been happening in academia. amazing work that's been happening in academia. Sergei Stavitsky at UC Davis, Jamie Henderson, and late Krishna Shannoy at Stanford are doing just some incredible amount of work in improving speech neuroprosthetics. And those are actually looking more at parts of the motor cortex that are controlling these vocal articulators. And being able to even by mouthing the word or imagine speech, you can pick up those signals. The more sophisticated, higher-level processing areas like the Broca's area or Wernicke's area, those are still very, very big mystery in terms of the underlying mechanism of how all that stuff works. But yeah, I mean, I think Neuralink's eventual goal is to kind of understand those things and be able to provide a platform and tools to be able to understand that and study that. This is where I get to the pothead questions. Do you think we can start getting insight into things like thought so speech is there's a muscular component like you said there's like the act of producing sounds but then what about the internal things like cognition like low level thoughts and high level thoughts do you think we'll start noticing kind of signals that could be picked up, that could be understood, that could be maybe used in order to interact with the outside world? In some ways, I guess this starts to kind of get into the heart problem of consciousness. consciousness. And I mean, on one hand, all of these are at some point, sort of electrical signals that from there, maybe it in itself is giving you the cognition or the meaning or somehow human mind is an incredibly amazing storytelling machine so we're telling ourselves and fooling ourselves that there's some interesting meaning here um but i mean i i certainly think that bci and you know really bci at the end of the day is a set of tools that help you study the underlying mechanisms in a both local but also broader sense. And whether there's some interesting patterns of electrical signal that means you're thinking this versus... And you can either learn from many, many sets of data to correlate some of that and be able to do mind reading or not i'm not i'm not sure um i certainly would not kind of blow that out as a possibility but um i i think bci alone probably can't do that there's probably additional set of tools and framework and you know also like just heart problem of consciousness at the end of the day is rooted in this philosophical question of like what is the end of the day, is rooted in this philosophical question of what is the meaning of it all? What's the nature of our existence? Where does the mind emerge from this complex network? Yeah, how does the subjective experience emerge from just a bunch of spikes, electrical spikes? Yeah, I mean, we do really think about BCI and what we're building as a tool for understanding the mind, the brain, the only question that matters. There actually is some biological existence proof of what it would take to kind of start to form some of these experiences that may be unique. If you actually look at every one of our brains, there are two hemispheres. There's a left-sided brain, there's a right-sided brain. And I mean, unless you have some other conditions, you normally don't feel like left lex or right lex. You just feel like one lex, right? So what is happening there? If you actually look at the two hemispheres, there's a structure that kind of connectorized the two called the corpus callosum that is supposed to have around 200 to 300 million connections or axons. So whether that means that's the number of interface and electrodes that we need to create some sort of mind meld or from that, like whatever new conscious experience that you can experience. But I do think that there's like kind of an interesting existence proof that we all have. And that threshold is unknown at this time. Oh yeah. These things, everything in this domain is, you know, speculation, right? And then there'll be, you'd be continuously pleasantly surprised. Do you see a world where there's millions of people, like tens of millions, hundreds of millions of people walking around with a Neuralink device or multiple Neuralink devices in their brain? I do. brain i do first of all there are there are like if you look at worldwide um people suffering from movement disorders and visual tephysis i mean that that's uh in the tens if not hundreds of millions of people um so that that alone i think there's a lot of uh benefit and and potential good that we can do with this type of technology and once you start to get into kind of neuro like psychiatric application you know depression um anxiety hunger or you know obesity right like mood control of appetite i mean that starts to become, you know, very real to everyone. Not to mention that every, most people on earth have a smartphone. And once BCI starts competing with a smartphone as a preferred methodology of interacting with the digital world, that also becomes an interesting thing. Oh, yeah. I mean, that, yeah, this is even before going to that, right? I mean, there's like almost, I mean, the entire world that could benefit from these types of thing. And then, yeah, like if we're talking about kind of next generation of how we interface with machines or even ourselves, in many ways, I think BCI can play a role in that. And some of the things that I also talk about is I do think that there is a real possibility that you could see 8 billion people walking around with Neuralink. Well, thank you so much for pushing ahead. And I look forward to that exciting future. Thanks for having me. Thanks for listening to this conversation with DJ Saw. And now, dear friends, here's Matthew McDougall, the head neurosurgeon at Neuralink. When did you first become fascinated with the human brain? Since forever. As far back as I can remember, I've been interested in the human brain. I mean, I was a thoughtful kid and a bit of an outsider. And you sit there thinking about what the most important things in the world are in your little tiny adolescent brain. And the answer that I came to, that I converged on, was that all of the things you can possibly conceive of as things that are important for human beings to care about are literally contained in the skull. Both the perception of them and their relative values and the solutions to all our problems, and all of our problems are all contained in the skull. And if we knew more about how that worked, how the brain encodes information and generates desires and generates agony and suffering, generates agony and suffering, we could do more about it. You know, you think about all the really great triumphs in human history, you think about all the really horrific tragedies, you know, you think about the Holocaust, you think about any prison full of human stories, prison full of human stories. And all of those problems boil down to neurochemistry. So if you get a little bit of control over that, you provide people the option to do better. In the way I read history, the way people have dealt with having better tools is that they most often in the end do better with huge asterisks. But I think it's an interesting, worthy, and noble pursuit to give people more options, more tools. Yeah, that's a fascinating way to look at human history. You just imagine all these neurobiological mechanisms, Stalin, Hitler, all of these, Genghis Khan, all of them just had like a brain. It's just a bunch of neurons, you know, like a few tons of billions of neurons gaining a bunch of information over a period of time. They have a set of modules that does language and memory and all that. And from there, in the case of those people, they're able to murder millions of people. And all that coming from, there's not some glorified notion of a dictator of this enormous mind or something like this. It's just the brain. Yeah. I mean, a lot of that has to do with how well people like that can organize those around them. Other brains. Yeah. I mean, a lot of that has to do with how well people like that can organize those around them. And so, I always find it interesting to look to primatology, look to our closest non-human relatives for clues as to how humans are going to behave and what particular humans are able to achieve. And so, you look at chimpanzees and bonobos, and they're similar but different in their social structures particularly. And I went to Emory in Atlanta and studied under Franz De Waal, the great Franz De Wa all, who was kind of the leading primatologist who recently died. And his work in looking at chimps through the lens of how you would watch an episode of Friends and understand the motivations of the characters interacting with each other, he would look at a chimp colony and basically apply that lens. I'm massively oversimplifying it. If you do that, instead of just saying, you know, subject 473, you know, through his feces at subject 471, you talk about them in terms of their human struggles, accord them the dignity of themselves as actors with understandable goals and drives, what they want out of life. And primarily, it's the things we want out of life, food, sex, companionship, power. You can understand chimp and bonobo behavior in the same lights much more easily. And I think doing so gives you the tools you need to reduce human behavior from the kind of false complexity that we layer onto it with language and look at it in terms of, oh, well, these humans are looking for companionship sex food power um and i think that that's a pretty powerful tool to have in understanding human behavior and i just uh went to the amazon jungle for a few weeks and it's a very visceral reminder that a lot of life on earth is just trying to get laid yeah they're all screaming at each other like i saw a lot of monkeys and they're just trying to get laid. Yeah. They're all screaming at each other. Like I saw a lot of monkeys, and they're just trying to impress each other. Or maybe there's a battle for power, but a lot of the battle for power has to do with them getting laid. Right. Breeding rights often go with alpha status. And so if you can get a piece of that, then you're going to do okay. And we'd like to think that we're somehow fundamentally different, but especially when it comes to primates, we really aren't. We can use fancier poetic language, but maybe some of the underlying drives that motivate us are similar. Yeah, I think that's true. And all that is coming from this, the brain. Yeah. So when did you first start studying the brain as a biological mechanism? Basically, the moment I got to college, I started looking around for labs that I could do neuroscience work in. I originally approached that from the angle of looking at interactions between the brain and the immune system, which isn't the most obvious place to start. But between the brain and the immune system, which isn't the most obvious place to start. But I had this idea at the time that the contents of your thoughts would have a direct impact, maybe a powerful one, on non-conscious systems in your body, the systems we think of as homeostatic automatic mechanisms like fighting off a virus, like repairing a wound. And sure enough, there are big crossovers between the two. I mean, it gets to kind of a key point that I think goes under-recognized. One of the things people don't recognize or appreciate about the human brain enough, and that is that it basically controls or has a huge role in almost everything that your body does. Like you try to name an example of something in your body that isn't directly controlled or massively influenced by the brain, and it's pretty hard. I mean, you might say like bone healing or something, but even those systems, the hypothalamus and pituitary end up playing a role in coordinating the endocrine system that does have a direct influence on, say, the calcium level in your blood that goes to bone healing. So, direct influence on, say, the calcium level in your blood that goes to bone healing. So non-obvious connections between those things implicate the brain as really a potent prime mover in all of health. One of the things I realized in the other direction, too, how most of the systems in the body are integrated with the human brain, like they affect the brain also, like the immune system. I think there's just people who study Alzheimer's and those kinds of things. It's just surprising how much you can understand of that from the immune system, from the other systems that don't obviously seem to have anything to do with sort of the nervous system. They all play together. Yeah, you could understand how that would be driven by evolution too, just in some simple examples. If you get sick, if you get a communicable disease, you get the flu, it's pretty advantageous for your immune system to tell your brain, hey, now be antisocial for a few days. Don't go be the life of the party tonight. In fact, maybe just cuddle up somewhere warm under a blanket and just stay there for a day or two. And sure enough, that tends to be the behavior that you see both in animals and in humans. If you get sick, elevated levels of interleukins in your blood and TNF-alpha in your blood ask the brain to cut back on social activity and even moving around. You have lower locomotor activity in animals that are infected with viruses. So from there, the early days in neuroscience to surgery, when did that step happen? It was a leap. It was sort of an evolution of thought. I wanted to study the brain. I started studying the brain in undergrad in this neuroimmunology lab, I, from there, realized at some point that I didn't want to just generate knowledge. I wanted to affect real changes in the actual world, in actual people's lives. And so, after having not really thought about going into medical school, I was on a track to go into a PhD program. I said, well, I'd like that option. I'd like to actually potentially help tangible people in front of me. And doing a little digging found that there exists these MD-PhD programs where you can choose not to choose between them and do both. And so I went to USC for medical school and had a joint PhD program with Caltech where I met, actually chose that program particularly because of a researcher at Caltech named Richard Anderson, who's one of the godfathers of primate neuroscience, and has a macaque lab where Utah rays and other electrodes were being inserted into the brains of monkeys to try to understand how intentions were being encoded in the brain. So, I ended up there with the idea that maybe I would be a neurologist and study the brain on the side and then discovered that neurology, again, I'm going to make enemies by saying this, but neurology predominantly and distressingly to me is the practice of diagnosing a thing and then saying, good luck with that when there's not much we can do. And neurosurgery, very differently, it's a powerful lever on taking people that are headed in a bad direction and changing their course in the sense of brain tumors that are potentially treatable or curable with surgery. Even aneurysms in the brain, blood vessels that are going to rupture, you can save lives really is at the end of the day what mattered to me. And so, I was at USC, as I mentioned, that happens to be one of the great neurosurgery programs. And so, I met these truly epic neurosurgeons, Alex Kalesi and Micah Puzo and Steve Gianotta and Marty Weiss, these sort of epic people that were just human beings in front of me. And so, it kind of changed my thinking from neurosurgeons are distant gods that live on another planet and occasionally come and visit us to these are humans that have problems and are people and there's nothing fundamentally preventing me from being one of them. And so, at the last minute in medical school, I changed gears from going into a different specialty and switched into neurosurgery, which cost me a year. I had to do another year of research because I was so far along in the process to switch into neurosurgery. The deadlines had already passed. So it was a decision that cost time, but absolutely worth it. What was the hardest part of the training on the neurosurgeon track? Yeah, two things, I think. Residency in neurosurgery is sort of a competition of pain, of how much pain can you eat and smile. Yeah. competition of pain of like how much pain can you eat and smile yeah uh and so there's workout restrictions that are not really they're viewed at i think internally among the residents as weakness and so most neurosurgery residents try to work as hard as they can and that i think necessarily means working long hours and sometimes over the work hour limits. And we care about being compliant with whatever regulations are in front of us. But I think more important than that, people want to give their all in becoming a better neurosurgeon because the stakes stakes are so high and so it's a real fight to get residents to say go home at the end of their shift and not stay and do more surgery are you seriously saying like one of the hardest things is literally like getting forcing them to get sleep and rest and all this kind of stuff historically that was the case i think i think the next generation i think the next generation is more uh compliant and more self-care what you mean all right i'm just i'm just kidding i'm just kidding i didn't say it now i'm making enemies no okay i get it wow that's fascinating yeah so what was the second thing the personalities uh and maybe the two are connected. So was it pretty competitive? It's competitive, and it's also, you know, as we touched on earlier, primates like power. And I think neurosurgery has long had this aura of mystique and excellence and whatever about it. And so it's an invitation, invitation I think for people that are cloaked in that authority. A board-certified neurosurgeon is basically a walking, fallacious appeal to authority, right? You have license to walk into any room and act like you're an expert on whatever. And fighting that tendency is not something that most neurosurgeons do well. Humility isn't the forte. Yeah, so I have friends who know you, and whenever they speak about you, that you have the surprising quality for a neurosurgeon of humility, which I think indicates that it's not as common as perhaps in other professions. Because there is a kind of gigantic sort of heroic aspect to neurosurgery, and I think it gets to people's head a little bit. Yeah. Well, I think that allows me to play well at an Elon company because Elon, one of his strengths I think is to just instantly see through fallacy from authority. So nobody walks into a room that he's in and says, well, goddammit, you have to trust me. I'm the guy that built the last 10 rockets or something. And he says, well, you did it wrong and we can do it better. Or I'm the guy that kept Ford alive for the last 10 rockets or something. And he says, well, you did it wrong and we can do it better. Or I'm the guy that kept Ford alive for the last 50 years. You listen to me on how to build cars. And he says, no. And so, you don't walk into a room that he's in and say, well, I'm a neurosurgeon, let me tell you how to do it. He's going to say, well, I'm a human being that has a brain. I can think from first principles myself. Thank you very much. And here's how I think it ought to be done. Let's go try it and see who's right. And that's proven, I think, over and over in his case to be a very powerful approach. If we just take that tangent, there's a fascinating interdisciplinary team at Neuralink that you get to interact with, including Elon. What do you think is the secret to a successful team? What have you learned from just getting to observe these folks? World experts in different disciplines work together. Yeah, there's a sweet spot where people disagree and forcefully speak their mind and passionately defend their position and yet are still able to accept information from others and change their ideas when they're wrong. And so I like the analogy of sort of how you polish rocks. You put hard things in a hard container and spin it. People bash against each other and out comes a more refined product. And so to make a good team at Neuralink, we've tried to find people that are not afraid to defend their ideas passionately and occasionally strongly disagree with people that they're working with and have the best idea come out on top. It's not an easy balance, again, to refer back to the primate brain. It's not something that is inherently built into, that is a loss of face, a loss of standing in the community, and now you're a Zeta chump because your idea got trounced. And you just have to recognize that that little voice in the back of your head is maladaptive and it's not helping the team win. Yeah, you have to have the confidence to be able to walk away from an idea that you hold on to. Yeah. And if you do that often enough, you're actually going to become the best in the world at your thing. I mean, that kind of, that rapid iteration. Yeah, you'll at least be a member of a winning team. Ride the wave. What did you learn? You mentioned there's a lot of amazing neurosurgeons at USC. What lessons about surgery and life have you learned from those folks? Yeah, I think working your ass off, working hard while functioning as a member of a team, While, you know, functioning as a member of a team, getting a job done that is incredibly difficult, you know, working incredibly long hours, being up all night, taking care of someone that, you know, you think probably won't survive no matter what you do. Working hard to make people that you passionately dislike look good the next morning. These folks were relentless in their pursuit of excellent neurosurgical technique decade over decade. And I think we're well-recognized for that excellence. So, especially Marty Weiss, Steve Gianotta, Mike Capuzzo, they made huge contributions not only to surgical technique, but they built training programs that trained dozens or hundreds of amazing neurosurgeons. I was just lucky to kind of be in their wake. kind of be in their wake. What's that like? You mentioned doing a surgery where the person is likely not to survive. Does that wear on you? Yeah. You know, it's especially challenging when you, with all respect to our elders, it doesn't hit so much when you're taking care of an 80-year-old and something was going to get them pretty soon anyway. And so you lose a patient like that, and it was part of the natural course of what is expected of them in the coming years. Regardless, taking care of a father of two or three, four young kids, someone in their 30s that didn't have it coming, and they show up in your ER having their first seizure of their life, and lo and behold, they've got a huge malignant inoperable or incurable brain tumor. You can only do that, I think, a handful of times before it really starts eating away at your armor. Or a young mother that shows up that has a giant hemorrhage in her brain that she's not going to survive from. you know, they bring her four-year-old daughter in to say goodbye one last time before they turn the ventilator off. You know, the great Henry Marsh is an English neurosurgeon who said it best. I think he says every neurosurgeon carries with them a private graveyard, and I definitely especially with young parents. That kills me. They had a lot more to give. The loss of those people specifically has a knock-on effect that's going to make the world worse for people for a long time. And it's just hard to feel powerless in the face of that. And that's where I think you have to be borderline evil to fight against a company like Neuralink or to constantly be taking potshots at us because what we're doing is to try to fix that stuff. We're trying to give people options to reduce suffering. We're trying to take the pain out of life that broken brains brings in. broken brains brings in. And yeah, this is just our little way that we're fighting back against entropy, I guess. Yeah, the amount of suffering that's endured when some of the things that we take for granted that our brain is able to do is taken away is immense. And to be able to restore some of that functionality is a real gift. Yeah, we're just starting. We're going to do so much more. Well, can you take me through the full procedure for implanting, say, the N1 chip in Neuralink? Yeah, it's a really simple, really simple, straightforward procedure. The human part of the surgery that I do is dead simple. It's one of the most basic neurosurgery procedures imaginable. And I think there's evidence that some version of it has been done for thousands of years. There are examples, I think, from ancient Egypt of healed or partially healed trephonations, and from Peru or ancient times in South America, where these proto-surgeons would drill holes in people's skulls, presumably to let out the evil spirits, but maybe to drain blood clots. And there's evidence of bone healing around the edge, meaning the people at least survived some months after a procedure. And so what we're doing is that. We are making a cut in the skin on the top of the head over the area of the brain that is the most potent representation of hand intentions. representation of hand intentions. And so, if you are an expert concert pianist, this part of your brain is lighting up the entire time you're playing. We call it the hand knob. The hand knob. So, it's all the finger movements, all of that is just firing away. Yep. There's a little squiggle in the cortex right there. One of the folds in the brain is kind of doubly folded right on that spot. And so you can look at it on an MRI and say, that's the hand knob. And then you do a functional test in a special kind of MRI called a functional MRI, fMRI. And this part of the brain lights up when people, even quadriplegic people whose brains aren't connected to their finger movements anymore, they imagine finger movements and this part of the brain still lights up. So, we can ID that part of the brain in anyone who's preparing to enter our trial and say, okay, that part of the brain we confirm is your hand in tension area. And so I'll make a little cut in the skin. We'll flap the skin open, just like kind of opening the hood of a car, only a lot smaller. Make a perfectly round one-inch diameter hole in the skull. Remove that bit of skull. Open the lining of the brain, the covering of the brain. It's like a little bag of water that the brain floats in. And then show that part of the brain to our robot. And then this is where the robot shines. It can come in and take these tiny, much smaller than human hair electrodes and precisely insert them into the cortex, into the surface of the brain to a very precise depth in a very precise spot that avoids all the blood vessels that are coating the surface of the brain. And after the robot's done with its part, then the human comes back in and puts the implant into that hole in the skull and covers it up, screwing it down to the skull and sewing the skin back together. So the whole thing is a few hours long. It's extremely low risk compared to the average neurosurgery involving the brain that might, say, open up a deep part of the brain or manipulate blood vessels in the brain. This opening on the surface of the brain with only cortical microinsertions carries significantly less risk than a lot of the tumor you know, tumor or aneurysm surgeries that are routinely done. So cortical microinsertions that are via a robot and computer vision are designed to avoid the blood vessels. Exactly. So I know you're a bit biased here, but let's compare human and machine. Sure. know you're a bit biased here, but let's compare human and machine. So what are human surgeons able to do well, and what are robot surgeons able to do well at this stage of our human civilization development? Yeah, that's a good question. Humans are general purpose machines. We're able to adapt to unusual situations. We're able to change the plan on the fly. I remember well a surgery that I was doing many years ago down in San Diego where the plan was to open a small hole behind the ear and go reposition a blood vessel that had come to lay on the facial nerve, the trigeminal nerve, the nerve that goes to the face. When that blood vessel lays on the nerve, it can cause just intolerable, horrific shooting pain that people describe like being zapped with a cattle prod. horrific shooting pain that people describe like being zapped with a cattle prod. And so the beautiful, elegant surgery is to go move this blood vessel off the nerve. The surgery team, we went in there and started moving this blood vessel and then found that there was a giant aneurysm on that blood vessel that was not easily visible on the pre-op scans. And so the plan had to dynamically change and the human surgeons had no problem with that. We're trained for all those things. Robots wouldn't do so well in that situation, at least in their current incarnation, fully robotic surgery like the electrode insertion portion of the Neuralink surgery. It goes according to a set plan. And so the humans can interrupt the flow and change the plan, but the robot can't really change the plan midway through. It operates according to how it was programmed and how it was asked to run. It does its job very precisely, but not with a wide degree of latitude in how to react to changing conditions. So there could be just a very large number of ways that you could be surprised as a surgeon. When you enter a situation, there could be subtle things that you have to dynamically adjust to. Correct. And robots are not good at that. Currently. Currently. good at that currently currently i think we are at the dawn of a new era with ai of the parameters for robot responsiveness to be dramatically broadened right i mean you can't look at a self-driving car and say that it's operating under very narrow parameters. You know, if a chicken runs across the road, it wasn't necessarily programmed to deal with that specifically, but a Waymo or a self-driving Tesla would have no problem reacting to that appropriately. And so surgical robots aren't there yet, but give it time. And then there could be a lot of sort of semi-autonomous possibilities of maybe a robotic surgeon could say this situation is perfectly familiar or the situation is not familiar. And in the not familiar case, a human could take over. But basically be very conservative in saying, okay, this for sure has no issues no surprises and let the humans deal with the surprises with the edge cases all that yeah uh that's one possibility so like you think eventually uh you'll be out of the job what you being neurosurgeon your job being your certain humans there will not be many neurosurgeons left on this earth. I'm not worried about my job in the course of my professional life. I think I would tell my kids not necessarily to go in this line of work, depending on how things look in 20 years. It's so fascinating because, I mean, if I have a line of work, I would say it's programming. And if you ask me, like, for the last, I don't know, 20 years, what I would recommend for people, I would tell them, yeah, go. You will always have a job if you're a programmer because there's more and more computers and all this kind of stuff, and it pays well. But then you realize these large language models come along and they're really damn good at generating code. So this overnight you could be surprised like, wow, what is the contribution of the human really? But then you start to think, okay, it does seem that humans have ability, like you said, to deal with novel situations. And in the case of programming, it's the ability to kind of come up with novel ideas to solve problems it seems like machines aren't quite yet able to do that and when the stakes are very high when it's life critical as it is in surgery especially neurosurgery then it starts the stakes are very high for a robot to actually replace a human. But it's fascinating that in this case of Neuralink, there's a human-robot collaboration. Yeah, yeah, it's, I do the parts it can't do, and it does the parts I can't do. And we are friends. I saw that there's a lot of practice going on. So, I mean, everything in Neuralink is tested extremely rigorously. But one of the things I saw that there's a proxy on which the surgeries are performed. Yeah. So this is both for the robot and for the human, for everybody involved in the entire pipeline. Yep. What's that like, practicing the surgery? It's pretty intense. So there's no analog to this in human surgery. Human surgery is sort of this artisanal craft that's handed down directly from master to pupil over the generations. Yes. master to pupil over the generations. I mean, literally the way you learn to be a surgeon on humans is by doing surgery on humans. I mean, first you watch your professors do a bunch of surgery, and then finally they put the trivial parts of the surgery into your hands, and then the more complex parts. And as your understanding of the point and the purposes of the surgery into your hands, and then the more complex parts. And as your understanding of the point and the purposes of the surgery increases, you get more responsibility in the perfect condition. It doesn't always go well. In Neuralink's case, the approach is a bit different. We, of course, practiced as far as we could on animals. We did hundreds of animal surgeries. practiced as far as we could on animals. We did hundreds of animal surgeries. And when it came time to do the first human, we had just an amazing team of engineers build incredibly lifelike models. One of the engineers, Fran Romano in particular, built a pulsating brain in a custom 3D printed skull that matches exactly the patient's anatomy, including their face and scalp characteristics. And so when I was able to practice that, I mean, it's as close as it really reasonably should get to being the real thing in all the details, including having a mannequin body attached to this custom head. And so when we were doing the practice surgeries, we'd wheel that body into the CT scanner and take a mock CT scan and wheel it back in and conduct all the normal safety checks verbally, you know, stop this patient we're confirming his identification is mannequin number blah blah blah and then opening the brain in exactly the right spot using standard operative neuronavigation equipment, standard surgical drills in the same OR that we do all of our practice surgeries in at Neuralink, and having the skull open and have the brain pulse, which adds a degree of difficulty for the robot to perfectly, precisely plan and insert those electrodes to the right depth and location. And so, yeah, we kind of broke new ground on how extensively we practiced for this surgery. So there was a historic moment, a big milestone for Neuralink, in part for humanity, with the first human getting a Neuralink implant in January of this year. Take me through the surgery on Noland. What did it feel like to be part of this? Yeah. Well, we were lucky to have just incredible partners at the Baroneurologic Institute. They are, I think, the premier neurosurgical hospital in the world. They made everything as easy as possible for the trial to get going and helped us immensely with their expertise on how to arrange the details. It was a much more high-pressure surgery in some ways. I mean, even though the, you know, the outcome wasn't particularly in question in terms of our participants' safety, the number of observers, you know, the number of people, there's conference rooms full of people watching live streams in the hospital, rooms full of people watching live streams in the hospital rooting for this to go perfectly, and that just adds pressure that is not typical for even the most intense production neurosurgery, say removing a tumor or placing deep brain stimulation electrodes. And it had never been done on a human before. There were unknowns. and it had never been done on a human before. There were unknown unknowns. And so definitely a moderate pucker factor there for the whole team, not knowing if we were going to encounter, say, a degree of brain movement that was unanticipated, or a degree of brain sag that took the brain far away from the skull and made it difficult to insert, or some other unknown, unknown problem. Fortunately, everything went well, and that surgery was one of the smoothest outcomes we could have imagined. Were you nervous? I mean, you're a bit of a quarterback in the Super Bowl kind of situation. Extremely nervous. Extremely. I was very pleased when it went well and when it was over. Looking forward to number two. Yeah. Even with all that practice, all of that, just you've never been in a situation that's so high stakes in terms of people watching. Yeah. And we should also probably mention, given how the media works, a lot of people, you know, maybe in a dark kind of way, hoping it doesn't go well. Well, I think wealth is easy to hate or envy or whatever. or whatever. And I think there's a whole industry around driving clicks, and bad news is great for clicks. And so any way to take an event and turn it into bad news is going to be really good for clicks. It just sucks because I think it puts pressure on people. It discourages people from from trying to solve really hard problems because to solve hard problems you have to go into the unknown you have to do things that haven't been done before and you have to take risks yeah uh calculated risks you have to do all kinds of safety precautions but risks not nevertheless and uh i just wish there would be more celebration of that, of the risk-taking, versus people just waiting on the sidelines, waiting for failure. And then pointing out the failure. Yeah, it sucks. But in this case, it's really great that everything went just flawlessly. But it's unnecessary pressure, I would say. pressure i would say now that there is a human with literal skin in the game you know there's a participant who whose well-being rides on this doing well you have to be a pretty bad person to be rooting for that to go wrong um and so you know hopefully people look in the mirror and and realize that at some point so did you get to actually front row seat, like watch the robot and place the targets on the robot software interface that tells the robot where it's going to insert each thread. That was done with my hand on the mouse for whatever that's worth. So you were the one placing the targets? Yeah. Oh, cool. So the robot with the computer vision provides a bunch of candidates, and you kind of finalize the decision. Right. You know, the software engineers are amazing on this team. And so they actually provided an interface where you can essentially use a lasso tool and select a prime area of brain real estate, and it will automatically avoid the blood vessels in that region and automatically place a bunch of targets. So that allows the human robot operator to select really good areas of brain and make dense applications of targets in those regions, the regions we think are going to have the most high-fidelity representations of finger movements and arm movement intentions. I've seen images of this, and for me, with OCD, it's for some reason a really pleasant... I think there's a subreddit called Oddly Satisfying. Yeah, love that subreddit. It's oddly satisfying to see the different target sites avoiding the blood vessels and also maximizing the usefulness of those locations for the signal. It just feels good. It's like, ah. As a person who has a visceral reaction to the brain bleeding, I can tell you it's like ah as a person who has a visceral reaction to the brain bleeding i can tell you yes it's extremely satisfying watching the electrodes themselves go into the brain and not cause bleeding yeah yeah so uh you said the feeling was of relief when everything went perfectly yeah how deep in the brain can you currently go and eventually go? Let's say on the neural link side, it seems the deeper you go in the brain, the more challenging it becomes. Yeah. So talking broadly about neurosurgery, we can get anywhere. It's routine for me to put deep brain stimulating electrodes near the very bottom of the brain, entering from the top and passing about a two millimeter wire all the way into the bottom of the brain. And that's not revolutionary. A lot of people do that. And we can do that with very high precision. I use a robot from Globus to do that surgery several times a month. It's pretty routine. What are your eyes in that situation? What are you seeing? What kind of technology can you use to visualize where you are to light your way? Yeah, so it's a cool process on the software side. You take a preoperative MRI that's extremely high resolution data of the entire brain. You put the patient to sleep, put their head in a frame that holds the skull very rigidly, and then you take a CT scan of their head while they're asleep with that frame on, and then merge the MRI and the CT in software, you have a plan based on the MRI where you can see these nuclei deep in the brain. You can't see them on CT, but if you trust the merging of the two images, then you indirectly know on the CT where that is and therefore indirectly know where in reference to the titanium frame screwed to their head those targets are. And so this is 60s technology to manually compute trajectories given the entry point and target and dial in some goofy-looking titanium actuators with manual actuators with little tick marks on them. The modern version of that is to use a robot, just like a little KUKA arm you might see building cars at the Tesla factory. This small robot arm can show you the trajectory that you intended from the pre-op MRI and establish a very rigid holder through which you can drill a small hole in the skull and pass a small rigid wire deep into that area of the brain that's hollow and put your electrode through that hollow wire and then remove all of that except the electrode. So you end up with the electrode very, very precisely placed far from the skull surface. Now that's standard technology that's already been out in the world for a while. Neuralink right now is focused entirely on cortical targets, surface targets, because there's no trivial way to get, say, hundreds of wires deep inside the brain without doing a lot of damage. So your question, what do you see? Well, I see an MRI on a screen. I can't see everything that that DBS electrode is passing through on its way to that deep target. And so it's accepted with this approach that there's going to be about one in a hundred patients who have a bleed somewhere in the brain as a result of passing that wire blindly into the deep part of the brain. result of passing that wire blindly into the deep part of the brain. That's not an acceptable safety profile for Neuralink. We start from the position that we want this to be dramatically, maybe two or three orders of magnitude safer than that. Safe enough really that you or I without a profound medical problem might on our lunch break someday say, yeah, sure, I'll get that. I've been meaning to upgrade to the latest version. And so the safety constraints given that are high. And so we haven't settled on a final solution for arbitrarily approaching deep targets in the brain it's interesting because like you have to avoid blood vessels somehow you have to maybe there's creative ways of doing the same thing like mapping out high resolution geometry of blood vessels and then you can go in blind but like how do you map out that in a way that's like super stable it's saying that there's a lot of interesting challenges there right yeah but there's a lot to do on the surface exactly so we've got vision on the surface um you know we we actually have made a huge amount of progress sewing uh electrodes into the spinal cord uh as a potential workaround for a spinal cord injury that would allow a brain-mounted implant to translate motor intentions to a spine-mounted implant that can affect muscle contractions in previously paralyzed arms and legs. That's just incredible. So like the effort there is to try to bridge the brain to the spinal cord, to the peripheral nervous system. So how hard is that to do? We have that working in very crude forms in animals. That's amazing. Yeah, we've done it. So similar to like with Nolan, where he's able to digitally move the cursor, here you're doing the same kind of communication, but with the actual effectors that you have yeah that's fascinating yeah so we have anesthetized animals doing grasp and moving moving their legs in a sort of walking pattern uh again early days uh but uh the future is bright for this kind of thing and and people with paralysis uh should look forward to that bright future. They're going to have options. Yeah, and there's a lot of sort of intermediate or extra options where you take like an Optimus robot, like the arm, and to be able to control the arm, the fingers and hands of the arm, as a prosthetic. Exoskeletons are getting better too. Exoskeletons. Yeah, so that goes hand in hand. Although I didn't quite understand until thinking about it deeply and doing more research about Neuralink, how much you can do on the digital side. So there's digital telepathy. how much you can do on the digital side. So there's digital telepathy. Yeah. I didn't quite understand that you could really map the intention, as you described in the hand knob area, that you can map the intention. Just imagine it. Think about it. That intention can be mapped to actual action in the digital world. Right. And now more and more, so much to actual action in the digital world right and now more and more so much can be done in the digital world that it it it can reconnect you to to the outside world it can allow you to have freedom have independence if you're a quadriplegic yeah that's really powerful like you can go really far with that yeah our first participant is he's incredible he's breaking world records left and right. And he's having fun with it. It's great. Just going back to the surgery, your whole journey, you mentioned to me offline, you have surgery on Monday. So you're like, you're doing surgery all the time. Yeah. Maybe the ridiculous question, what does it take to get good at surgery? Yeah. Maybe the ridiculous question, what does it take to get good at surgery? Practice, repetitions. You just, same with anything else, you know, there's a million ways of people saying the same thing and selling books saying it, but do you call it 10,000 hours, do you call it, you know, spend some chunk of your life, some percentage of your life focusing on this, obsessing about getting better at it um repetitions humility recognizing that you aren't perfect at any stage along the way recognizing you've got improvements to make in your technique being open to feedback and coaching from people with a different perspective on how to do it. And then just the constant will to do better. That fortunately, you know, if you're not a sociopath, I think your patients bring that with them to the office visits every day. They, you know, force you to want to do better all the time. Yeah, just step up. I mean, it's a real human being, a real human being that you can help. Yeah. So every surgery, even if it's the same exact surgery, is there a lot of variability between that surgery and a different person? Yeah, a fair bit. I mean, a good example for us is that the angle of the skull relative to the normal plane of the body axis of the skull over hand knob is pretty wide variation. I mean, some people have really flat skulls, and some people have really steeply angled skulls over that area. you know, consequences for how their head can be fixed in sort of the frame that we use and how the robot has to approach the skull. And yeah, people's bodies are built as differently as, you know, the people you see walking down the street, as much variability in body shape and size as you see there. We see in brain anatomy and skull anatomy, there are some people who we've had to kind of exclude from our trial for having skulls that are too thick or too thin or scalp that's too thick or too thin. I think we have like the middle 97% or so of people, but you can't account for all human anatomy variability how much like mushiness and mess is there because i uh you know taking biology classes the diagrams are always really clean and crisp neuroscience the pictures of neurons are always really nice and very... But whenever I look at pictures of real brains, I don't know what is going on. So how much are biological systems in reality, how hard is it to figure out what's going on? Not too bad. Once you really get used to this, that's where experience and skill and education really come into play is if you stare at a thousand brains, it becomes easier to mentally peel back the, say, for instance, blood vessels that are obscuring the sulci and gyri, the wrinkle pattern of the surface of the brain. Occasionally, when you're first starting to do this and you open the skull, it doesn't match what you thought you were going to see based on the MRI. And with more experience, you learn to kind of peel back that layer of blood vessels and see the underlying pattern of wrinkles in the brain and use that as a landmark for where you are. The wrinkles are a landmark? Yeah. So I was describing hand knob earlier. That's a pattern of the wrinkles in the brain. It's sort of this Greek letter omega-shaped area of the brain. So you could recognize the hand knob area like if i show you a thousand brains and give you like one minute with each you'd be like yep that's that sure and so there is some uniqueness to that area of the brain like in terms of the geometry the topology of the thing yeah where is it about in the it... So you have this strip of brain running down the top called the primary motor area. And I'm sure you've seen this picture of the homunculus laid over the surface of the brain, the weird little guy with huge lips and giant hands. That guy sort of lays with his legs up at the top of the brain and face arm areas farther down, and then some kind of mouth, lip, tongue areas farther down. And so, the hand is right in there, and then the areas that control speech, at least on the left side of the brain in most people are just below that. And so, any muscle that you voluntarily move in your body, the vast majority of that references that strip or those intentions come from that strip of brain and the wrinkle for hand knob is right in the middle of that. And vision is back here? Yep. Also close to the surface. Vision's a little deeper. And so this gets to your question about how deep can you get to do vision. We can't just do the surface of the brain. We have to be able to go in not as deep as we'd have to go for DBS, but maybe a centimeter deeper than we're used to for hand insertions. And so that's work in progress. That's a new set of challenges to overcome. By the way, you mentioned the Utah array, and I just saw a picture of that, and that thing looks terrifying. Yeah, that fails. that thing looks terrifying. Yeah, that nails. Because it's rigid, and then if you look at the threads, they're flexible. What can you say that's interesting to you about the flexible, that kind of approach of the flexible threads to deliver the electrodes next to the neurons? Yeah, I mean, the goal there comes from experience. I mean, we stand on the shoulders of people that made Utah rays and used Utah rays for decades before we ever even came along. Neuralink arose partly, this approach to technology arose out of a need recognized after Utah rays would fail routinely because the rigid electrodes, those spikes that are literally hammered using an air hammer into the brain, those spikes generate a bad immune response that encapsulates the electrode spikes in scar tissue, essentially. And so one of the projects that was being worked on in the Anderson Lab at Caltech when I got there was to see if you could use chemotherapy to prevent the formation of scars. Like, you know, things are pretty bad when you're jamming a bed of nails into the brain and then treating that with chemotherapy to try to prevent scar tissue, it's like, you know, maybe we've gotten off track here, guys. Maybe there's a fundamental redesign necessary. And so, Neuralink's approach of using highly flexible tiny electrodes avoids a lot of the bleeding, avoids a lot of the immune response that ends up happening a lot of the bleeding, avoids a lot of the immune response that ends up happening when rigid electrodes are pounded into the brain. And so what we see is our electrode longevity and functionality and the health of the brain tissue immediately surrounding the electrode is excellent. I mean, it goes on for years now in our animal models. What do most people not understand about the biology of the brain? We'll mention the vasculature. That's really interesting. I think the most interesting, maybe underappreciated fact is that it really does control almost everything. I mean, I don't know, for out of the blue example, imagine you want a lever on fertility. You want to be able to turn fertility on and off. I mean, there are legitimate targets in the brain itself to modulate fertility. Say, blood pressure. You want to modulate blood pressure. There are legitimate targets in the brain for doing that. blood pressure, there are legitimate targets in the brain for doing that. Things that aren't immediately obvious as brain problems are potentially solvable in the brain. And so, I think it's an underexplored area for primary treatments of all the things that bother people. That's a really fascinating way to look at it. There's a lot of conditions we might think have nothing to do with the brain, but they might just be symptoms of something that actually started in the brain. The actual source of the problem, the primary source is something in the brain. Yeah, not always. I mean, kidney disease is real, but there are levers you can pull in the brain that affect all of these systems. There's knobs. Yeah. On-off switches and knobs in the brain from which this all originates. Would you have a Neuralink chip implanted in your brain? Yeah. a Neuralink chip implanted in your brain? Yeah. I think the use case right now is use a mouse, right? I can already do that, and so there's no value proposition. On safety grounds alone, sure. I'll do it tomorrow. You say the use case of the mouse. After researching all this, and part of it is just watching Nolan have so much fun. If you can get that bits per second look really high with the mouse, like being able to interact. If you think about it, the way on the smartphone, the way you swipe, that was transformational. Yeah. How we interact with the thing. It's subtle. You don't realize it, but to be able to touch a phone and to scroll with your finger, that changed everything. People were sure you need a keyboard to type. There's a lot of HCI aspects to that that changed how we interact with computers. There could be a certain rate of speed with the mouse that would change everything. It's like you might be able to just click around a screen extremely fast. And that, I don't know. I can't seem to have gotten a neural link for much more rapid interaction with digital devices. Yeah, I think recording speech intentions from the brain might change things as well. You know, the value proposition for the average person. A keyboard is a pretty clunky human interface, requires a lot of training. It's highly variable in the maximum performance that the average person can achieve. I think taking that out of the equation and just having a natural, you know, word-to-computer interface might change things for a lot of people. It'd be hilarious if that is the reason people do it. Even if you have speech-to-text that that's extremely accurate it currently isn't right but it's a gotten super accurate it'd be hilarious if people went for neural link just so you avoid the embarrassing aspect of speaking like looking like a douchebag speaking to your phone in public which is a real like that's a real constraint. You know, that starts to sound sort of like embedded superintelligence. You know, if you can silently ask for the Wikipedia article on any subject and have it read to you without any observable change happening in the outside world, you know, for one thing, standardized testing is obsolete. This thing is obsolete. Yeah. If it's done well on the UX side, it could change. I don't know if it transforms society, but it really can create a kind of shift in the way we interact with digital devices and the way that a smartphone did. Now, just having to look into the safety of everything involved, I would totally try it. So it doesn't have to go to some, like, incredible thing where you have, it connects your vision or to some other, like, it connects all over your brain. That could be, like, just connecting to the hand knob. You might have a lot of interesting interaction, human-computer interaction possibilities. That's really interesting. Yeah, and the technology on the academic side is progressing at light speed here. I think there was a really amazing paper out of UC Davis, Sergei Stavisky's lab, that basically made an initial solve of speech decode. It was something like 125,000 words that they were getting with very high accuracy, which is... So you're just thinking the word? Yeah. Thinking the word and you're able to get it. Yeah. Oh, boy. You have to have the intention of speaking it. Right. So do the inner voice. It's so amazing to me that you can do the intention, the signal mapping. All you have to do is just imagine yourself doing it. And if, if you get the feedback that it actually worked, you can get really good at that. Like your brain will, first of all, adjust and you develop like any other skill. Yeah. Like touch typing, you develop in that same kind of way. That is, that is, to me, it's just really fascinating yeah to be able to even to play with that honestly like i'll get a neural link just to be able to play with that just to play with the capacity the capability of my mind to learn this skill it's like learning the skill of typing and learning the skill of moving a mouse it's another skill of moving the mouse not with my my physical body, but with my mind. I can't wait to see what people do with it. I feel like we're cavemen right now. We're like banging rocks with a stick and thinking that we're making music. At some point, when these are more widespread, there's going to be the equivalent of a piano that someone can make art with their brain in a way that we didn't even anticipate. I'm looking forward to it. Give it to a teenager. Anytime I think I'm good at something, I'll always go to, like, I don't know, even with the bits per second of playing a video game. You realize you give a New Orleans to a teenager, just the large number of them, the kind of stuff, they get good at stuff. They're going to get like hundreds of bits per second. Yeah. Even just with the current technology. Probably. Probably. Just because it's also addicting how the number go up aspect of it, of improving and training, because it's almost like a skill. And plus there's the software on the other end that adapts to you, and especially if the adapting procedure algorithm becomes better and better and better, you're learning together. Yeah, we're scratching the surface on that right now. There's so much more to do. So on the complete other side of it, you have an RFID chip implanted in you. Yeah. So I hear. Nice. So this is a little subtle thing. It's a passive device that you use for unlocking a safe with top secrets? What do you use it for? What's the story behind it? I'm not the first one. There's this whole community of weirdo biohackers that uh have done this stuff and i think one of the early use cases was storing you know private crypto wallet keys and and whatever um i dabbled in that a bit and and had some fun with it. You have some Bitcoin implanted in your body somewhere you can't tell where, yeah. Actually, yeah. It was, you know, the modern-day equivalent of finding change in the sofa cushions. After I put some orphan crypto on there that I thought was worthless and forgot about it for a few years, went back and found that some community of people loved it and had propped up the value of it. And so it had gone up 50-fold. So there was a lot of change in those cushions. That's hilarious. But the primary use case is mostly as a tech demonstrator. You know, it has my business card on it. You can scan that in by touching it to your phone. It opens the front door to my house, whatever simple stuff. It's a cool step. It's a cool leap to implant something in your body. I mean, it has perhaps that's a similar leap to a Neuralink. Because for a lot of people, that kind of notion of putting something inside your body, something electronic inside a biological system is a big leap. Yeah, we have a kind of a mysticism around the barrier of our skin. We're completely fine with knee replacements, hip replacements, dental implants, but there's a mysticism still around the inviolable barrier that the skull represents. And I think that needs to be treated like any other pragmatic barrier. You know, it's the question isn't how incredible is it to open the skull? The question is, you know, what benefit can we provide? So from all the surgeries you've done, from everything you understand in the brain, how much does neuroplasticity come into play? How adaptable is the brain? For example, just even in the case of healing from surgery or adapting to the post-surgery situation? The answer that is sad for me and other people of my demographic is that, you know, plasticity decreases with age. Healing decreases with age. I have too much gray hair to be optimistic about that. There are theoretical ways to increase plasticity using electrical stimulation. Nothing that is, you know, totally proven out as a robust enough mechanism to offer widely to people. But yeah, I think there's cause for optimism that we might find something useful in terms of, say, an implanted electrode that improves learning. Certainly, there's been some really amazing work recently from Nicholas Schiff, Jonathan Baker, and others who have a cohort of patients with moderate traumatic brain injury who have had electrodes placed in the deep nucleus in the brain called the central median nucleus or just near the central median nucleus. And when they apply small amounts of electricity to that part of the brain, it's almost like electronic caffeine. They're able to improve people's attention and focus. They're able to improve how well people can perform a task. I think in one case, someone who was unable to work after the device was turned on, they were able to get a job. And that's sort of one of the holy grails for me with Neuralink and other technologies like this is from a purely utilitarian standpoint, can we make people able to take care of themselves and their families economically again? Can we make it so someone who's fully dependent and even maybe requires a lot of caregiver resources, can we put them in a position to be fully independent, taking care of themselves, giving back to their communities? I think that's a very compelling proposition and what motivates a lot of what I do and what a lot of the people at Neuralink are working for. It's just a cool possibility that if you put a Neuralink in there, that the brain adapts, like the other part of the brain adapts too. Yeah. And integrates it. The capacity of the brain to do that is really interesting. Probably unknown to the degree to which you can do that. But you're now connecting an external thing to it, especially once it's doing uh stimulation like the the biological brain and the uh the electronic brain outside of it working together like the possibilities they're really interesting it's still unknown but interesting it feels like the brain is really good at adapting to whatever. But of course it is a system that by itself is already like everything serves a purpose and so you don't want to mess with it too much. Yeah, it's like eliminating a species from an ecology. You don't know what the delicate interconnections and dependencies are. The brain is certainly a delicate, complex beast, and we don't know every potential downstream consequence of a single change that we make. Do you see yourself doing, so you mentioned P1 surgeries, P2, P3, P4, P5, just more and more and more humans? I think it's a certain kind of brittleness or a failure on the company's side if we need me to do all the surgeries. if we need me to do all the surgeries. I think something that I would very much like to work towards is a process that is so simple and so robust on the surgery side that literally anyone could do it. We want to get away from requiring intense expertise or intense experience to have this successfully done and make it as simple and translatable as possible. I mean, I would love it if every neurosurgeon on the planet had no problem doing this. I think we're probably far from a regulatory environment that would allow people that aren't neurosurgeons to do this, but not impossible. All right. I'll sign up for that. Did you ever anthropomorphize the robot R1? Like, do you give it a name? Do you see it as like a friend that's like working together with you? I mean, to a certain degree, it's... Or an enemy who's going to take your job. To a certain degree, it's, yeah, it's a complex relationship. All the good relationships are. It's funny when in the middle of the surgery, there's a part of it where I stand basically shoulder to shoulder with the robot. And so, you know, if you're in the room reading the body language, you know, it's my brother in arms there. We're working together on the same problem. Yeah, I'm not threatened by it. Keep telling yourself that. How have all the surgeries that you've done over the years, the people you've helped and the stakes, the high stakes that you've mentioned, how has that changed your understanding of life and death? Yeah. You know, it gives you a very visceral sense, and this may sound trite, but it gives you a very visceral sense that death is inevitable. On one hand, as a neurosurgeon, you're deeply involved in these like just hard to fathom tragedies, young parents dying, a four-year-old behind, say. And on the other hand, you know, it takes the sting out of it a bit because you see how just mind-numbingly universal death is. There is zero chance that I'm going to avoid it. I know techno-optimists right now and longevity buffs right now would disagree on that 0.000% estimate, but I don't see any chance that our generation is going to avoid it. Entropy is a powerful force and we are very ornate, delicate, brittle DNA machines that aren't up to the cosmic ray bombardment that we're subjected to. So, on the one hand, every human that has ever lived died or will die. lived, died, or will die. On the other hand, it's just one of the hardest things to imagine inflicting on anyone that you love is having them gone. I mean, I'm sure you've had friends that aren't living anymore, and it's hard to even think about them. And so, I wish I had arrived at the point of nirvana where death doesn't have a sting, I'm not worried about it, but I can at least say that I'm comfortable with the certainty of it. If not, having found out how to take the tragedy out of it when I think about my kids either not having me or me not having them or my wife. Maybe I've come to accept the intellectual certainty of it, but it may be the pain that comes with losing the people you love. it may be the pain that comes with losing the people you love, but I don't think I've come to understand the existential aspect of it, like that this is going to end. And I don't mean like in some trite way. I mean like it certainly feels like it's not going to end. Like you live life like it's not going to end. Like, you live life like it's not going to end. Right. And the fact that this light that's shining, this consciousness, is going to no longer be one moment, maybe today. It's like, it fills me when I really am able to load all that in with Ernest Becker's terror. It's a real fear. I think people aren't always honest with how terrifying it is. Yeah. I think the more you are able to really think through it, the more terrifying it is. It's not such a simple thing. Oh, well, it's the way life is. If you really can load that in, it's hard. But I think that's why the Stoics did it because it like helps you get your shit together and be like this, well, the moment, every single moment you're alive is just beautiful. And it's terrifying that it's gonna end. And it's like, almost like you're shivering in the cold, a child helpless, this kind of feeling. And then it makes you, when you have warmth, when you have the safety, when you have the love, to really appreciate it. I feel like sometimes in your position, when you mentioned armor, just to see death it might make you not be able to see that the the finiteness of life because if you kept looking at that it might break you so it's good to know that you're kind of still struggling with that there's the neurosurgeon and then there's a human yeah and the human is still able to struggle with that. There's the neurosurgeon and then there's a human. And the human is still able to struggle with that and feel the fear of that and the pain of that. Yeah, it definitely makes you ask the question of how many of these can you see and not say, I can't do this anymore. and not say, I can't do this anymore. But, I mean, you said it well. I think it gives you an opportunity to just appreciate that you're alive today. And, you know, I've got three kids and an amazing wife, and I'm really happy. Things are good. I get to help on a project that I think matters. I think it moves us forward. I'm a very lucky person. It's the early steps of a potentially gigantic leap for humanity. It's a really interesting one. And it's cool because like you read about all this stuff in history where it's like the early days. I've been reading, before going to the Amazon, I would read about explorers. They would go and explore even the Amazon jungle for the first time. It's just, those are the early steps. Yeah. Or early steps into space. Early steps in any discipline, in physics and mathematics. And it's cool because this is like, on the grand scale, these are the early steps into delving deep into the human brain. So not just observing the brain, but you'll be able to interact with the human brain. Yeah. It's going to help a lot of people, but it also might help us understand what the hell's going on in there. Yeah. I think ultimately we want to give people more levers that they can pull, right? Like you want to give people options. If you can give someone a dial that they can turn on how happy they are, I think that makes people really uncomfortable. But now talk about major depressive disorder, talk about people that are committing suicide at an alarming rate in this country, and try to justify that suicidal ideation, suicidal intention. I would give them that knob. I don't know how you justify not doing that. You can think about all the suffering that's going on in the world. Every single human being that's suffering right now, it'll be a glowing red dot. The more suffering, the more it's glowing. And you just see the map of human suffering. And any technology that allows you to dim that light of suffering on a grand scale is pretty exciting. Because there's a lot of people suffering, and most of them suffer quietly. And we look away too often. And we should remember those that are suffering, because once again, most of them are suffering quietly. Well, and on a grander scale, the fabric of society, people have a lot of complaints about how our social fabric is working or not working, how our politics is working or not working. Those things are made of neurochemistry too in aggregate, right? Like our politics is composed of individuals with human brains and the way it works or doesn't work is potentially tunable in the sense that, I don't know, say remove our addictive behaviors or tune our addictive behaviors for social media or our addiction to outrage, our addiction to sharing the most angry political tweet we can find. I don't think that leads to a functional society. society and if if you had options for people to moderate that maladaptive behavior there could be huge benefits to society maybe we could all work together a little more harmoniously toward useful ends there's a sweet spot like you mentioned you don't want to completely remove all the dark sides of human nature because those kind of are somehow necessary to make the whole thing work, but there's a sweet spot. Yeah, I agree. You got to suffer a little, just not so much that you lose hope. Yeah. When you, all the surgeries you've done, have you seen consciousness in there ever? Was there like a glowing light? You know, I have this sense that I never found it, never removed it, you know, like a Dementor in Harry Potter. I have this sense that consciousness is a lot less magical than our instincts want to claim it is. It seems to me like a useful analog for thinking about what consciousness is in the brain. Is that we have a really good intuitive understanding of what it means to say, touch your skin and know what's being touched. I think consciousness is just that level of sensory mapping applied to the thought processes in the brain itself. So what I'm saying is consciousness is the sensation of some part of your brain being active. So you feel it working. You feel the part of your brain that thinks of red things or winged creatures or the taste of coffee. You feel those parts of your brain being active the way that I'm feeling my palm being touched, right? And that sensory system that feels the brain working is consciousness. That is so brilliant. that feels the brain working is consciousness. That is so brilliant. It's the same way, it's the sensation of touch when you're touching a thing. Consciousness is the sensation of you feeling your brain working, your brain thinking, your brain perceiving. Which isn't like a warping of space-time or some quantum field effect, right? It's nothing magical. People always want to ascribe to consciousness something truly different. And there's this awesome long history of people looking at whatever the latest discovery in physics is to explain consciousness. Because it's the most magical, the most out there thing that you can think of and people always want to do that with consciousness. I don't think that's necessary. It's just a very useful and gratifying way of feeling your brain work. And as we said, it's one heck of a brain. Yeah. Everything we see around us, everything we love, everything that's beautiful came from brains like these it's all electrical activity happening inside your skull and I for one am grateful that it's people like you that are exploring all the ways that it works and all the ways it can be made better thank you so much for talking today. It's been a joy. Thanks for listening to this conversation with Matthew McDougall. And now, dear friends, here's Bliss Chapman, Brain Interface Software Lead at Neuralink. You told me that you've met hundreds of people with spinal cord injuries or with ALS and that your motivation for helping at NeuroLink is grounded in wanting to help them. Can you describe this motivation? Yeah. First, just a thank you to all the people I've gotten a chance to speak with for sharing their stories with me. I don't think there's any world really in which I can share their stories as powerful a way as they can. early in which I can share their stories as powerful a way as they can. But just, I think, to summarize at a very high level, what I hear over and over again is that people with ALS or severe spinal cord injury in a place where they basically can't move physically anymore, really, at the end of the day, are looking for independence. And that can mean different things for different people. For some folks, it can mean the ability just to be able to communicate again independently without needing to wear something on their face, without needing a caretaker to be able to put something in their mouth. For some folks, it can mean independence to be able to work again, to be able to navigate a computer digitally efficiently enough to be able to get a job, to be able to support themselves, to be able to move out and ultimately be able to support themselves after their family maybe isn't there anymore to take care of them. And for some folks, it's as simple as just being able to respond to their kid in time before they, you know, run away or get interested in something else. And these are deeply personal and sort of very human problems. And what strikes me again and again when talking with these folks is that this is actually an engineering problem. This is a problem that with the right resources, with the right team, we can make a lot of progress on. And at the end of the day, I think that's a deeply inspiring message and something that makes me excited to get up every day. So it's both an engineering problem in terms of a BCI, for example, that can give them capabilities where they can interact with the world. But also on the other side, it's an engineering problem for the rest of the world to make it more accessible for people living with quadriplegia. Yeah. And I'll take a broad view sort of lens on this for a second. I think I'm very in favor of anyone working in this problem space. So beyond PCI, I'm happy and excited and willing to support any way I can folks working on eye tracking systems, working on speech-to-text systems, working on head trackers or mouse sticks or quad sticks. I've met many engineers and folks in the community that do exactly those things. people were trying to help, it doesn't matter what the complexity of the solution is as long as the problem is solved. And I want to emphasize that there can be many solutions out there that can help with these problems. And BCI is one of a collection of such solutions. So BCI in particular, I think, offers several advantages here. And I think the folks that recognize this immediately are usually the people who have spinal cord injury or some form of paralysis. Usually, you don't have to explain to them why this might be something that could be helpful. It's usually pretty self-evident. But for the rest of us, folks that don't live with severe spinal cord injury or who don't know somebody with ALS, it's not often obvious why you would want a brain implant to be able to connect and navigate a computer. And it's surprisingly nuanced to the degree that I've learned a huge amount just working with Noland in the first Neuralink clinical trial and understanding from him and his words why this device is impactful for him. And it's a nuanced topic. It can be the case that even if you can achieve the same thing, for example, with a mouse stick when navigating a computer, he doesn't have access to that mouse stick every single minute of the day. He only has access when someone is available to put it in front of him. And so a BCI can really offer a level of independence and autonomy that if it wasn't literally physically part of your body, it'd be hard to achieve in any other way. So there's a lot of fascinating aspects to what it takes to get Nolan to be able to control a cursor on the screen with his mind. You texted me something that I just love. You said, I was part of the team that interviewed and selected P1. I was in the operating room during the first human surgery, monitoring live signals coming out of the brain. I work with the user basically every day to develop new UX paradigms, decoding strategies. And I was part of the team that figured out how to recover useful BCI to new world record levels when the signal quality degraded. We'll talk about, I think, every aspect of that, but just zooming out, what was it like to be part of that team and part of that historic, I would say, historic first? Yeah, I think for me, this is something I've been excited about for close to 10 years now. And so to be able to be even just some small part of making it a reality is extremely exciting. A couple maybe special moments during that whole process that I'll never really truly forget. One of them is entering the actual surgery. You know, at that point in time, I know Nolan quite well. I know his family. And so I think the initial reaction when Nolan is rolled into the operating room is just a, oh shit, kind of reaction. But at that point, muscle memory kicks in and you sort of go into, you let your body just do all the talking. lucky job in that particular procedure to just be in charge of monitoring the implant. So my job is to sit there to look at the signals coming off the implant, to look at the live brain data streaming off the device as threads are being inserted into the brain, and just to basically observe and make sure that nothing is going, you know, wrong or that there's no red flags or fault conditions that we need to go and investigate or pause the surgery to debug. And because I had that sort of spectator view of the surgery, I had a slightly removed perspective than I think most folks in the room. I got to sit there and think to myself, wow, you know, that brain is moving a lot. You know, when you look into the side lobe craniectomy that we stick the threads in, you know, one thing that most people don't realize is the brain moves. The brain moves a lot when you breathe, when your heart beats, and you can see it visibly. So, you know, that's something that I think was a surprise to me and very, very exciting to be able to see someone's brain who you physically know and have talked with at length actually pulsing and moving inside their skull. And they used that brain to talk to you previously, and now it's right there moving. Yeah. Actually, I didn't realize that in terms of the thread sending. So, the actually i didn't realize that in terms of the thread sending so the the neural link implant is active during surgery so and one thread at a time you're able to start seeing the signal yeah so that's part of the way you test that the thing is working yeah so actually in the in the operating room right after we sort of uh finished the all the thread insertions i started collecting what's called broadband data so broadband is basically the most raw form of signal you can collect from a Neuralink electrode. It's essentially a measurement of the local field potential or the voltage essentially measured by that electrode. And we have a certain mode in our application that allows us to visualize where detected spikes are. So it visualizes sort of where, in the broadband signal, in its very, very raw form of the data, a neuron is actually spiking. And so one of these moments that I'll never forget as part of this whole clinical trial is seeing live in the operating room, while he's still under anesthesia, beautiful spikes being shown in the application, just streaming live to a device I'm holding in my hand. So this is no signal processing, the raw data, and then the signal processing's on top of it. You're seeing the spikes detected. Right. Yeah. And that's a UX2. Yes. That looks beautiful as well. During that procedure, there was actually a lot of cameramen in the room. So they also were curious and wanted to see. There's several neurosurgeons in the room who are all just excited to see robots taking their job. And they're all crowded around a small little iPhone watching this live brain data stream out of his brain. What was that like seeing the robot do some of the surgery? So the computer vision aspect where it detects all the spots that avoid the blood vessels. the blood vessels, and then obviously with human supervision, then actually doing the really high precision connection of the threads to the brain. That's a good question. My answer is going to be pretty lame here, but it was boring. Yeah. I've seen it so many times. Yeah. That's exactly how you want surgery to be. You want it to be boring. Yeah. Because I've seen it so many times. I've seen the robot do the surgery literally hundreds of times and so it was just one more time yeah all the practice surgeries and the proxies and this is just another day yeah so what about when uh nolan woke up well do you remember a moment where uh he was able to move the cursor not not move the cursor, but get signal from the brain such that it was able to show that there's a connection. Yeah. Yeah, so we are quite excited to move as quickly as we can. And Nolan was really, really excited to get started. He wanted to get started actually the day of surgery. But we waited till the next morning very patiently. It's a long night. And the next morning in the ICU where he was recovering, he wanted to get started and actually start to understand what kind of signal we can measure from his brain. And maybe for folks who are not familiar with the Neuralink system, we implant the Neuralink system or the Neuralink implant in the motor cortex. So the motor cortex is responsible for representing things like motor intent. So if you imagine closing and opening your hand, that kind of signal representation would be present in the motor cortex. If you imagine moving your arm back and forth or wiggling a pinky, this sort of signal can be present in the motor cortex. So one of the ways we start to sort of map out what kind of signal do we actually have access to in any particular individual's brain is through this task called body mapping. And body mapping is where you essentially present a visual to the user and you say, hey, imagine doing this. And that visual is, you know, a 3D hand opening, closing, or index finger modulating up and down. And you ask the user to imagine that. And obviously, you can't see them do this because they're paralyzed. So you can't see them actually move their arm. But while they do this task, you can record neural activity. And you can't see them actually move their arm. But while they do this task, you can record neural activity. And you can basically offline model and check, can I predict or can I detect the modulation corresponding with those different actions? And so we did that task and we realized, hey, there's actually some modulation associated with some of his hand motion, which was a first indication that, okay, we can potentially use that modulation to do useful things in the world. For example, control a computer cursor. And he started playing with it, you know, the first time we showed him it. And we actually just took the same live view of his brain activity and put it in front of him. And we said, hey, you tell us what's going on. You know, we're not you. You're able to imagine different things. And we know that it's modulating some of these neurons. So you figure out for us what that is actually representing. And so he played with it for a bit. He was like, I don't quite get it yet. He played for a bit longer and he said, oh, when I move this finger, I see this particular neuron start to fire more. And I said, okay, prove it, do it again. And so he said, okay, three, two, one, boom. And the minute he moved, you can see like instantaneously this neuron is firing. Single neuron, I can tell you the exact channel number if you're interested. It's stuck in my brain now forever. But that single channel firing was a beautiful indication that it was behaviorally modulated neural activity that could then be used for downstream tasks like decoding a computer cursor. And when you say single channel, is that associated with a single electrode? Yeah, so channel and electrode are interchangeable. And there's 1,024 of those. 1,024. Yeah. It's incredible that that works that really when i was um learning about all this and like loading it in it was just blowing my mind that the intention you can visualize yourself moving the finger that can turn into a signal and the fact that you can then skip that step and visualize the cursor moving, or have the intention of the cursor moving, and that leading to a signal that can then be used to move the cursor. There is so many exciting things there to learn about the brain, about the way the brain works. The very fact of their existing signal that can be used is really powerful. But it feels like that's just like the beginning of figuring out how that signal can be used really really effectively i should also just uh there's so many fascinating details here but you mentioned the body mapping step uh at least in the version i saw that nolan was showing off there's like a super nice interface like a graphical interface like it just felt like i was like in the future because it like uh you know i guess it visualizes you moving the hand and there's a very like like a sexy polished interface that hello i don't know if there's a voice component but it just felt like uh it's like when you wake up in a really nice video game and this is a tutorial at the beginning of that video game. This is what you're supposed to do. It's cool. No, I mean, the future should feel like the future. But it's not easy to pull that off. I mean, it needs to be simple, but not too simple. Yeah, and I think the UX design component here is underrated for PCI development in general. There's a whole interaction effect between the ways in which you visualize an instruction to the user and the kinds of signal you can get back. And that quality of sort of your behavioral alignment to the neural signal is a function of how good you are at expressing to the user what you want them to do. And so, yeah, we spend a lot of time thinking about the UX, of how we build our applications, of how the decoder actually functions, the control surfaces it provides to the user. All these little details matter a lot. So maybe it'd be nice to get into a little bit more detail of what the signal looks like and what the decoding looks like. So there's a N1 implant that has, like we mentioned, 1024 electrodes, and that's collecting raw data, raw signal. What does that signal look like? And what are the different steps along the way before it's transmitted? And what is transmitted? All that kind of stuff. Yeah, yeah. This is going to be a fun one. Let's go. So maybe before diving into what we do, it's worth understanding what we're trying to measure because that dictates a lot of the requirements for the system that we build. And what we're trying to measure is really individual neurons producing action potentials. And action potential is, you can think of it like a little electrical impulse that you can detect if you're close enough. And by being close enough, I mean like within, let's say, 100 microns of that cell. And 100 microns is a very, very tiny distance. And so the number of neurons that you're going to pick up with any given electrode is just a small radius around that electrode. And the other thing worth understanding about the underlying biology here is that when neurons produce an action potential, the width of that action potential is about one millisecond. So from the start of the spike to the end of the spike, that whole width of that sort of characteristic feature of a neuron firing is one millisecond wide. And if you want to detect that an individual spike is occurring or not, you need to sample that signal or sample the local field potential nearby that neuron much more frequently than once a millisecond. You need to sample many, many times per millisecond to be able to detect that this is actually the characteristic waveform of a neuron producing an action potential. And so we sample across all 1,024 electrodes about 20,000 times a second. 20,000 times a second means for any given one millisecond window, we have about 20 samples that tell us what that exact shape of that action potential looks like. And once we've sort of sampled at super high rate the underlying electrical field nearby these cells, we can process that signal into just where do we detect a spike or where do we not? Sort of a binary signal one or zero, do we detect a spike in this one millisecond or not? And we do that because the actual information carrying sort of subspace of neural activity is just when our spikes are occurring. Essentially, everything that we care about for decoding can be captured or represented in the frequency characteristics of spike trains, meaning how often are spikes firing in any given window of time. us to do sort of a crazy amount of compression from this very rich high density uh you know signal to something that's much much more sparse and compressible that can be sent out over a wireless uh radio like a bluetooth communication for example quick tangents here you mentioned electrode neuron there's a local neighborhood of neurons nearby. How difficult is it to like isolate from where the spike came from? Yeah, so there's a whole field of sort of academic neuroscience work on exactly this problem of basically given a single electrode or given a set of electrodes measuring a set of neurons, how can you sort of sort, spike sort, which spikes are coming from what neuron? And this is a problem that's pursued in academic work because you care about it for understanding what's going on in the underlying sort of neuroscience of the brain. If you care about understanding how the brain's representing information, how that's evolving through time, then that's a very, very important question to understand. For sort of the engineering side of things, at least at the current scale, if the number of neurons per electrode is relatively small, you can get away with basically ignoring that problem completely. You can think of it like sort of a random projection of neurons to electrodes. And there may be in some cases more than one neuron per electrode. But if that number is small enough, those signals can be thought of as sort of a union of the two. And for many applications, that's a totally reasonable trade-off to make and can simplify the problem a lot. And as you sort of scale out channel count, the relevance of distinguishing individual neurons becomes less important because you have more overall signal and you can start to rely on sort of correlations or covariance structure in the data to help understand when that channel is firing, what does that actually represent? Because you know understand when that channel is firing what is that what does that actually represent because you know that when that channel is firing in concert with these other 50 channels that means move left but when that same channel is firing with concert with these other 10 channels that means move right okay so you have to do this kind of spike detection on board and you have to do that super efficiently so fast and not use too much power because you don't want to be generating too much heat so i have to be a super simple signal processing step yeah um is there some wisdom you can share about what it takes to overcome that challenge yeah so we've tried many different versions of basically turning this raw signal into uh sort of a feature that you might want to send off the device. And I'll say that I don't think we're at the final step of this process. This is a long journey. We have something that works clearly today, but there can be many approaches that we find in the future that are much better than what we do right now. So some versions of what we do right now, and there's a lot of academic heritage to these ideas. So I don't want to claim that these are original Neuralink ideas or anything like that. But one of these ideas is basically to build a sort of like a convolutional filter, almost, if you will, that slides across the signal and looks for a certain template to be matched. And that template consists of sort of how deep the spike modulates, how much it recovers, and what the duration and window of time is that the whole process takes. And if you can see in the signal that that template is matched within certain bounds, then you can say, okay, that's a spike. One reason that approach is super convenient is that you can actually implement that extremely efficiently in hardware, which means that you can run it in low power across 1024 channels all at once. Another approach that we've recently started exploring, and this can be combined with the spike detection approach, is something called spike band power. And the benefits of that approach are that you may be able to pick up some signal from neurons that are maybe too far away to be detected as a spike, because the farther away you are from an electrode, the weaker that actual spike waveform will look like on that electrode. So you might be able to pick up population-level activity of things that are maybe slightly outside the normal recording radius, what neuroscientists sometimes refer to as the hash of activity, the other stuff that's going on. And you can look at sort of across many channels how that sort of background noise is behaving, you might be able to get more juice out of the signal that way. But it comes at a cost. That signal is now a floating point representation, which means it's more expensive to send out over a power. It means you have to find different ways to compress it that are different than what you can apply to binary signals. So there's a lot of different challenges associated with these different modalities. So also, in terms of communication, you're limited by the amount of data you can send. Yeah. And also because you're currently using the Bluetooth protocol, you have to batch stuff together. But you have to also do together but you have to also do this keeping the latency crazy low like crazy low anything to say about the latency yeah this is a passion project of mine so uh i want to build the best mouse in the world yeah i don't want to build like the you know the chevrolet spark or whatever of electric cars i want to build like the tesla roadster version of of a mouse. I want to build like the Tesla Roadster version of a mouse. And I really do think it's quite possible that within, you know, five to 10 years that most esports competitions are dominated by people with paralysis. This is like a very real possibility for a number of reasons. One is that they'll have access to the best technology to play video games effectively. The second is they have the time to do so. So those two factors together are particularly potent for esport competitors. Unless people without paralysis are also allowed to implant ULI. Right. Which is, it is another way to interact with a digital device. And there's something to that. If it's a fundamentally different experience, more efficient experience, even if it's not like some kind of full-on high-bandwidth communication, if it's just the ability to move the mouse 10x faster, like the bits per second, if I can achieve a bit per second at 10x what I can do with the mouse, that's a really interesting possibility of what they can do, especially as you get really good at it with training. It's definitely the case that you have a higher ceiling performance. Like, because you don't have to buffer your intention through your arm, through your muscle, you get, just by nature of having a brain implant at all, like 75 millisecond lead time on any action that you're actually trying to take. And there's some nuance to this. Like, there's evidence that the motor cortex, you can sort of plan out sequences of action. So you may not get that whole benefit all the time. But for sort of like reaction time style games where you just want to, somebody's over here, snipe them, you know, that kind of thing. You actually do have just an inherent advantage because you don't need to go through muscle. So the question is just how much faster can you make it? And we're already, you know, faster than, you know, what you would do if you're going through muscle from a latency point of view. And we're in the early stage of that. I think we can push it, sort of our end-to-end latency right now from brain spike to cursor movement is about 22 milliseconds. If you think about the best mice in the world, the best gaming mice, that's about five milliseconds-ish of latency, depending on how you measure, depending on how fast your screen refreshes, there's a lot of characteristics that matter there. But yeah, and the rough time for like a neuron in the brain to actually impact your uh command of your hand is about 75 milliseconds so if you look at those numbers you can see that we're already like you know competitive and slightly faster than what you'd get by actually moving your moving your hand and this is something that you know if you ask nolan about it when he moved the cursor for the first time we asked him about this it was something i was super curious about like what does it feel like when you're modulating, you know, a click intention or when you're trying to just move the cursor to the right? He said it moves before he is like actually intending it to, which is kind of a surreal thing and something that, you know, I would love to experience myself one day. What is that like to have the thing just be so immediate, so fluid that it feels like it's happening before you're actually intending it to move. Yeah, I suppose we've gotten used to that latency, that natural latency that happens. So is the currently the bottleneck the communication? So like the Bluetooth communication? Is that what's the actual bottleneck? I mean, there's always going to be a bottleneck. What's the current bottleneck? Yeah, a couple things. So kind of hilariously, Bluetooth low energy protocol has some restrictions on how fast you can communicate. So the protocol itself establishes a standard of the most frequent updates you can send are on the order of 7.5 milliseconds. And as we push latency down to the level of individual spikes impacting control, that level of resolution, that kind of protocol is going to become a limiting factor at some scale. control that level of resolution, that kind of protocol is going to become a limiting factor at some scale. Another sort of important nuance to this is that it's not just the Neuralink itself that's part of this equation. If you start pushing latency sort of below the level of how fast screens refresh, then you have another problem. You need your whole system to be able to be as reactive as the sort of limits of what the technology can offer. Like you need the screen, like 120 Hertz just doesn't, you know, work anymore. If you're trying to have something respond at something that's, you know, at the level of one millisecond. That's a really cool challenge. I also like that for a t-shirt, the best mouse in the world. Tell me on the receiving end. So the decoding step. Now we figured out what the spikes are. We've got them all together. Now we're sending that over to the app. What's the decoding step look like? Yeah. So maybe first, what is decoding? I think there's probably a lot of folks listening that just have no clue what it means to decode brain activity. Actually, even if we zoom out beyond that, what is the app? So there's an implant that's wirelessly communicating with any digital device that has an app installed. Yep. So maybe can you tell me at high level what the app is, what the software is outside of the brain? Yeah. So maybe working backwards from the goal. The goal is to help someone with paralysis, in this case, Nolan, be able to navigate his computer independently. And we think the best way to do that is to offer them the same tools that we have to navigate our software, because we don't want to have to rebuild an entire software ecosystem for the brain, at least not yet. Maybe someday you can imagine there's UXs that are built natively for BCI. But in terms of what's useful for people today, I think most people would prefer to be able to just control mouse and keyboard inputs to all the applications that they want to use for their daily jobs, for communicating with their friends, etc. And so the job of the application is really to translate this wireless stream of brain data coming off the implant into control of the computer. And we do that by essentially building a mapping from brain activity to sort of the HID inputs to the actual hardware. So HID is just the protocol for communicating like input device events. So for example, move mouse to this position, or press this key down. And so that mapping is fundamentally what the app is responsible for. But there's a lot of nuance of how that mapping works that we spend a lot of time to try to get right. And we're still in the early stages of a long journey to figure out how to do that optimally. So one part of that process is decoding. So decoding is this process of taking the statistical patterns of brain data that's being channeled across this Bluetooth connection to the application and turning it into, for example, a mouse movement. And that decoding step, you can think of it in a couple of different parts. So similar to any machine learning problem, there's a training step and there's an inference step. The training step in our case is a very intricate behavioral process where the user has to imagine doing different actions. So for example, they'll be presented a screen with a cursor on it, and they'll be asked to push that cursor to the right. Then imagine pushing that cursor to the left, push it up, push it down. And we can basically build up a pattern, or using any sort of modern ML method, a mapping of given this brain data, and then imagine behavior, map one to the other. And then at test time, you take that same pattern matching system, in our case, it's a deep neural network, and you run it and you take the live stream of brain data coming off their implant, you decode it by pattern matching to what you saw at calibration time. And you use that for control of the computer. Now a couple like sort of rabbit holes that are I think are quite interesting. One of them has to do with how you build that best template matching system, because there's a variety of behavioral challenges and also debugging challenges when you're working with someone who's paralyzed. Because again, fundamentally, you don't observe what they're trying to do. You can't see them attempt to move their hand. And so you have to figure out a way to instruct the user to do something and validate that they're doing it correctly, such that then you can downstream build with confidence the mapping between the neural spikes and the intended action. And by doing the action correctly, what I really mean is at the level of resolution of what neurons are doing. So if in ideal world, you could get a signal of behavioral intent that is ground truth accurate at the scale of sort of one millisecond resolution, then with high confidence, I could build a mapping from my neural spikes to that behavioral intention. But the challenge is, again, that you don't observe what they're actually doing. And so there's a lot of nuance to how you build user experiences that give you more than just sort of a course on average correct representation of what the user's intending to do. If you want to build the world's best mouse, you really want it to be as responsive as possible. You want it to be able to do exactly what the user's intending at every sort of step along the way, not just on average be correct when you're trying to move it from left to right. And building a behavioral sort of calibration game or sort of software experience that gives you that level of resolution is what we spend a lot of time working on. So the calibration process, the interface has to encourage precision, meaning like whatever it does, it should be super intuitive that the next thing the human is going to likely do is exactly that intention that you need, and only that intention. Yeah. And you don't have any feedback except that may be speaking to you afterwards what they actually did. You can't, oh yeah. Right. So that's fundamentally, that is a really exciting UX challenge because that's all on the UX. It's not just about being friendly or nice or usable. Yeah. It's like. User experience is how it works. It's how it works. Yeah. For the calibration, and calibration, at least at this stage of Neuralink, is like fundamental to the operation of the thing. And not just calibration, but continued calibration, essentially. Yeah. Wow, yeah. You said something that I think is worth exploring there a little bit. You said it's primarily a UX challenge. And I think a large component of it is. But there is also a very interesting machine learning challenge here, which is given some data set, including some on average correct behavior of asking the user to move up or move down, move right, move left. And given a data set of neural spikes, is there a way to infer in some kind of semi-supervised or entirely unsupervised way what that high resolution version of their intention is? And if you think about it, like there probably is because there are enough data points in the data set, enough constraints on your model that there should be a way with the right sort of formulation to let the model figure out itself. For example, at this millisecond, this is exactly how hard they're pushing upwards. And at this millisecond, this is how hard they're trying to push upwards. It's really important to have very clean labels. Yes. So like the problem becomes much harder from the machine learning perspective, the labels are noisy. That's correct. And then to get the clean labels, that's a UX challenge. Correct. Although clean labels, I think maybe it's worth exploring what that exactly means. I think any given labeling strategy will have some number of assumptions it makes about what the user is attempting to do. Those assumptions can be formulated in a loss function, or they can be formulated in terms of heuristics that you might use to just try to estimate or guesstimate what the user is trying to do. And what really matters is how accurate are those assumptions. For example, you might say, hey, user, push upwards and follow the speed of this cursor. And your heuristic might be that they're trying to do exactly what that cursor is trying to do. Another competing heuristic might be they're actually trying to go slightly faster at the beginning of the movement and slightly slower at the end. And those competing heuristics may or may not be accurate reflections of what the user is trying to do. Another version of the task might be, hey, user, imagine moving this cursor a fixed offset. So rather than follow the cursor, just try to move it exactly 200 pixels to the right. So here's the cursor. Here's the target. OK, cursor disappears. Try to move that now invisible cursor 200 pixels to the right. And the assumption in that case would be that the user can actually modulate correctly that position offset. But that position offset assumption might be a weaker assumption. And therefore, potentially, you can make it more accurate than these heuristics that are trying to guesstimate at each millisecond what the user is trying to do. So you can imagine different tasks that make different assumptions about the nature of the user intention. And those assumptions being correct is what I would think of as a clean label. For that step, what are we supposed to be visualizing? There's a cursor, and you want to move that cursor to the right or the left or up and down or maybe move them by a certain offset. So that's one way. Is that the best way to do calibration? So for example, an alternative crazy way that probably is playing a role here is a game like WebGrid where you're just getting a very large amount of data. The person playing a game, where if they are in a state of flow, maybe you can get clean signal as a side effect. Yep. Or is that not an effective way for initial calibration? Yeah, great question. There's a lot to unpack there. So the first thing I would draw a distinction between is sort of open loop versus closed loop. So open loop, what I mean by that is the user is sort of going from zero to one. They have no model at all, and they're trying to get to the place where they have some level of control at all. In that setup, you really need to have some task that gives the user a hint of what you want them to do, such that you can build this mapping again from brain data to output. Then once they have a model, you could imagine them using that model and actually adapting to it and figuring out the right way to use it themselves, and then retraining on that data to give you sort of a boost in performance. There's a lot of challenges associated with both of these techniques, and we can sort of rabbit hole into both of them if you're interested. But the sort of challenge with the open loop task is that the user themselves doesn't get proprioceptive feedback about what they're doing. They don't necessarily perceive themselves or feel the mouse under their hand when they're trying to do an open loop calibration. They're being asked to perform something. Like, imagine if you sort of had your whole right arm numbed and you stuck it in a box and you couldn't see it. So you had no visual feedback and you had no proprioceptive feedback about what the position or activity of your arm was. And now you're asked, okay, given this thing on the screen that's moving from left to right, match that speed. And you basically can try your best to invoke whatever that imagined action is in your brain that's moving the cursor from left to right. But in any situation, you're going to be inaccurate and maybe inconsistent in how you do that task. And so that's sort of the fundamental challenge of open loop. The challenge with closed loop is that once the user's given a model and they're able to start moving the mouse on their own, they're going to very naturally adapt to that model. And that co-adaptation between the model learning what they're doing and the user learning how to use the model may not find you the best sort of global minima. It may be that your first model was noisy in some ways, or, you know, maybe just had some like quirk. There's some like part of the data distribution that didn't cover super well. And the user now figures out because they're, you know, a brilliant user like Nolan, they figure out the right sequence of imagined motions or the right angle they have to hold their hand at to get it to work. And they'll get it to work great, but then the next day they come back to their device and maybe they don't remember exactly all the tricks that they used the previous day. And so there's a complicated sort of feedback cycle here that can emerge and can make it a very difficult debugging process. Okay, there's a lot of really fascinating things there. Yeah, actually, just to stay on the closed loop, I've seen situations. This actually happened watching psychology grad students. They use a piece of software when they don't know how to program themselves. They use a piece of software that somebody else wrote, and it has a bunch of bugs. And they figure out, like, and they've been using it for years. They figure out ways to work around it. Oh, that just happens. Nobody, like, considers they've been using it for years. Yeah. They figure out ways to work around. Oh, that just happens. Like nobody, nobody like considers maybe we should fix this. They just adapt. And that's a really interesting notion that we just said we're really good at adapting, but you need to still, that might not be the optimal. Yeah. Okay. So how do you solve that problem? Do you have to restart from scratch every once in a while kind of thing? Yeah. It's a good question. First and foremost, I would say this is not a solved problem. And for anyone who's listening in academia who works on BCIs, I would also say this is not a problem that's solved by simply scaling channel count. So maybe that can help when you can get sort of richer covariance structures that you can use to exploit when trying to come up with good labeling strategies. But if you're interested in problems that aren't going to be solved inherently by scaling channel count, this is one of them. Yeah, so how do you solve it? It's not a solved problem. That's the first thing I want to make sure it gets across. The second thing is any solution that involves closed loop is going to become a very difficult debugging problem. And one of my sort of general heuristics for choosing what problems to tackle is that you want to choose the one that's going to be the easiest to debug. heuristics for choosing what problems to tackle is that you want to choose the one that's going to be the easiest to debug. Because if you can do that, even if the ceiling is lower, you're going to be able to move faster because you have a tighter iteration loop debugging the problem. And in the open loop setting, there's not a feedback cycle to debug with the user in the loop. And so there's some reason to think that that should be an easier debugging problem. The other thing that's worth understanding is that even in a closed-loop setting, there's no special software magic of how to infer what the user is truly attempting to do. In a closed-loop setting, although they're moving the cursor on the screen, they may be attempting something different than what your model is outputting. So what the model is outputting is not a signal that you can use to retrain if you want to be able to improve the model further. You still have this very complicated guesstimation or unsupervised problem of figuring out what is the true user intention underlying that signal. And so the open loop problem has the nice property of being easy to debug. And the second nice property of, it has all the same information and content as the closed loop scenario. Another thing I want to mention and call out is that this problem doesn't need to be solved in order to give useful control to people. You know, even today with the solutions we have now, and that academia has built up over decades, the level of control that can be given to a user, you know, today is quite useful. It doesn't need to be solved to get to that level of control. But again, I want to build the world's best mouse. I want to make it, you know, so good that it's not even a question that you want it. And to build the world's best mouse, the superhuman version, you really need to nail that problem. And a couple maybe details of previous studies that we've done internally that I think are very interesting to understand when thinking about how to solve this problem. The first is that even when you have ground truth data of what the user is trying to do, and you can get this with an able-bodied monkey, a monkey that has an Erlang device implanted and moving a mouse to control a computer. Even with that ground truth data set, it turns out that the optimal thing to predict to produce high performance BCI is not just the direct control of the mouse. You can imagine, you know, building a data set of what's going on in the brain and what is the mouse exactly doing on the table. And it turns out that if you build the mapping from neural spikes to predict exactly what the mouse is doing, that model will perform worse than a model that is trained to predict sort of higher-level assumptions about what the user might be trying to do. For example, assuming that the monkey is trying to go in a straight line to the target. It turns out that making those assumptions is actually more effective in producing a model than actually predicting the underlying hand movement. So the intention, not the physical movement or whatever. Yeah. There's obviously a very strong correlation between the two, but the intention is a more powerful thing to be chasing. Right. Well, that's also super interesting. I mean, the intention itself is fascinating because, yes, with the BCI here in this case, with the digital telepathy you're acting on the intention not the action which is why there's an experience of like feeling like it's happening before you meant for it to happen that is so cool and that is why you could achieve like superhuman performance probably in terms of the control of the mouse so for open loop just to clarify so whenever the the person is tasked to move the mouse to the right, you said there's not feedback. So they don't get to get that satisfaction of actually getting it to move, right? So you could imagine giving the user feedback on a screen, but it's difficult because at this point, you don't know what they're attempting to do. So what can you show them that would basically give them a signal of I'm doing this correctly or not correctly. So let's take this very specific example, like maybe your calibration task looks like you're trying to move the cursor a certain position offset. So your instructions to the user are, hey, the cursor is here. Now when the cursor disappears, imagine moving it 200 pixels from where it was to the right to be over this target. In that kind of scenario, you could imagine coming up with some sort of consistency metric that you could display to the user of, okay, I know what the spike train looks like on average when you do this action to the right. Maybe I can produce some sort of probabilistic estimate of how likely is that to be the action you took given the latest trial or trajectory that you imagined. And I could give the user some sort of feedback of how consistent are they across different trials. You could also imagine that if the user is prompted with that kind of consistency metric, that maybe they just become more behaviorally engaged to begin with because the task is kind of boring when you don't have any feedback at all. And so there may be benefits to the user experience of showing something on the screen, even if it's not accurate, just because it keeps the user motivated to try to increase that number or push it upwards. So there's a psychology element here. Yeah, absolutely. And again, all of that is UX challenge. How much signal drift is there? Hour to hour, day to day, week to week, month to month. How often do you have to recalibrate because of the signal drift yeah uh so this is uh a problem we've worked on both with nhp non-human primates before our clinical trial and then also with noland during the clinical trial uh maybe the first thing that's worth stating is what the goal is here so the goal is really to enable the user to have a plug and play experience where i guess they don't have to plug anything in but a play experience where they uh you know can use the device whenever they want to however they want to and uh that's really what we're aiming for and so there can be a set of solutions that get to that state without um considering this non-stationarity problem so maybe the first solution here that's important is that they can recalibrate whenever they want this is something that um that nolan has the ability to do today so he can recalibrate whenever they want. This is something that Nolan has the ability to do today. So he can recalibrate the system at 2 a.m. in the middle of the night without his caretaker or parents or friends around to help push a button for him. The other important part of the solution is that when you have a good model calibrated, that you can continue using that without needing to recalibrate it. So how often he has to do this recalibration today depends really on his appetite for performance. There are, we observe sort of a degradation through time of how well any individual model works, but this can be mitigated behaviorally by the user adapting their control strategy. It can also be mitigated through a combination of sort of software features that we provide to the user. For example, we let the user adjust exactly how fast the cursor is moving. We call that the gain, for example, the gain of how fast the cursor reacts to any given input intention. They can also adjust the smoothing, how smooth the output of that cursor intention actually is. They can also adjust the friction, which is how easy is it to stop and hold still. And all these software tools allow the user a great deal of flexibility and troubleshooting mechanisms to be able to solve this problem for themselves. By the way, all of this is done by looking to the right side of the screen, selecting the mixer, and the mixer you have... It's like DJ mode. DJ mode for your VCI. I mean, it's a really well done interface. It's really, really well done. And so, yeah, there's that bias that there's a cursor drift that Nolan talked about in a stream. Although he said that you guys were just playing around with it with him and they're constantly improving. So that could have been just a snapshot of that particular moment, a particular day. But he said that there was this cursor drift and this bias that could be removed by him, I guess, looking to the right side of the screen or left side of the screen to kind of adjust the bias. That's one interface action, I guess, looking to the right side of the screen or the left side of the screen to kind of adjust the bias. That's one interface action, I guess, to adjust the bias. Yeah. So this is actually an idea that comes out of academia. There is some prior work with sort of BrainGate clinical trial participants where they pioneered this idea of bias correction. The way we've done it, I think, is, yeah, it's very productized, very beautiful user experience where the user can essentially flash the cursor over to the side of the screen and it opens up a window where they can actually sort of adjust or tune exactly the bias of the cursor. So bias maybe for people who aren't familiar is just sort of what is the default motion of the cursor if you're imagining nothing. And it turns out that that's one of the first sort of qualia of the cursor control experience that's impacted by neural non-stationarity. Qualia of the cursor experience. I mean, I don't know how else to describe it. Like, you know, I'm not the guy moving things. It's very poetic. I love it. The qualia of the cursor experience. Yeah, I mean, it sounds poetic, but it is deeply true. There is an experience. When it works well, it is a joyful, a really pleasant experience. And when it doesn't work well, it's a very frustrating experience. When it works well, it is a joyful, a really pleasant experience. And when it doesn't work well, it's a very frustrating experience. That's actually the art of UX. It's like you have the possibility to frustrate people or the possibility to give them joy. And at the end of the day, it really is truly the case that UX is how the thing works. And so it's not just like what's showing on the screen. It's also, you know, what control surfaces does a decoder provide the user? We want them to feel like they're in the F1 car, not like the, you know, some like minivan, right? And that really truly is how we think about it. Nolan himself is an F1 fan. So we refer to ourself as a pit crew. He really is truly the F1 driver. And there's different, you know, control surfaces that different kinds of cars and airplanes provide the user. And we take a lot of inspiration from that when designing how the cursor should behave. And maybe one nuance of this is, even details like when you move a mouse on a MacBook trackpad, the sort of response curve of how that input that you give the trackpad translates to cursor movement is different than how it works with a mouse. When you move on the trackpad translates to cursor movement is different than how it works with a mouse. When you move on the trackpad, there's a different response function, a different curve to how much a movement translates to input to the computer than when you do it physically with a mouse. And that's because somebody sat down a long time ago when they're designing the initial input systems to any computer, and they thought through exactly how it feels to use these different systems. And now we're designing sort of the next generation of this input system to a computer, which is entirely done via the brain. And there's no proprioceptive feedback. Again, you don't feel the mouse in your hand. You don't feel the keys under your fingertips. And you want a control surface that still makes it easy and intuitive for the user to understand the state of the system and how to achieve what they want to achieve. And ultimately, the end goal is that that UX is completely, it fades into the background. It becomes something that's so natural and intuitive that it's subconscious to the user. And they just should feel like they have basically direct control over the cursor. It just does what they want it to do. They're not thinking about the implementation of how to make it do what they want it to do. It's just doing what they want it to do. Is there some kind of things along the lines of like Fitt's law where you should move the mouse in a certain kind of way that maximizes your chance to hit the target. I don't even know what I'm asking, but I'm hoping the intention of my question will land on a profound answer. No. Is there some kind of understanding of the laws of UX when it comes to the context of somebody using their brain to control it. Like that's different than actual with a mouse. I think we're in the early stages of discovering those laws. So I wouldn't claim to have solved that problem yet, but there's definitely some things we've learned that make it easier for the user to get stuff done. And there's definitely some things we've learned that make it easier for the user to get stuff done. And it's pretty straightforward when you verbalize it, but takes a while to actually get to that point when you're in the process of debugging this stuff in the trenches. One of those things is that any machine learning system you build has some number of errors. And it matters how those errors translate to the downstream user experience. For example, if you're developing a search algorithm in your photos, if you search for, you know, your friend Joe, and it pulls up a photo of your friend, Josephine, maybe that's not a big deal, because the cost of an error is not that high. In a different scenario, where you're trying to, you know, detect insurance fraud or something like this, and you're directly sending someone to court because of some machine learning model output, then the errors make a lot more sense to be careful about. You want to be very thoughtful about how those errors translate to downstream effects. The same is true in BCI. So for example, if you're building a model that's decoding a velocity output from the brain versus an output where you're trying to modulate the left click, for example, these have sort of different trade-offs of how precise you need to be before it becomes useful to the end user. For velocity, it's okay to be on average correct because the output of the model is integrated through time. So if the user is trying to click at position A and they're currently in position B, they're trying to navigate over time to get between those two points. And as long as the output of the model is on average correct, they can sort of steer it through time with user control loop in the in the mix they can get to the point they want to get to the same is not true of a click for a click you're performing it almost instantly at the scale of you know neurons firing and so you want to be very sure that that click is correct because a false click can be very destructive to the user they might accidentally close the tab that they're trying to you know do something and lose all their progress. They might accidentally hit some send button on some text that is only half composed and reads funny after. So there's different sort of cost functions associated with errors in this space. And part of the UX design is understanding how to build a solution that is, when it's wrong, still useful to the end user. that's so fascinating that assigning cost to every action when uh an error occurs so every action if an error occurs has a certain cost and incorporating that into how you interpret the intention mapping it to the action is really important i didn't, until you said it, realize there's a cost to like sending the text early. It's like a very expensive cost. Yeah, it's super annoying. If you accidentally, like if you're a cursor, imagine if your cursor misclicked every once in a while. That's like super obnoxious. And the worst part of it is, usually when the user is trying to click, they're also holding still because they're over the target they want to hit and they're getting ready to click. Which means that in the data sets trying to click, they're also holding still because they're over the target they want to hit and they're getting ready to click. Which means that in the data sets that we build, on average, it's the case that sort of low speeds or desire to hold still is correlated with when the user is attempting to click. Wow, that is really fascinating. It's also not the case, you know, people think that, oh, click is a binary signal. This must be super easy to decode. Well, yes, it is. But the bar is so much higher for it to become a useful thing for the user. And there's ways to solve this. I mean yes, it is. But the bar is so much higher for it to become a useful thing for the user. And there's ways to solve this. I mean, you can sort of take the compound approach of, well, let's just give the, like, let's take five seconds to click. Let's take a huge window of time so we can be very confident about the answer. But again, world's best mouse. The world's best mouse doesn't take a second to click or 500 milliseconds to click. It takes five milliseconds to click or less. And so if you're aiming for that kind of high bar, then you really want to solve the underlying problem. So maybe this is a good place to ask about how to measure performance, this whole bits per second. Can you explain what you mean by that? Maybe a good place to start is to talk about WebGrid as a game, as a good illustration of the measurement of performance. Yeah. Maybe I'll take one zoom out step there, which is just explaining why we care to measure this at all. So again, our goal is to provide the user the ability to control the computer as well as I can, and hopefully better. And that means that they can do it at the same speed as what I can do. It means that they have access to all the same functionality that I have, including all those little details like command tab, command space, all this stuff. They need to be able to do it with their brain. And with the same level of reliability is what I can do with my muscles. And that's a high bar. And so we intend to measure and quantify every aspect of that to understand how we're progressing towards that goal. There's many ways to measure BPS, by the way. This isn't the only way. But we present the user a grid of targets, and basically we compute a score, which is dependent on how fast and accurately they can select, and then how small are the targets. And the more targets that are on the screen, the smaller they are, the more information you present per click. And so if you think about it from information theory point of view, you can communicate across different information theoretic channels. And one such channel is a typing interface, you could imagine, that's built out of a grid, just like a software keyboard on the screen. And bits per second is a measure that's computed by taking the log of the number of targets on the screen. You can per second is a measure that's computed by taking the log of the number of targets on the screen. You can subtract one if you care to model a keyboard because you have to subtract one for the delete key on the keyboard. But log of the number of targets on the screen times the number of correct selections minus incorrect divided by some time window, for example, 60 seconds. And that's sort of the standard way to measure a cursor control task in academia. And all credit in the world goes to this great professor, Dr. Chenoy of Stanford, who came up with that task. And he's also one of my inspirations for being in the field. So all the credit in the world to him for coming up with a standardized metric to facilitate this kind of bragging rights that we have. Now to say that Nolan is the best in the world at this task with his BCI. It's very important for progress that you have standardized metrics that people can compare across different techniques and approaches. How well does this do? So, yeah, big kudos to him and to all the team at Stanford. Yeah, so for Noland and for me playing this task, there's also different modes that you can configure this task. So the Web Grid task can be presented as just sort of a left click on the screen, or you could have targets that you just dwell over, or you could have targets that you left right click on. You could have targets that are left, right click, middle click, scrolling, clicking and dragging. You know, you can do all sorts of things within this general framework. But the simplest, purest form is just blue targets show up on the screen, blue means left click. That's the simplest form of the game. And the sort of prior records here in academic work and at Neuralink internally with sort of NHPs have all been matched or beaten by Nolan with his Neuralink device. So sort of prior to Neuralink, the sort of world record for a human using the device is somewhere between 4.2 to 4.6 BPS, depending on exactly what paper you read and how you interpret it. Nolan's current record is 8.5 BPS. And again, the sort of median Neuralink performance is 10 BPS. So you can think of it roughly as he's 85% the level of control of a median Neuralinker using their cursor to select blue targets on the screen. And yeah, I think there's a very interesting journey ahead to get us to that same level of 10 BPS performance. It's not the case that sort of the tricks that got us from, you know, 4 to 6 BPS and then 6 to 8 BPS are going to be the ones that get us from that same level of 10 BPS performance. It's not the case that sort of the tricks that got us from, you know, 4 to 6 BPS and then 6 to 8 BPS are going to be the ones that get us from 8 to 10. And in my view, the core challenge here is really the labeling problem. It's how do you understand at a very, very fine resolution what the user is attempting to do. And yeah, I highly encourage folks in academia to work on this problem. What's the journey with Nolan on that quest of increasing the BPS on WebGrid? In March, you said that he selected 89,285 targets on WebGrid. Yep. So he loves this game. He's really serious about improving his performance in this game. So what is that journey of trying to figure out how to improve that performance? How much can that be done on the decoding side? How much can that be done on the decoding side how much can that be done on the calibration side how much can that be done on the nolan side of like figuring out how to convey his intention more cleanly yeah no this is a great question so in my view one of the primary reasons why nolan's performance is so good is because of nolan nolan is extremely focused and very energetic he'll play web grid sometimes for like four hours in the middle of the night like from 2 a.m to 6 a.m he'll be playing web grid just because he wants to push it to the limits of what he can do and uh you know this is not us like asking him to do that i want to be clear like we're not saying hey you should play web grid tonight we just gave him the game as part of our research you know and he is able to play independently and practice whenever he wants. And he really pushes hard to push it. The technology is the absolute limit. And he views it as like, you know, his job really to make us be the bottleneck. And boy, has he done that well. And so that's the first thing to acknowledge is that, you know, he is extremely motivated to make this work. I've also had the privilege to meet other, you know, clinical trial participants from BrainGate and other trials, and they very much share the same attitude of like, they view this as their life's work to, you know, advance the technology as much as they can. And if that means selecting targets on the screen for four hours from 2am to 6am, then so be it. And there's something extremely admirable about that that's worth calling out. Okay, so now how do you sort of get from where he started, which is no cursor control to 8BPS? So, I mean, when he started, there's a huge amount of learning to do on his side and our side to figure out what's the most intuitive control for him. And the most intuitive control for him is sort of, you have to find the set intersection of what do we have the signal to decode? So we don't pick up, you know, every single neuron in the motor cortex, which means we don't have representation for every part of the body. So there may be some signals that we have better sort of decode performance on than others. For example, on his left hand, we have a lot of difficulty distinguishing his left ring finger from his left middle finger. But on his right hand, we have a good, you know, good control and good modulation detected from the neurons that we're able to record for his pinky and his thumb and his index finger. So you can imagine how these different subspaces of modulated activity intersect with what's the most intuitive for him. And this has evolved over time. So once we gave him the ability to calibrate models on his own, he was able to go and explore various different ways to imagine controlling the cursor. For example, he could imagine controlling the cursor by wiggling his wrist side to side, or by moving his entire arm, by, I think at one point, into his feet. You know, he tried like a whole bunch of stuff to explore the space of what is the most natural way for him to control the cursor that at the same time is easy for us to decode. Just to clarify, it's through the body mapping procedure there you're able to figure out which finger he can move? Yes, yes. That's one way to do it. Maybe one nuance of when he's doing it, he can imagine many more things than we represent in that visual on the screen. So we show him sort of abstractly, here's a cursor. You figure out what works the best for you. And we obviously have hints about what will work best from that body mapping procedure of, you know, we know that this particular action we can represent well, but it's really up to him to go and explore and figure out what works the best but at which point does he no longer visualize the movement of his body and he's just visualizing the movement of the cursor yeah how quickly does he go from how quickly does it get there so this happened on a tuesday i remember this day very clearly because at some point during the during the day uh it looked like he wasn't doing super well it looked like the model wasn't performing super well and he was like getting distracted but he actually it wasn't the case like what actually happened was he was trying something new where he was just controlling the cursor so he wasn't imagining moving his hand anymore he was just imagining i don't know what it is some like abstract intention to move the cursor on the screen and uh i cannot tell you what the difference between those two things are. I really truly cannot. He's tried to explain it to me before. I cannot, you know, give a first person account of what that's like. But the expletives that he uttered in that moment were, you know, enough to suggest that it was a very qualitatively different experience for him to just have direct neural control over a cursor. direct neural control over a cursor. I wonder if there's a way through UX to encourage a human being to discover that. Because he discovered it, like you said to me, that he's a pioneer. So he discovered that on his own through all of this, the process of trying to move the cursor with different kinds of intentions. trying to move the cursor with different kinds of intentions. But that is clearly a really powerful thing to arrive at, which is to let go of trying to control the fingers and the hand and control the actual digital device of your mind. That's right. UX is how it works. And the ideal UX is one that the user doesn't have to think about what they need to do in order to get it done. It just does it. That is so fascinating. But I wonder on the biological side, how long it takes for the brain to adapt. Yeah. So is it just simply learning like high level software or is there like a neuroplasticity component where like the brain is adjusting slowly? Yeah. The truth is is i don't know um i'm very excited to see with sort of the second participant that we implant what the you know what the journey is like for them because we'll have learned a lot more potentially we can help them understand and explore that direction more quickly this is something i didn't you know this wasn't me prompting nolan to go try this He was just exploring how to use his device and figure it out himself. But now that we know that that's a possibility, that maybe there's a way to, you know, for example, hint the user, don't try super hard during calibration. Just do something that feels natural. Or just directly control the cursor. You know, don't imagine explicit action. And from there, we should be able to hopefully understand how this is for somebody who has not experienced that before. Maybe that's the default mode of operation for them. You don't have to go through this intermediate phase of explicit motions. Or maybe if that naturally happens for people, you can just occasionally encourage them to allow themselves to move the cursor. Actually, sometimes, just like with a four-minute mile, just the knowledge that that's possible... Pushes you to do it. Yeah, enables you to do it. And then it becomes trivial. And then it also makes you wonder that's the cool thing about humans if once there's a lot more human participants they will discover things that are possible yes and share their experiences yeah and share each other and that because of them sharing it they'll be able to do it uh all of a sudden that's that's unlocked for everybody yeah because just the knowledge sometimes is the thing that enables it to do it. Yeah, I mean, just coming on that too, we've probably tried like a thousand different ways to do various aspects of decoding. And now we know what the right subspace is to continue exploring further. Again, thanks to Nolan and the many hours he's put into this. And so even just that, help constraints or the beam search of different approaches that we could explore really helps accelerate for the next person. You know, the set of things that we'll get to try on day one, how fast we hope to get them to useful control, how fast we can enable them to use it independently and to get value out of the system. So yeah, massive hats off to Nolan and all the participants that came before him to make this technology a reality. So how often are the updates to the decoder? to reality. So how often are the updates to the decoder? Because Nolan mentioned like, okay, there's a new update that we're working on and that in the stream he said he plays the snake game because it's like super hard. It's a good way for him to test like how good the update is. So, and he says like sometimes the update is a step backwards. It's like a, it's a constant like iteration. So how often, like what does the update entail? Is it mostly on the decoder side? Yeah, a constant iteration. So how often, what does the update entail? Is it mostly on the decoder side? Yeah, a couple comments. So one is it's probably worth trying distinction between research sessions where we're actively trying different things to understand what the best approach is versus independent use where we want to have the ability to just go use the device how anybody would want to use their MacBook. And so what he's referring to is I think usually in the context of a research session where we're trying, you know, many, many different approaches to, you know, even unsupervised approaches like we talked about earlier to try to come up with better ways to estimate his true intention and more accurately decode it. And in those scenarios, I mean, we try in any given session, he'll sometimes work for like eight hours a day. And so that can be, you know, hundreds of different models that we would try in that day, like a lot of different things. Now, it's also worth noting that we update the application he uses quite frequently. I think, you know, sometimes up to like four or five times a day, we'll update his application with different features or bug fixes or feedback that he's given us. So he's been able to, he's a very articulate person who is part of the solution. He's not a complaining person. He says, hey, here's this thing that I've discovered is not optimal in my flow. Here's some ideas how to fix it. Let me know what your thoughts are. Let's figure out how to solve it. And it often happens that those things are addressed within a couple hours of him giving us his feedback. That's the kind of iteration cycle we'll have. And so sometimes at the beginning of the session, he'll give us feedback. And at the end of the session, he's giving us feedback on the next iteration of that process or that setup. That's fascinating. Because one of the things you mentioned that there was 271 pages of notes taken from the BCI sessions, and this was just in March. So one of the amazing things about human beings that they can provide, especially ones who are smart and excited and all like positive and good vibes like Nolan, that they can provide feedback continuous feedback yeah it also requires just to brag on the team a little bit i work with a lot of exceptional people and it requires the team being absolutely laser focused on the user and what will be the best for them and it requires like a level of commitment of okay this is what the user feedback was i have all these meetings we're going to skip that today and we're going to do this. That level of focus and commitment is I would say underappreciated in the world. And also, you obviously have to have the talent to be able to execute on these things effectively. And yeah, we have that in loads. Yeah, and this is such an interesting space of UX design because you have there's so many unknowns here. And I can tell UX is difficult because of how many people do it poorly. It's just not a trivial thing. Yeah. UX is not something that you can always solve by just constant iterating on different things. Sometimes you really need to step back and think globally, am I even in the right minima to be chasing down for a solution? There's a lot of problems in which fast iteration cycle is the predictor of how successful you will be. As a good example, in an RL simulation, for example, the more frequently you get a reward, the faster you can progress. It's just an easier learning problem the more frequently you get a reward, the faster you can progress. It's just an easier learning problem, the more frequently you get feedback. But UX is not that way. I mean, users are actually quite often wrong about what the right solution is. And it requires a deep understanding of the technical system and what's possible, combined with what the problem is you're trying to solve, not just how the user expressed it, but what the true underlying problem is to actually get to the right place. Yeah. That's the old, like stories of Steve jobs, like rolling in there. Like, yeah, the user is a good, is a useful signal, but it's not a perfect signal. And sometimes you have to remove the floppy disk drive or whatever the, I forgot all the crazy stories of Steve jobs, like making wild, uh, design decisions. But there, some, some of it is aesthetic. That some of it is about the love you put into the design, which is very much a Steve Jobs, Johnny Ive type thing. But when you have a human being using their brain to interact with it, it also is deeply about function. It's not just aesthetic. You have to empathize with the human being before you while not always listening to them directly. You have to deeply empathize. It's fascinating. It's really, really fascinating. And at the same time, iterate. But not iterate in small ways. Sometimes a complete, like rebuilding the design. He said that, Nolan said in the early days, the UX sucked, but you improved quickly. What was that journey like? Yeah, I mean, I'll give one concrete example. So he really wanted to be able to read manga. This is something that he, I mean, it sounds like a simple thing, but it's actually a really big deal for him. And he couldn't do it with this mouse stick. It just, it wasn't accessible. You can't scroll with a mouse stick on his iPad and on the website that he wanted to be able to use to read the newest manga. And so- Might be a good quick pause to say the mouse stick is the thing he's using, holding a stick in his mouth to scroll on a tablet. Right, yeah. It's basically, you can imagine it's a stylus that you hold between your teeth. Yeah. It's basically a very long stylus. And it's exhausting, it hurts, and it's inefficient. Yeah. And maybe it's also worth calling out, there are other alternative assistive technologies, but that particular situation Nolan's in, and this is not uncommon, and I think it's also not well understood by folks, is that he's relatively spastic, so he'll have muscle spasms from time to time. And so any assistive technology that requires him to be positioned directly in front of a camera, for example, an eye tracker, or anything that requires him to put something in his mouth, just is a no-go, because he'll either be shifted out of frame when he has a spasm, or if he has something in his mouth, it'll stab him in the face, you know, if the spasm's too hard. So these kinds of considerations are important when thinking about what advantages a PCI has in someone's life. If it fits ergonomically into your life in a way that you can use it independently when your caretaker is not there, wherever you want to, either in the bed or in the chair, depending on your comfort level and your desire to have pressure sores, all these factors matter a lot in how good the solution is in that user's life. So one of these very fun examples is scroll. So again, Manga is something he wanted to be able to read. And there's many ways to do scroll with the BCI. You can imagine like different gestures, for example, the user could do that would move the page. But scroll is a very fascinating control surface because it's a huge thing on the screen in front of you. So any sort of jitter in the model output, any sort of error in the model output causes like an earthquake on the screen. Like you really don't want to have your manga page that you're trying to read be shifted up and down a few pixels just because, you know, your scroll decoder is not completely accurate. And so this was an example where we had to figure out how to formulate the problem in a way that the errors of the system, whenever they do occur, and we'll do our best to minimize them, but whenever those errors do occur, that it doesn't interrupt the qualia, again, of the experience that the user is having. It doesn't interrupt their flow of reading their book. And so what we ended up building is this really brilliant feature. This is a teammate named Bruce who worked on this really brilliant work called QuickScroll. And QuickScroll basically looks at the screen and it identifies where on the screen are scroll bars. And it does this by deeply integrating with macOS to understand where are the scroll bars actively present on the screen using the sort of accessibility tree that's available to macOS apps. And we identified where those scroll bars are and provided a BCI scroll bar. And the BCI scroll bar. And the BCI scroll bar looks similar to a normal scroll bar, but it behaves very differently in that once you sort of move over to it, your cursor sort of morphs onto it. It sort of attaches or latches onto it. And then once you push up or down in the same way that you'd use a push to control the normal cursor, it actually moves the screen for you. So it's basically like remapping the velocity to a scroll action. And the reason that feels so natural and intuitive is that when you move over to attach to it, it feels like magnetic. So you're like sort of stuck onto it. And then it's one continuous action. You don't have to like switch your imagined movement. You sort of snap onto it and then you're good to go. You just immediately can start pulling the page down or pushing it up. And even once you get that right, there's so many little nuances of how the scroll behavior works to make it natural and intuitive. So one example is momentum. Like when you scroll a page with your fingers on the screen, you know, you, you actually have some like flow, like it doesn't just stop right when you lift your finger up. The same is true with BCI scroll. So we had to spend some time to figure out what are the right nuances when you don't feel the screen under your fingertip anymore. What is the right sort of dynamic or what's the right nuances when you don't feel the screen under your fingertip anymore what is the right sort of dynamic or what's the right amount of page give if you will uh when you push it to make it flow the right amount for the user to have a natural experience reading their book and there's a million i mean there's i could tell you like there's so many little minutiae of how exactly that scroll works that we spent probably like a month getting right to make that feel extremely natural and uh and easy for the user to navigate. I mean, even the scroll on a smartphone with your finger feels extremely natural and pleasant. And it probably takes an extremely long time to get that right. And actually, the same kind of visionary UX design that we were talking about. Don't always listen to the users, but also listen to them and also have visionary, big, throw-everything-out-thing from first principles, but also not. Yeah, yeah. By the way, it just makes me think that scroll bars on the desktop probably have stagnated and never taken that. Because the snap, same as the snap to grid, snap to scroll bar action you're talking about, is something that could potentially be extremely useful in the desktop setting. Yeah. Even just for users to just improve the experience. Because the current scroll bar experience in the desktop is horrible. Yeah, agreed. It's hard to find, hard to control. There's not a momentum. And the intention should be clear. When I start moving towards the scroll bar, there should be a snapping to the scroll bar action. But of course, maybe I'm okay paying that cost, but there's hundreds of millions of people paying that cost nonstop. But in this case, this is necessary because there's an extra cost paid by Nolan for the jitteriness. So you have to switch between the scrolling and the reading. There has to be a phase shift between the two. Like, when you're scrolling, you're scrolling. Right, right. So that is one drawback of the current approach. Maybe one other just sort of case study here. So again, UX is how it works. And we think about that holistically from like the, even the feature detection level of what we detect in the brain to how we design the decoder, what we choose to decode to then how it works once it's being used by the user. So another good example in that sort of how it works once they're actually using the decoder, you know, the output that's displayed on the screen is not just what the decoder says. It's also a function of, you know, what's going on on the screen. So we can understand, for example, that, you know, when you're trying to close a tab, that very small, stupid little X that's extremely tiny, which is hard to get precisely hit if you're dealing with sort of a noisy output of the decoder, we can understand that that is a small little X you might be trying to hit and actually make it a bigger target for you. Similar to how when you're typing on your phone, if you're, you know, used to like the iOS keyboard, for example, it actually adapts the target size of individual keys based on the underlying language model. So it'll actually understand if I'm typing, hey, I'm going to see L, it'll make the E key bigger because it knows Lex is the person I'm going to go see. And so that kind of, you know, predictiveness can make the experience much more smooth, even without, you know, improvements to the underlying decoder or feature detection part of the stack. So we do that with a feature called magnetic targets. We actually index the screen and we understand, okay, these are the places that are very small targets that might be difficult to hit. Here's the kind of cursor dynamics around that location that might be indicative of the user trying to select it. Let's make it easier. Let's blow up the size of it in a way that makes it easier for the user to sort of snap onto that target. So all these little details, they matter a lot in helping the user be independent in their day-to-day living. So how much of the work on the decoder is generalizable to P2, P3, P4, P5, Pn? How do you improve the decoder in a way that's generalizable? Yeah, great question. So the underlying signal we're trying to decode is going to look very different in P2 than in P1. For example, channel number 345 is going to mean something different in user 1 than it will in user 2, just because that electrode that corresponds with channel 345 is going to be next to a different neuron in user 1 versus user 2. But the approaches, the methods, the user experience of how do you get the right sort of behavioral pattern from the user to associate with that neural signal, we hope that will translate over multiple generations of users. And beyond that, it's very, very possible, in fact, quite likely that we've overfit to sort of Nolan's user experience, desires, and preferences. And so what I hope to see is that, you know, when we get a second, third, fourth participant, that we find sort of what the right wide minimas are that cover all the cases, that make it more intuitive for everyone. And hopefully there's a cross-pollination of things where, oh, we didn't think about that with this user because, you know, they can speak. But with this user who just can fundamentally not speak at all, this user experience is not optimal. And that will actually, those improvements that we make there should hopefully translate then to even people who can speak but don't feel comfortable doing so because we're in a public setting like their doctor's office. So the actual mechanism of open-loop labeling and then closed-loop labeling would be the same and hopefully can generalize across the different users as they're doing the calibration step. And the calibration step is pretty cool. I mean, that in itself, the interesting thing about WebGrid, which is like closed-loop, it's like fun. I love it when there's like, there used to be kind of idea of human computation which is using actions that human would want to do anyway to get a lot of signal from yeah and like web grid is that like a nice video game that also serves as great calibration it's so funny this is i've heard this reaction so many times before sort of. Before the first user was implanted, we had an internal perception that the first user would not find this fun. And so we thought really quite a bit actually about, should we build other games that are more interesting for the user so we can get this kind of data and help facilitate research for long duration and stuff like this? It turns out that people love this game. I always loved it, but I didn't know that that was a shared perception. Yeah, and just in case it's not clear web grid is there's a a grid of let's say 35 by 35 uh cells and one of them lights up blue and you have to move your mouse over that and click on it and if you miss it and it's red and i put this game for so many hours so many hours and what's your record, you said? I think I have the highest at Neuralink right now. My record is 17 BPS. 17 BPS. Which is about, if you imagine that 35 by 35 grade, you're hitting about 100 trials per minute. So 100 correct selections in that one-minute window. So you're averaging about between 500, 600 milliseconds per selection. So one of the reasons I think I struggle with that game is I'm such a keyboard person. So everything is done with your keyboard. If I can avoid touching the mouse, it's great. So how can you explain your high performance? I have like a whole ritual I go through when I play WebCode. So it's actually like a diet plan associated with this whole thing. So the first thing... I have to fast for five days. I have to go up to the mountains. Actually, it's a whole thing. So the first thing- I have to fast for five days. I have to go up to the mountains. Actually, it kind of, I mean, the fasting thing is important. So this is like, you know- It focuses the mind, yeah. Yeah, it's true. So what I do is I actually, I don't eat for a little bit beforehand. And then I'll actually eat like a ton of peanut butter right before I play. And I get like- This is a real thing. This is a real thing, yeah. And then it has to be really late at night. This is, again, a night owl thing I think we share. But it has to be like, you know, midnight, 2 a.m. kind of time window. And I have a very specific like physical position I'll sit in, which is I used to be, I was homeschooled growing up. And so I did most of my work like on the floor, just like in my bedroom or whatever. And so I have a very specific situation. On the floor. On the floor that I sit and play. And then you have to make sure like there's not a lot of weight on your elbow when you're playing so that you can move quickly. And then I turn the gain of the cursor so the speed of the cursor way, way up. So it's like small motions that actually move the cursor. Are you moving with your wrist or you're never... I move with my fingers. So my wrist is almost completely still. I'm just moving my fingers. Yeah. You know those, just on a small tangent, which I've been meaning to go down this rabbit hole of people that set the world record in Tetris. Those folks, they're playing. There's a way to. Did you see this? It seems like all the fingers are moving. Yeah. You could find a way to do it where it's using a loophole, like a bug, that you can do some incredibly fast stuff. So it's along that line, but not quite. But you do realize there'll be like a few programmers right now listening to this who will fast and eat peanut butter. Yeah, please, please break my record. I mean, the reason I did this literally was just because I wanted the bar to be high. Like I wanted the number that we aim for should not be like the median performance. It should be like, it should be able to beat all of us at least like that should be the minimum bar. What do you think is possible? Like 20? Yeah, I don't know what the limit, I mean, the limits you can calculate just in terms of like screen refresh rate and like cursor That should be the minimum bar. What do you think is possible? Like 20? Yeah, I don't know what the limit, I mean, the limits, you can calculate just in terms of like screen refresh rate and like cursor immediately jump into the next target. But there's, I mean, I'm sure there's limits before that with just sort of reaction time and visual perception and things like this. I'd guess it's in the below 40, but above 20, somewhere in there. It's probably the right, there I never be thinking about. It also matters like how difficult the task is. You can imagine like some people might be able to do like 10 000 targets on the screen and maybe they can do better that way um so there's some like task optimizations you could do to try to boost your performance as well what uh do you think it takes for nolan to be able to do above eight five to keep increasing that number you said like every increase in the number might require different. Yeah. Different improvements in the system. Yeah. I think the nature of this work is the first, the first answer that's important to say is, I don't know. This is, you know, edge of the research. So again, nobody's gotten to that number before. So what's next is going to be a, you know, heuristic, a guess from my part. What we've seen historically is that different parts of the stack become bottlenecks at different time points so you know when i first joined erlink like three years ago or so one of the major problems was just the latency of the bluetooth connection it was just like the radio device wasn't super good it was an early revision of the implant and uh it just like no matter how good your decoder was if your thing is updating every 30 milliseconds or 50 milliseconds it's just going to be choppy and uh no matter how good your decoder was, if your thing is updating every 30 milliseconds or 50 milliseconds, it's just going to be choppy. And no matter how good you are, that's going to be frustrating and lead to challenges. So, you know, at that point, it was very clear that the main challenge is just get the data off the device in a very reliable way such that you can enable the next challenge to be to be tackled. And then, you know, at some point, it was, you know, actually the modeling challenge of how do you just build a good mapping, like the supervised learning problem of you have a bunch of data and you have a label you're trying to predict. Just what is the right neuro decoder architecture and hyperparameters to optimize that? That was the problem for a bit. And once you solve that, it became a different bottleneck. I think the next bottleneck after that was actually just sort of software stability and reliability. Spelman after that was actually just sort of software stability and reliability. You know, if you have widely varying sort of inference latency in your system, or your, you know, your app just lags out every once in a while, it decreases your ability to maintain and get in a state of flow, and it basically just disrupts your control experience. And so there's a variety of different software bugs and improvements we made that basically increased the performance of the system, made it much more reliable, much more stable, and led to a state where we could reliably collect data to build better models with. So that was a bottleneck for a while. It's just sort of like the software stack itself. If I were to guess right now, there's sort of two major directions you could think about for improving BPS further. The first major direction is labeling. So labeling is, again, this fundamental challenge of given a window of time where the user is expressing some behavioral intent, what are they really trying to do at the granularity of every millisecond? And that, again, is a task design problem. It's a UX problem. It's a machine learning problem. It's a software problem. Sort of touches all those different domains. The second thing you can think about to improve BPS further is either completely changing the thing you're dec about to improve BPS further is either completely changing the thing you're decoding or just extending the number of things that you're decoding. So this is sort of in the direction of functionality. Basically, you can imagine giving more clicks, for example, a left click, a right click, a middle click, different actions like click and drag, for example, and that can improve the effective bitrate of your communication processes. If you're trying to allow the user to express themselves through any given communication channel, you can measure that with bits per second. But what actually matters at the end of the day is how effective are they at navigating their computer. And so from the perspective of the downstream tasks that you care about, functionality and extending functionality is something we're very interested in, because not only can it improve the sort of number of BPS, but it can also improve the downstream sort of independence that the user has and the skill and efficiency with which they can operate their computer would the number of threads increasing also potentially help yes short answer is yes it's a bit nuanced how that curve uh or how that manifests in uh in the numbers so what you'll see is that if you sort of plot of number of channels that you're using for decode versus either the offline metric of how good you are at decoding or the online metric of, in practice, how good is the user at using this device, you see roughly a log curve. So as you move further out in number of channels, you get a corresponding logarithmic improvement in control quality and offline validation metrics. corresponding sort of logarithmic improvement in control quality and offline validation metrics. The important nuance here is that each channel corresponds with a specific, you know, represented intention in the brain. So for example, if you have a channel 254, it might correspond with moving to the right, channel 256 might mean move to the left. If you want to expand the number of functions you want to control, you really want to have a broader set of channels that covers a broader set of Imagine movements. You can think of it kind of like Mr. Potato Man, actually. If you had a bunch of different Imagine movements you could do, how would you map those Imagine movements to input to a computer? You can imagine handwriting to output characters on the screen. You can imagine just typing with your fingers and have that output text on the screen. You can imagine different finger modulations for different clicks. You can imagine wiggling your big nose for opening some menu or wiggling your big toe to have command tab occur or something like this. So it's really the amount of different actions you can take in the world depends on how many channels you have and the information content that they carry. Right, so that's more about the number of actions. So actually, as you increase the number of threads, that's more about increasing the number of actions you're able to perform. One other nuance there that is worth mentioning. So again, our goal is really to enable a user with paralysis to control the computer as fast as I can. So that's BPS with all the same functionality I have, which is what we just talked about, but then also as reliably as I can. And that last point is very related to channel count discussion. So as you scale up number of channels, the relative importance of any particular feature of your model input to the output control of the user diminishes, which means that if the sort of neural non-stationarity effect is per channel, or if the noise is independent, such that more channels means on average less output effect, then your reliability of your system will improve. So one sort of core thesis that at least I have is that scaling channel count should improve the reliability of the system without any work on the decoder itself. Can you linger on reliability here? So first of all, when you say non-stationarity of the signal, which aspect are you referring to? Yeah, so maybe let's talk briefly what the actual underlying signal looks like. So again, I spoke very briefly at the beginning about how when you imagine moving to the right or imagine moving to the left, neurons might fire more or less. And the frequency content of that signal, at least in the motor cortex, is very correlated with the output intention or the behavioral task that the user is doing. You can imagine, actually, this is not obvious, that rate coding, which is the name of that phenomenon, is like the only way the brain could represent information. You can imagine many different ways in which the brain could encode intention. And there's actually evidence like in bats, for example, that there's temporal codes, so timing codes, of like exactly when particular neurons fire is the mechanism of information representation. But at least in the motor cortex, there's substantial evidence that it's rate coding, or at least one first-order effect is that it's rate coding. So then if the brain is representing information by changing the frequency of a neuron firing, what really matters is the delta between the baseline state of the neuron and what it looks like when it's modulated. And what we've observed and what has also been observed in academic work is that that baseline rate, sort of the, if you're to target the scale, if you imagine that analogy for like measuring, you know, flour or something when you're baking, that baseline state of how much the pot weighs is actually different day to day. And so if what you're trying to measure is how much rice is in the pot, you're going to get a different measurement different days because you're measuring with different pots. So that baseline rate shifting is really the thing that, at least from a first-order description of the problem, is what's causing this downstream bias. There can be other effects, nonlinear effects on top of that, but at least at a very first-order description of the problem, that's what we observe day-to-day, is that the baseline firing rate of any particular neuron or observed on a particular channel is changing. So can you just adjust to the baseline to make it relative to the baseline non-stop? Yeah, this is a great question. So with monkeys, we have found various ways to do this. One example way to do this is you ask them to do some behavioral task, like play the game with a joystick. You measure what's going on in the brain. You compute some mean of what's going on across all the input features, and you subtract that in the input when you're doing your BCI session. Works super well. For whatever reason, that doesn't work super well with Nolan. I actually don't know the full reason why, but I can imagine several explanations. One such explanation could be that the context effect difference between some open loop task and some closed loop task is much more significant with Nolan than it is with Monkey. Maybe in this open loop task, he's watching the Lex Freeman podcast while he's doing the task, or he's whistling and listening to music and talking with his friend and asking his mom what's for dinner while he's doing this task. And so the exact sort of difference in context between those two states may be much larger and thus lead to a bigger generalization gap between the features that you're normalizing at sort of open loop time and what you're trying to use at closed loop time. That's interesting. Just on that point, it's kind of incredible to watch Nolan be able to multitask, to do multiple tasks at the same time, to be able to move the mouse cursor effectively while talking and while being nervous because he's talking in front of me. Kicking my ass in chest too, yeah. Kicking your ass and talk trash while doing it. So all at the same time. And yes, if you're trying to normalize to the baseline, that might throw everything off. Boy, is that interesting. Maybe one comment on that too. For folks that aren't familiar with assistive technology, I think there's a common belief that, well, why can't you just use an eye tracker or something like this for helping somebody move a mouse on the screen? And it's a really a fair question and one that I actually did was not confident before Stroom Noland that this was going to be a profoundly transformative technology for people like him. And I'm very confident now that it will be, but the reasons are subtle. It really has to do with ergonomically how it fits into their life. Even if you can just offer the same level of control as what they would have with an eye tracker or with a mouse stick, but you don't need to have that thing in your face. You don't need to be positioned a certain way. You don't need your caretaker to be around to set it up for you. You can activate it when you want, how you want, wherever you want. That level of independence is so game-changing for people. It means that they can text a friend at night privately without their mom needing to be in the loop. It means that they can open up and browse the internet at 2 a.m. when nobody's around to set their iPad up for them. This is a profoundly game-changing thing for folks in that situation. And this is even before we start talking about folks that may not be able to communicate at all or ask for help when they want to. This can be potentially the only link that they have to the outside world. And yeah, that one doesn't, I think, need explanation of why that's so impactful. You mentioned neural decoder. How much machine learning is in the decoder? How much magic? How much science? How much art? How difficult is it to come up with a decoder that figures out what these sequence of spikes mean? Yeah, good question. There's a couple of different ways to answer this. So maybe I'll zoom out briefly first, and then I'll go down one of the rabbit holes. So the zoomed out view is that building the decoder is really the process of building the dataset, plus compiling it into the weights. And each of those steps is important. The direction, I think, of further improvement is primarily going to be in the data set side of how do you construct the optimal labels for the model. But there's an entirely separate challenge of then how do you compile the best model. And so I'll go briefly down the second one, down the second rabbit hole. One of the main challenges with designing the optimal model for BCI is that offline metrics don't necessarily correspond to online metrics. It's fundamentally a control problem. The user is trying to control something on the screen and the exact sort of user experience of how you output the intention impacts their ability to control. So for example, if you just look at validation loss as predicted by your model, there can be multiple ways to achieve the same validation loss. Not all of them are equally controllable by the end user. And so it might be as simple as saying, oh, you could just add auxiliary loss terms that help you capture the thing that actually matters. But this is a very complex, nuanced process. So how you turn the labels into the model is more of a nuanced process than just a standard supervised learning problem. One very fascinating anecdote here, we've tried many different sort of neural network architectures that translate brain data to velocity outputs, for example. And one example that stuck in my brain from a couple years ago now is at one point, we were using just fully connected networks to decode the brain activity. We tried an A-B test where we were measuring the relative performance in online control sessions of sort of 1D convolution over the input signal. So if you imagine per channel, you have a sliding window that's producing some convolved feature for each of those input sequences for every single channel simultaneously. You can actually get better validation metrics, meaning you're fitting the data better, and it's generalizing better in offline data. If you use this convolutional architecture, you're reducing parameters, it's sort of a standard procedure when you're dealing with time series data. Now, it turns out that when using that model online, the controllability was worse, was far worse, even though the offline metrics were better. And there can be many ways to interpret that. But what that taught me at least was that, hey, it's at least the case right now that if you were to just throw a bunch of compute at this problem, and you were trying to sort of hyperparameter optimize or, you know, let some GPT model hard code or come up with or invent many different solutions, if you were just optimizing for loss, it would not be sufficient, which means that there's still some inherent modeling gap here. There's still some artistry left to be uncovered here of how to get your model to scale with more compute, and that may be fundamentally a labeling problem, but there may be other components to this as well. Is it data constraint at this time? Which is what it sounds like. How do you get a lot of good labels? Yeah, I think it's data quality constrained, not necessarily data quantity constrained. But even just the quantity, because it has to be trained on the interactions, I guess there's not that many interactions. Yeah, so it depends what version of this you're talking about. So if you're talking about, let's say, the simplest example of just 2D velocity, then I think, yeah, data quality is the main thing. If you're talking about how to build a multifunction output that lets you do all the inputs to the computer that you and I can do, then it's actually a much more sophisticated and nuanced modeling challenge. Because now you need to think about not just when the user is left-clicking, but when you're building the left-click model, you also need to be thinking about how to make sure it doesn't fire when they're trying to right-click or when they're trying to move the mouse. doesn't fire when they're trying to right-click or when they're trying to move the mouse. So one example of an interesting bug from like sort of week one of BCI with Nolan was when he moved the mouse, the click signal sort of dropped off a cliff. And when he stopped, the click signal went up. So again, there's a contamination between the two inputs. Another good example was at one point he was trying to do sort of a left-click in DRAD. And the minute he started moving, the left-click signal dropped off a cliff. So again, because there's some contamination between the two signals, you need to come up with some way to either in the dataset or in the model, build robustness against this kind of you can think of it like overfitting, but really it's just that the model has not seen this kind of variability before. So you need to find some way to help the model with that. This is super cool, because it feels like all of this is very solvable, but it's hard. Yes, it is fundamentally an engineering challenge. This is important to emphasize. And it's also important to emphasize that it may not need fundamentally new techniques, which means that people who work on, let's say, unsupervised speech classification using CTC loss, for example, with internal theory, they could potentially have very applicable skills to this. to Siri, they could potentially have very applicable skills to this. So what things are you excited about in the future development of the software stack on Neuralink? Everything we've been talking about, the decoding, the UX. I think there's some I'm excited about, like something I'm excited about from the technology side, and some I'm excited about for understanding how this technology is going to be best situated for entering the world. So I'll work backwards. On the technology entering the world side of things, I'm really excited to understand how this device works for folks that, you know, cannot speak at all, that have no ability to sort of bootstrap themselves into useful control by voice command, for example, and are extremely limited in their current capabilities. I think that will be an incredibly useful signal for us to understand, I mean, really what is an existential threat for all startups, which is product market fit. Does this device have the capacity and potential to transform people's lives in the current state? And if not, what are the gaps? And if there are gaps, how do we solve them most efficiently? So that's what I'm very excited about for the next year or so of clinical trial operations. The technology side, I'm quite excited about basically everything we're doing. I think it's going to be awesome. The technology side, I'm quite excited about basically everything we're doing. I think it's going to be awesome. The most prominent one I would say is scaling channel count. So right now we have a thousand channel device. The next version will have between three and six thousand channels. And I would expect that curve to continue in the future. And it's unclear what set of problems will just disappear completely at that scale. And what set of problems will remain and require further focus. And so I'm excited about the clarity of gradient that that gives us in terms of the user experience that we choose to focus our time and resources on and and also in terms of the you know even things as simple as non-stationarity like does that problem just completely go away at that scale or do we need to come up with new creative ux is still even at that point um and also when we get to that time point when we start expanding out dramatically the set of functions that you can output from one brain, how to deal with all the nuances of both the user experience of not being able to feel the different keys under your fingertips, but still need to be able to modulate all of them in synchrony to achieve the thing you want. And again, you don't have that properly set to feedback loop, so how can you make that intuitive for a user to control a high-dimensional control surface without feeling the thing physically? I think that's going to be a super interesting problem. I'm also quite excited to understand, you know, do these scaling laws continue? Like as you scale channel count, how much further out do you go before that saturation point is truly hit? And it's not obvious today. I think we only know what's in the sort of interpolation space. We only know what's between zero and 1,024, but we don't know what's beyond that. And then there's a whole sort of like range of interesting sort of neuroscience and brain questions, which is when you stick more stuff in the brain in more places, you get to learn much more quickly about what those brain regions represent. And so I'm excited about that fundamental neuroscience learning, which is also important for figuring out how to most efficiently insert electrodes in the future. So yeah, I think all those dimensions I'm really, really excited about. And that doesn't even get close to touching the sort of software stack that we work on every single day and what we're working on right now. Yeah, it seems virtually impossible to me that a thousand electrodes is where it saturates. It feels like this would be one of those silly notions in the future where obviously you should have millions of electrodes and this and this is where like the true breakthroughs happen you tweeted oh some thoughts are most precisely described in poetry why do you think that is I think it's because the information bottleneck of language is pretty steep, and yet you're able to reconstruct on the other person's brain more effectively without being literal. If you can express the sentiment such that in their brain they can reconstruct the actual true underlying meaning and beauty of the thing that you're trying to get across, the generator function in their brain is more powerful than what language can express and so the the mechanism poetry is really just to uh feed or seed that generator function so being literal sometimes is a suboptimal compression for the for the thing you're trying to convey and it's actually in the process of the user going through that generation that they understand what you mean. Like that's the beautiful part. It's also like when you look at a beautiful painting, like it's not the pixels of the painting that are beautiful. It's the thought process that occurs when you see that, the experience of that, that actually is the thing that matters. Yeah, it's resonating with some deep thing within you that the artist also experienced and was able to convey that through the pixels. And that's actually going to be relevant for full-on telepathy. It's like if you just read the poetry literally, that doesn't say much of anything interesting. It requires a human to interpret it. of anything interesting. It requires a human to interpret it. So it's the combination of the human mind and all the experiences the human being has within the context of the collective intelligence of the human species that makes that poem make sense. And they load that in. And so in that same way, the signal that carries from human to human, meaning may seem trivial, but may actually carry a lot of power because of the complexity of the human mind and the receiving end. Yeah, that's interesting. Poetry still doesn't... Who was it? I think Yoshibako, first of all, I said something about all the people that think we've achieved AGI explain why humans like music. Oh, yeah. And until the AGI likes music, you haven't achieved AGI or something. Do you not think that's like some next token entropy surprise kind of thing going on? I don't know. I don't know either. I listen to a lot of classical music and also read a lot of poetry. And yeah, I do wonder if like there is some element of the next token surprise factor going on there. Yeah, maybe. Because I mean, like a lot of the tricks in both poetry and music are like basically you have some repeated structure and then you do like a twist. Yeah. Like it's like, okay, verse or like clause one, two, three is one thing and then clause four is like, okay, now we're onto the next theme. Yeah. And they kind of play with exactly when the surprise happens and the expectation of the user. And that's even true like through history as musicians evolve music, they take like some known structure that people are familiar with and they just tweak it a little bit. Like they tweak it and add a surprising element this is especially true in like in classical music heritage but that's what i'm wondering like is it all just entropy like the the so great so breaking structure or breaking symmetry is something that humans seem to like maybe as simple as that yeah and i mean great artists copy uh and they also you know knowing which rules to break is the important part. And fundamentally, it must be about the listener of the piece. Like, which rule is the right one to break? It's about the audience member perceiving that as interesting. What do you think is the meaning of human existence? There's a TV show I really like called The West Wing. And in The West Wing, there's a character, he's the president of the United States, who's having a discussion about the Bible with one of their colleagues. And the colleague says something about, you know, the Bible says X, Y, and Z. And the president says, yeah, but it also says A, B, C. And the person says, well, do you believe the Bible to be literally true? And the president says, yes, but I also think that neither of us are smart enough to understand it i think to like the analogy here for the meaning of life is that largely we don't know the right question to ask and so i'm i think i'm very aligned with uh sort of the hitchhiker's guide the galaxy version of this question, which is basically, if we can ask the right questions, it's much more likely we find the meaning of human existence. And so in the short term, as a heuristic in the sort of search policy space, we should try to increase the diversity of people asking such questions, or generally of consciousness and conscious beings asking such questions. So again, I think i'll take the i don't know card here but say i do think there are meaningful things we can do that improve the likelihood of answering that question it's interesting how much value you assign to the task of asking the right questions that's the that's the main thing is not the answers is answers, it's the questions. This point, by the way, is driven home in a very painful way when you try to communicate with someone who cannot speak. Because a lot of the time, the last thing to go is they have the ability to somehow wiggle a lip or move something that allows them to say yes or no. And in that situation, it's very obvious that what matters is are you asking them the right question to be able to say yes or no to? Wow, that's powerful. Well, Bliss, thank you for everything you do. And thank you for being you. And thank you for talking today. Thank you. Thanks for listening to this conversation with Bliss Chapman. And now, dear friends, here's Nolan Arbaugh, the first human being to have a Neuralink device implanted in his brain. You had a diving accident in 2016 that left you paralyzed with no feeling from the shoulders down. How did that accident change your life? There's sort of a freak thing that happened. Imagine you're running into the ocean. happened. Imagine you're running into the ocean. Although this is a lake, but you're running into the ocean and you get to about waist high and then you kind of like dive in, take the rest of the plunge under the wave or something. That's what I did. And then I just never came back up. Not sure what happened. I did it running into the water with a couple of guys. And so my idea of what happened is really just that I took like a stray fist, elbow, knee, foot, something to the side of my head. The left side of my head was sore for about a month afterwards. So I must have taken a pretty big knock. And then they both came up and I didn't. And so I was face down in the water for a while. I was conscious. And then eventually just realized I couldn't hold my breath any longer. And I keep saying, took a big drink. People, I don't know if they like that. I say that it seems like I'm making light of it all, but, um, this is kind of how I am. And I don't know, like I'm a very relaxed sort of stress-free person. I rolled with the punches for a lot of this. I kind of took it in stride. It's like, all right, well, what can I do next? How can I improve my life even a little bit on a day-to-day basis at first, just trying to find some way to heal as much of my body as possible, to try to get healed, to try to get off a ventilator, learn as much as I could so I could somehow survive once I left the hospital. And then thank God I had my family around me. If I didn't have my parents, my siblings, then I would have never made it this far. They've done so much for me, more than I can ever thank them for, honestly. And a lot of people don't have that. A lot of people in my situation, their families either aren't capable of providing for them or honestly, just don't want to. And so they get placed somewhere and in some sort of home. just don't want to. And so they get placed somewhere and, you know, in some sort of home. So thankfully I had my family. I have a great group of friends, a great group of buddies from college who have all rallied around me and we're all still incredibly close. People always say, you know, if you're lucky, you'll end up with one or two friends from high school that you keep throughout your life. I have about 10, 10 or 12 from high school that have all stuck around and we still get together, all of us, twice a year. We call it the spring series and the fall series. This last one we all did, we dressed up like X-Men. So I did a Professor Xavier and it was freaking awesome. It was so good. So yeah, I have such a great support system around me. And so being a quadriplegic isn't that bad. I get weighted on all the time. People bring me food and drinks and I get to sit around and watch as much TV and movies and anime as I want. I get to read as much as I want. I mean, it's great. It's beautiful to see that you see the silver lining in all of this. But just going back, do you remember the moment when you first realized you were paralyzed from the neck down? Yep. I was face down in the water. Right when I, whatever, something hit my head, I tried to get up and I realized I couldn't move and it just sort of clicked. I'm like, all right, I'm paralyzed. Can't move. What do I do? If I can't get up, I can't flip over, can't do anything, then I'm going to drown eventually. And I knew I couldn't hold my breath forever. So I just held my breath and thought about it for maybe 10, 15 seconds. I've heard from other people that, like on Lickers, I guess, the two girls that pulled me out of the water were two of my best friends. They are lifeguards. And one of them said that it looked like my body was sort of shaking in the water, like I was trying to flip over and stuff. But I knew, I knew immediately. And I just kind of, I realized that that's what my situation was from here on out. Maybe if I got to the hospital, they'd be able to do something. When I was in the hospital, right before surgery, I was trying to calm one of my friends down. I had brought her with me from college to camp, and she was just bawling over me. And I was like, hey, it's going to be fine. Don't worry. I was cracking some jokes to try to lighten the mood. Um, the nurse had called my mom and I was like, don't tell my mom. Um, she's just going to be stressed out. Call her after I'm out of surgery. Cause at least she'll have some answers then like whether I live or not really. Um, and I didn't want her to be stressed through the whole thing, but I knew. And then when I first woke up after surgery, I was super drugged up. They had me on fentanyl like three ways, which was awesome. I don't recommend it, but I saw some crazy stuff on that fentanyl and it was still the best I've ever felt on drugs medication sorry on medication and I remember the first time I saw my mom in the hospital I was just bawling I had like ventilator in like I couldn't talk or anything And I just started crying because it was more like seeing her. Not that, I mean, the whole situation obviously was pretty rough, but it was just like seeing her face for the first time was pretty hard. But yeah, I never had like a moment of, you know, man, I'm paralyzed. This sucks sucks i don't want to like be around anymore it was always just i hate that i have to do this but like sitting here and wallowing isn't going to help so immediate acceptance yeah yeah has there been low points along the way? Yeah. Yeah, sure. Um, I mean, there are days when I don't really feel like doing anything, not so much anymore. Like not for the last couple of years. I don't really feel that way. I've, um, more so just wanted to try to do anything possible to make my life better at this point. But at the beginning, there were some ups and downs. There were some really hard things to adjust to. First off, just like the first couple months, the amount of pain I was in was really, really hard. I mean, I remember screaming at the top of my lungs in the hospital because I thought my legs were on fire. And obviously, I can't feel anything, but it's all nerve pain. And so that was a really hard night. I asked them to give me as much pain meds as possible. They're like, you've had as much as you can have. So just kind of deal with it, go to a happy place sort of thing. So that was a pretty low point. And then every now and again, it's hard realizing things that I wanted to do in my life that I won't be able to do anymore. You know, I always wanted to be a husband and father, and I just don't think that I could do it now as a quadriplegic. Maybe it's possible, but I'm not sure I would ever put, you know, someone I love through that, like having to take care of me and stuff. someone I love through that, like having to take care of me and stuff. Not being able to go out and play sports. I was a huge athlete growing up. So that was pretty hard. Little things too, when I realized I can't do them anymore. There's something really special about being able to hold a book and smell a book. The feel, the texture, the smell, as you turn the pages, I just love it. I can't do it anymore. It's little things like that. The two-year mark was pretty rough. Two years is when they say you will get back basically as much as you're ever going to get back as far as movement and sensation goes. And so for the first two years, that was the only thing on my mind was like, try as much as I can to move my fingers, my hands, my feet, everything possible to try to get sensation and movement two-year mark hit. So June 30th, 2018, I was really sad that that's kind of where I was. And then just randomly here and there, but I was never depressed for long periods of time. Just it never seemed worthwhile to me. What gave you strength? My faith. My faith in God was a big one. My understanding that it was all for a purpose, and even if that purpose wasn't anything involving Neuralink, even if that purpose was, you know, there's a story in the Bible about Job, and I think it's a really, really popular story about how Job, you know, has all of these terrible things happen to him and he praises God throughout the whole situation. I thought, and I think a lot of people think for most of their lives that they are Job, that they're the ones going through something terrible and they just need to, you know, praise God through the whole thing and everything will work out. At some point after my accident, I realized that I might not be Job, that I might be, you know, one of his children that gets killed or kidnapped or taken from him. And so it's about terrible things that happen to those around you who you love. So maybe, you know, in this case, my mom would be Job and she has to get through something extraordinarily hard. And I just need to try and make it as best as possible for her because she's the one that's really going through this massive trial. And that gave me a lot of strength. And obviously my family, my family and my friends, they give me all the strength that I need on a day-to-day basis. So it makes things a lot easier having that great support system around me. From everything I've seen of you online, your streams, and the way you are today, From everything I've seen of you online, your streams, and the way you are today, I really admire, let's say, your unwavering positive outlook on life. Has that always been this way? Yeah, yeah. I mean, I've just always thought I could do anything I ever wanted to do. There was never anything too big. Like, whatever I set my mind to, I felt like I could do it. I didn't want to do a lot. I wanted to travel around and be a gypsy and go work odd jobs. I had this dream of traveling around Europe and being, I don't know, a shepherd in Wales or Ireland, and then going and being a fisherman in Italy, doing all of these things for like a year. Like it's such like cliche things, but I just thought it would be so much fun to go and travel and do different things. And so I've always just seen the best in people around me too. And I've always tried to be good to people. And growing up with my mom too, she's like the most positive, energetic person in the world. And we're all just people, people. I just get along great with people. I really enjoy meeting new people. And so I just wanted to do everything. This is kind of just how I've been. It's just great to see that cynicism didn't take over, given everything you've been through. Yeah. Was that like a deliberate choice you made that you're not going to let this keep you down? Yeah, a bit. Also, like I just, just kind of how I am. I just, like I said, I roll with the punches with everything. I always used to tell people, like, I don't stress about things much. Um, and whenever I'd see people getting stressed or just say, you know, like, it's not hard, just don't stress about it. And like, that's all you need to do. Uh, and they're like, that's not how that works. Like it works for me. I just don't stress about it. And like, that's all you need to do. And they're like, that's not how that works. Like it works for me. Like just don't stress and everything will be fine. Like everything will work out. Obviously not everything always goes well. And it's not like it all works out for the best all the time, but I just don't think stress has had any place in my life since I was a kid. What was the experience like of you being selected to be the first human being to have a Neuralink device implanted in your brain? Were you scared? Excited? No, no, it was cool. Like I was, I was never afraid of it. I had to think through a lot. Should I, should I do this? Um, like be the first person I could wait until number two or three and get a better version of the Neuralink. Like the first one might not work. Maybe, um, it's actually going to kind of suck. Um, it's going to be the worst version ever in a person. So why would I do the first one? Like I've already kind of been selected. I could just tell them, you know, like, okay, find someone else. And then I'll do number two or three. Like, I'm sure they would let me, they're looking for a few people anyways. But ultimately I was like, I don't know, there's something about being the first one to do something. It's pretty cool. I always thought that if I had the chance that I would like to do something for the first time, um, this seemed like a pretty good opportunity. Um, and I was, I was never scared. I think my like faith had a huge, uh, part in that. I always felt like God was preparing me for something. Um, I almost wish it wasn't this because I had many conversations with God about not wanting to do any of this as a quadriplegic. I told him, I'll go out and talk to people. I'll go out and travel the world and talk to stadiums, thousands of people, give my testimony. I'll do all of it, but heal me first. Don't make me do all of this in a chair. That sucks. And I guess he won that argument. I didn't really have much of a choice. I always felt like there was something going on. And to see how, I guess, easily I made it through the interview process and how quickly everything happened, how the star sort of aligned with all of this. It just told me, as the surgery was getting closer, it just told me that it was all meant to happen. It was all meant to be. And so I shouldn't be afraid of anything that's to come. And so I wasn't, I kept telling myself, like, you know, you say that now, but as soon as the surgery comes, you're probably going to be freaking out. Like you're about to have brain surgery and brain surgery is a big deal for a lot of people, but it's an even bigger deal for me. Like it's all I have left. The amount of times I've been like, thank you, God, that you didn't take my brain and my personality and my ability to think, my love of learning, my character, everything. Thank you so much. As long as you left me that, then I think I can get by. And I was about to let people go root around in there, like, hey, we're going to go put some stuff in your brain. Hopefully it works out. And so it was something that gave me pause. But like I said, how smoothly everything went, I never expected for a second that anything would go wrong. Plus, the more people I met on the borrowers side and on the knurling side, they're just the most impressive people in the world. I can't speak enough to how much I trust these people with my life and how impressed I am with all of them. And to see the excitement on their faces, to walk into a room and roll into a room and see all of these people looking at me like we're just we're so excited like we've been working so hard on this and it's finally happening it's super infectious and um it just makes me want to do it even more and to help them achieve their dreams like i don't know it's so it's so rewarding and I'm so happy for all of them. Honestly. What was the day of surgery? Like what's when'd you wake up? What'd you feel? Yeah. Minute by minute. Yeah. Were you freaking out? No, I thought I was going to, but the surgery approach the night before the morning of, I was just excited. Like, I was like, let's make this happen. I think I said that something like that to Elon on the phone. Uh, beforehand we were like, um, FaceTiming and I was like, let's rock and roll. And he's like, let's do it. Uh, I don't know. I just, I wasn't scared. So we woke up, I think we had to be at the hospital at like five 30 AM. I think surgery was at like 7 AM. So we woke up pretty early. I'm not sure much of us slept that night. Got to the hospital at 5.30, went through all the pre-op stuff. Everyone was super nice. Elon was supposed to be there in the morning, but something went wrong with his plane, so we ended up FaceTiming. That was cool. Had one of the greatest one-liners of my life. After that phone call, hung up with him. There were like 20 people around me. And I was like, I just hope he wasn't too starstruck talking to me. Nice. Yeah, it was good. Well done. Yeah, yeah. Did you write that ahead of time? No, no, it just came to me. I was like, this seems right. Went into surgery. i asked if i could pray right beforehand so i like prayed over the room i asked god if you like be with my mom in case anything happened to me and uh just like calm her nerves out there uh woke up and played a bit of a prank on my mom i don't know if you've heard about it. Yeah, I read about it. Yeah, she was not happy. Can you take me through the prank? Yeah, this is something- Do you regret doing that now? No, no, not one bit. It was something I had talked about ahead of time with my buddy, Bane. I was like, I would really like to play a prank on my mom. Very specifically, my mom. Very specifically my mom. She's very gullible. I think she had knee surgery once even. And after she came out of knee surgery, she was super groggy. She's like, I can't feel my legs. And my dad looked at her. He was like, you don't have any legs. Like they had to amputate both your legs and we just do very mean things to her all the time yes um i'm so surprised that she still loves us um but right after surgery i was really worried that i was going to be too like groggy like not all there i've had anesthesia once before and it messed me up. I could not function for a while afterwards. And I said a lot of things that I was really worried that I was going to start, I don't know, dropping some bombs and I wouldn't even know, I wouldn't remember. So I was like, please God, don't let that happen. And please let me be there enough to do this to my mom. And so she walked in after surgery. It was like the first time they had been able to see me after surgery. And she just looked at me. She said, hi, how are you? How are you doing? How do you feel? And I looked at her and this very, I think the anesthesia helped, very groggy, sort of confused look on my face. It's like, who are you? And she just started looking around the room at the surgeons or the doctors, like, what did you do to my son? You need to fix this right now. Tears started streaming. I saw how much she was freaking out i was like i can't let this go on and so i was like mom mom i'm fine like uh it's all right and uh still she was not happy about it she uh still says she's gonna get me back someday but i mean i don't know i don't know what that's gonna look like it's a lifelong battle yeah yeah but it was good in some sense it was a demonstration that you still got that's that's all i wanted to be humor that's all i wanted it to be and i knew that doing something super mean to her like that would show her yeah to show that you're still there that you love her yeah exactly exactly it's a dark way to do it, but I love it. Yeah. What was the first time you were able to feel that you can use the Neuralink device to affect the world around you? Yeah, the first little taste I got of it was actually not too long after surgery. not too long after surgery. Some of the Neuralink team had brought in like a little iPad, a little tablet screen, and they had put up eight different channels that were recording some of my neuron spikes. And they put it in front of me like, this is like real time, your brain firing. It's like, that's super cool. My first thought was, I mean, if they're firing now, let's see if I can affect them in some way. So I started trying to like wiggle my fingers and I just started like scanning through the channels. And one of the things I was doing was like moving my index finger up and down. And I just saw this yellow spike on like top row, like third box over or something. I saw this yellow spike every time I did it. And I just saw this yellow spike on like top row, like third box over or something. I saw this yellow spike every time I did it. And I was like, oh, that's cool. And everyone around me was just like, what are you seeing? I was like, look, look at this one. Look at like this top row, third box over this yellow spike. Like that's me right there, there, there. And everyone was freaking out. They started like clapping like clapping i was like that's super unnecessary like this is what's supposed to happen right like so you're imagining yourself moving each individual finger one at a time yeah and then seeing like you can notice something and then when you did the index finger you're like oh yeah i was i was wiggling kind of all of my fingers to see if anything would happen there was a lot of other things going on, but that big yellow spike was the one that stood out to me. Like, I'm sure that if I would have stared at it long enough, I could have mapped out maybe 100 different things. But the big yellow spike was the one that I noticed. Maybe you could speak to what it's like to sort of wiggle your fingers, to imagine that the mental, the cognitive effort required to sort of wiggle your index finger, for example. How easy is that to do? told me to try and move my body as much as possible, even if you can't, just keep trying, because that's going to create new neural pathways or pathways in my spinal cord to reconnect these things, to hopefully regain some movement someday. That's fascinating. Yeah, I know, it's bizarre, but I- So that's part of the recovery process is to keep trying to move your body. Yep. And the nervous system does its thing. It starts reconnecting. It'll start reconnecting for some people. Some people, it never works. Some people, they'll do it. Like for me, I got some bicep control back, and that's about it. bicep control back. And that's about it. I can, if I try enough, I can wiggle some of my fingers, not like on command. It's more like if I try to move, say my right pinky and I just keep trying to move it after a few seconds, it'll wiggle. So I know there's stuff there. I know that happens with a few different of my fingers and stuff. But yeah, that's what they tell you to do. One of the people at the time when I was in the hospital came in and told me for one guy who had recovered most of his control, what he thought about every day was actually walking, like the act of walking just over and over again. So I tried that for years. I tried just imagining walking, which is hard. It's hard to imagine all of the steps that go into, well, taking a step, like all of the things that have to move, like all of the activations that have to happen along your leg in order for one step to occur. But you're not just imagining, you're like doing it, right? I'm trying, yeah. So it's like, it's imagining over again what I had to do to take a step. Because it's not something any of us think about. We just, you want to walk and you take a step because it's not something any of us think about we just you want to walk and you take a step um you don't think about all of the different things that are going on in your body so i had to recreate that in my head as much as i could and then i practice it over and over and over so it's not like a third person perspective as a first person perspective you're like it's not like you're imagining yourself walking you're like literally doing this everything all the same stuff as if you're walking yeah which which was hard it was hard at the beginning like frustrating hard or like actually cognitively hard like uh it was both um there's a there's a scene in one of the kill bill movies, actually, oddly enough where she is like paralyzed. I don't know from like a drug that was in her system. And then she like find some way to get into the back of a truck or something. And she stares at her toe and she says, move, like move your big toe. And after, you know, a few seconds on screen, she does it. And she did that with every one of her body parts until she can move again. I did that for years, just stared at my body and said, move your index finger, move your big toe. Sometimes vocalizing it out loud, sometimes just thinking it. I tried every different way to do this to try to get some movement back. And it's hard because it actually is taxing, physically taxing on my body, which is something I would have never expected because it's not like I'm moving, but it feels like there's a buildup of... I don't know, the only way I can describe it is there are like signals that aren't getting through from my brain down because there's that gap in my spinal cord. So brain down and then from my hand back up to the brain. And so it feels like those signals get stuck in whatever body part that I'm trying to move, and they just build up and build up and build up until they burst. And then once they burst, I get this really weird sensation of everything dissipating back out to level, and then I do it again. It's also just like a fatigue thing, like a muscle fatigue, but without actually moving your muscles. It's very, very bizarre. And then if you try to stare at a body part or think about a body part and move for two, three, four, sometimes eight hours, it's very taxing on your mind. It takes a lot of focus. It was a lot easier at the beginning because I wasn't able to control a TV in my room or anything. I wasn't able to control any of my environment. So for the first few years, a lot of what I was doing was staring at walls. And so obviously I did a lot of thinking and I tried to move a lot just over and over and over again. Do you never give up sort of hope there? No. Training hard essentially. Yep. And I still do it. I do it like subconsciously. And I think that helped a lot with things with Neuralink, honestly. It's something that I talked about the other day at the all hands that I did at Neuralink's Austin facility. Welcome to Austin, by the way. Yeah. Hey, thanks, man. I went to school. Nice hat. Hey, thanks. Thanks, man. The gigafactory was super cool. I went to school at Texas A&M, so I've been around before. So you should be saying, welcome to me. Welcome to TexasM, so I've been around before. So you should be saying, welcome to me. Yeah. Welcome to Texas, Lex. Yeah. I get you. But yeah, I was talking about how a lot of what they've had me do, especially at the beginning, well, I still do it now, is body mapping. So there will be a visualization of a hand or an arm on the screen and i have to do that motion and that's how they sort of train um the algorithm to like understand what i'm trying to do and so it made things very uh seamless um for me i think that's really really cool so it's it's amazing to know because i i've learned a lot about the body mapping procedure. Yeah. With the interface and everything like that, it's cool to know that you've been a century like training to be like world class at that task. Yeah. Yeah. I don't know if other quadriplegics like other paralyzed people give up. I hope they don't. I hope they keep trying because I've heard other paralyzed people say, don't ever stop. They tell you two years, but you just never know. The human body's capable of amazing things. So I've heard other people say, don't give up. I think one girl had other people say don't give up. I think one girl had spoken to me through some family members and said that she had been paralyzed for 18 years and she'd been trying to wiggle her index finger for all that time and she finally got it back 18 years later. So I know that it's possible and I'll never give up doing it. I do it when I'm lying down watching TV. I'll never give up doing it. I just, I do it when I'm lying down, like watching TV, I'll find myself doing it kind of just almost like on its own. It's just something I've gotten so used to doing that. I don't know. I, I don't think I'll ever stop. That's really awesome to hear. Cause I think it's one of those things that can really pay off in the, in the long term. Cause like that is training. You're not visibly seeing the results of that training at the moment, but like, there's a like Olympic level nervous system getting yeah getting ready for something honestly was like something that i think neurolink gave me that um i can't i can't thank them enough for like i can't show my appreciation for it enough was being able to visually see that what I'm doing is actually having some effect. It's a huge part of the reason why I know now that I'm going to keep doing it forever. Because before Neuralink, I was doing it every day and I was just assuming that things were happening. Like, it's not like I knew I wasn't getting back any mobility or sensation or anything. So I could have been running up against a brick wall for all I knew. And with Neuralink, I get to see like all the signals happening real time. And I get to see that what I'm doing can actually be mapped. When we started doing click calibrations and stuff, when I go to click my index finger for a left click, that it actually recognizes that. It changed how I think about what's possible with retraining my body to move. And so, yeah, I'll never give up now. And also just the signal that there's still a powerhouse of a brain there that's like, Exactly. And as the technology develops, that brain is, I mean, that's the most important thing about the human body is the brain. And it can do a lot of the control. So what did it feel like when you first could wiggle the index finger and saw the environment respond like that little yeah think wherever we're just being way too dramatic according to you yeah it was very cool i mean it was cool but it i keep telling this to people it made sense to me like it made sense that you know like there are signals still happening in my brain and that as long as you had something near it that could measure those, that could record those, then you should be able to visualize it in some way, see it happen. And so that was not very surprising to me. I was like, oh, cool. We found one. We found something that works. It was cool to see that their technology worked um and that everything that they'd worked so hard for was like going to pay off um but i hadn't like moved a cursor or anything at that point i had like interacted with a computer or anything at that point um so it it just made sense it was cool like i i didn't really know much about bci at that point either so i didn't know like what sort of step this was actually making um like i didn't know if this was like a huge deal or if this was just like okay this is you know it's cool that we got this far but we're actually hoping for something like much better down the road it's like okay i just thought that they knew that it turned on. So I was like, cool. Like this is, this is cool. What did you like read up on the specs of the hardware you get installed? Like the number of threads. Yeah. Yeah. I knew all of that, but it's all like, it's all Greek to me. I was like, okay, threads, 64 threads, 16 electrodes, 1024 channels. Okay. Like that, that, that 1024 channels. Okay. Like that math checks out. Sounds right. Yeah. When was the first time you were able to move a mouse cursor? I know it must have been within the first maybe week, a week or two weeks that I was able to like first move the cursor. And again, like it kind of made sense to me. Like it, it didn't seem like that big of a deal. Like it, it, it was like, okay, well, how do I explain this? When everyone around you starts clapping for something that you've done, it's, it's easy to say, okay, like I did something cool. Like that cool. That was impressive in some way. What exactly that meant, what it was, hadn't really set in for me. So again, I knew that me trying to move a body part. And then that being mapped in some sort of machine learning algorithm to be able to identify my brain signals and then take that and give me cursor control, that all kind of made sense to me. I don't know all the ins and outs of it, but I was like, there are still signals in my brain firing. They just can't get through because there's a gap in my spinal cord. And so they can't get all the way down and back up, but they're still there. So when I moved the cursor for the first time, I was like, that's cool. But I expected that that should happen. It made sense to me. When I moved the cursor for the first time with just my mind without like physically trying to move. So I guess I can get into that just a little bit. Like the difference between attempted movement and imagined movement. Yeah, that's a fascinating difference. Yeah. From one to the other. Yeah, yeah, yeah. So like attempted movement is me physically trying to attempt to move, say, my hand. I try to attempt to move my hand to the right, to the left, forward and back. And that's all attempted. Attempt to, you know, like lift my finger up and down, attempt to kick or something. I'm physically trying to do all of those things, even if you can't see it. This would be like me attempting to shrug my shoulders or something. That's all attempted movement. That's what I was doing for the first couple of weeks when they were going to give me cursor control. When I was doing body mapping, it was attempt to do this, attempt to do that. When Nir was telling me to imagine doing it, it kind of made sense to me, but it's not something that people practice. If you started school as a child and they said, okay, write your name with this pencil. And so you do that. Like, okay, now imagine writing your name with that pencil. Kids would think, like, I guess, like that kind of makes sense. And they would do it. But that's not something we're taught. It's all like how to do things physically. We think about like thought experiments and things, but that's not like a physical action of doing things. It's more like what you would do in certain situations. So imagine movement, it never really connected with me. Like I guess you could maybe describe it as like a professional athlete, like swinging a baseball bat or swinging like a golf club, like imagine what you're supposed to do, but then you go right to that and physically do it. Like you, then you get a bat in your hand and then you do what you've been imagining. And so I don't have that like connection. So telling me to imagine something versus attempting it, it just, there wasn't a lot that I could do there mentally. I just kind of had to accept what was going on and try. But the attempted moving thing, it all made sense to me. Like, if I try to move, then there's a signal being sent in my brain. And as long as they can pick that up, then they should be able to map it to what I'm trying to do. And so when I first moved the cursor like that, it was just like, yes, this should happen. Like, I'm not surprised by that. But can you clarify, is there supposed to be a difference between imagined movement and attempted movement? Yeah, just that in imagined movement, you're not attempting to move at all. So it's- You're like visualizing yourself doing, and then theoretically, is that supposed to be a different part of the brain that lights up in those two different situations? Yeah, not necessarily. I think all these signals can still be represented in motor cortex. But the difference, I think, has to do with the naturalness of imagining something worse. Got it. Attempting to sort of fatigue of that over time. And by the way, on the mic is Bliss. over time and by the way on the mic is bliss uh so like this is just different ways to prompt you to kind of get to the thing that you're around yeah yeah attempted movement does sound like the right thing yeah try yeah i mean it makes sense to me because imagine for me i'll be i would start visualizing like in my mind visualizing attempted i would actually start trying to like yeah there's a i mean i you know i did like combat sports my whole life like wrestling when i'm imagining a move see i'm like moving my muscle exactly like there's a there is a bit of an activation almost versus like visualizing yourself like a picture doing it yeah it's something that i feel like naturally anyone would do if you try to tell someone to imagine doing something they might close their eyes and then start physically doing it um but it's just didn't click yeah it's it's hard it was very hard at the beginning but attempted worked attempted worked it worked just like it should work like work like a charm um i remember there was like one tuesday we were messing around and i think i forget what swear word you used but there's a swear word that came out of your mouth when you figured out you could just do the direct cursor control yeah that's it it blew my mind like no pun intended blew my mind when I first moved the cursor just with my thoughts and not attempting to move. It's something that I found over the couple, like the model gets better, then it gets easier for me to, like, I don't have to attempt as much to move it. And part of that is something that I'd even talked with them about when I was watching the signals of my brain one day. I was watching when I attempted to move to the right and I watched the screen as I saw the spikes. I was seeing the spike, the signals being sent before I was actually attempting to move. I imagine just because when you go to say move your hand or any body part, that signal gets sent before you're actually moving, has to make it all the way down and back up before you actually do any sort of movement. So there's a delay there. And I noticed that there was something going on in my brain before I was actually attempting to move, that my brain was anticipating what I wanted to do. And that all started sort of, I don't know, percolating in my brain. It was just sort of there, always in the back. That's so weird that it could do that. It kind of makes sense, but I wonder what that means as far as using the Neuralink. And then as I was playing around with the attempted movement and playing around with the cursor, and I saw that as the cursor control got better, that it was anticipating my movements and what I wanted it to do, like cursor movements, what I wanted to do a bit better and a bit better. And then one day I just randomly, as I was playing WebGrid, I looked at a target before I had started attempting to move. I was just trying to get over, train my eyes to start looking ahead like okay this is the target i'm on but if i look over here to this target i know i can like maybe be a bit quicker getting there and i looked over and the cursor just shot over it it was wild like i had to take a step back i was like this should not be happening all day i was smiling. I was so giddy. I was like, guys, do you know that this works? I can just think it and it happens, which they'd all been saying this entire time. I can't believe you're doing all this with your mind. I'm like, yeah, but is it really with my mind? I'm attempting to move and it's just picking that up. So it doesn't feel like it's with my mind. But when I moved it for the first time like that, it was, my mind. But when I moved it for the first time like that, it was, oh man, it made me think that this technology, that what I'm doing is actually way, way more impressive than I ever thought. It was way cooler than I ever thought. And it just opened up a whole new world of possibilities of what could possibly happen with this technology and what i might be able to be capable of with it because you had felt for the first time like this was digital telepathy like you're controlling a digital device with your mind yeah i mean this is that's a real moment of discovery that's really cool like you've discovered something i've seen like scientists talk about like a big aha moment, like Nobel Prize winning. They'll have this like, holy crap. That's what it felt like. I felt like I had discovered something, but for me, maybe not necessarily for the world at large or this field at large. It just felt like an aha moment for me. Like, oh, this works. Obviously, it works. And so that's what I do all the time now. I kind of intermix the attempted movement and imagined movement. I do it all together because I've found that there is some interplay with it that maximizes efficiency with the cursor. So it's not all one or the other. It's not all just, I only use attempted or I only use imagined movements. It's more I use them in parallel. And I can do one or the other. I can just completely think about whatever I'm doing, but I don't know. I like to play around with it. I also like to just experiment with these things. Every now and again, I'll get this idea in my head like, hmm, I wonder if this works and I'll just start doing it. And then afterwards I'll tell them, by the way, I wasn't doing that like you guys wanted me to. I thought of something and I wanted to try it. And so I did. It seems like it works. So maybe we should like explore that a little bit. So I think that discovery is not just for you. At least from my perspective, that's a discovery for everyone else who ever uses a neural link that this is possible. Like, I don't think that's an obvious thing that this is even possible. It's like I was saying to Bliss earlier. It's like the four-minute mile. that this is possible. I don't think it's an obvious thing that this is even possible. It's like I was saying to Bliss earlier, it's like the four-minute mile. People thought it was impossible to run a mile in four minutes. And once the first person did it, then everyone just started doing it. So just to show that it's possible, that paves the way to anyone can now do it. That's the thing that's actually possible. You don't need to do the attempted movement. You can just go direct. That's crazy. It is crazy. It is crazy. For people who don't know, can you explain how the Link app works? You have an amazing stream on the topic. Your first stream, I think, on X, describing the app. Can you just describe how it works? Yeah. the app. Can you just describe how it works? Yeah. So it's just an app that Neuralink created to help me interact with the computer. So on the Link app, there are a few different settings and different modes and things I can do on it. So there's like the body mapping we kind of touched on. There's a calibration. Calibration is how I actually get cursor control. So calibrating what's going on in my brain to translate that into cursor control. So it will pop out models. What they use, I think, is time. So it would be five minutes in calibration would give me so good of a model. And then if I'm in it for 10 minutes and 15 minutes, the models will progressively get better. And so the longer I'm in it generally, the better the models will get. That's really cool because you often refer to the models yeah the model is the thing that's constructed once you go through the calibration stuff yeah and then you also talked about sometimes sometimes you'll play like a really difficult game like snake just to see how good the model is yeah yeah so snake is kind of like my litmus test for models if i can control snake decently well, then I know I have a pretty good model. So yeah, the Link app has all of those. It has WebGrid in it now. It's also how I connect to the computer just in general. So they've given me a lot of voice controls with it at this point. So I can say connect or implant disconnect. And as long as I have that charger handy, then I can connect to it. So the charger is also how I connect to the Link app to connect to the computer. I have to have the implant charger over my head when I want to connect to have it wake up because the implant's in hibernation mode, like always when I'm not using it. I think there's a setting to like wake it up every, you know, so long. So we could set it to half an hour or five hours or something if I just want it to wake up periodically. So yeah, I'll like connect to the link app and then go through all sorts of things, calibration for the day, maybe mapping i have like i made them give me like a little homework tab um because i am very forgetful and i forget to do things a lot um so i have like a lot of data collection things uh that they want me to do is the body mapping part of the data collection or is that also part of the yeah it is it's something that they want me to do daily, which I've been slacking on because I've been doing so much media and traveling and so much. You've gotten super famous. Yeah, I've been a terrible first candidate for how much I've been slacking on my homework. But yeah, it's just something that they want me to do every day to track how well well the neural link is performing over time and have something to give i imagine to give to the fda to you know create all sorts of fancy charts and stuff and show like hey this is what the neural link this is how it's performing you know day one versus day 90 versus day 180 and things like that what's the calibration step like is it is it like move left move right it's a bubble game so there will be like yellow bubbles that pop up on the screen at first it is open loop so open loop this is something that i still don't fully understand the open loop and closed loop thing i mean blitz talked for a long time about the difference between the two from the on the technical side. Okay. So it'd be great to hear your side of the story. Open loop is basically, I have no control over the cursor. The cursor will be moving on its own across the screen, and I am following by intention the cursor to different bubbles. And then the algorithm is training off of what the signals it's getting are as I'm doing this. There are a couple of different ways that they've done it. They call it center out target. So there will be a bubble in the middle and then eight bubbles around that. And the cursor will go from the middle to one side. So say middle to left, back to middle, to up up to middle, like upright, and they'll do that all the way around the circle. And I will follow that cursor the whole time, and then it will train off of my intentions what it is expecting my intentions to be throughout the whole process. Can you actually speak to when you say follow? Yes. You don't mean with your eyes, you mean with your intentions. Yeah. So generally for calibration, I'm doing attempted movements because I think it works better. I think the better models as I progress through calibration make it easier to use imagined movements. Wait, wait, wait. So calibrated on attempted movement will create a model that makes it really effective for you to then use the force. Yes. I've tried doing calibration with imagined movement, and it just doesn't work as well um for some reason so that was the center out targets there's also one where you know a random target will pop up on the screen and it's the same i just like move i follow along um with wherever the cursor is to that target all across the screen um i've tried those with imagined movement and for some reason, the models just don't give as high level as quality when we get into closed loop. I haven't played around with it a ton. So maybe the different ways that we're doing calibration now might make it a bit better. But what I've found is there will be a point in calibration where I can use Imagine Movement. Before that point, it doesn't really work. So if I do calibration for 45 minutes, the first 15 minutes, I can't use Imagine Movement. It just doesn't work for some reason. 15 minutes, I can't use imagined movement. It just doesn't work for some reason. And after a certain point, I can just sort of feel it. I can tell it moves different. That's the best way I can describe it. It's almost as if it is anticipating what I am going to do again before I go to do it. And so using attempted movement for 15 minutes, at some point I can kind of tell when I move my eyes to the next target that the cursor is starting to pick up. It's starting to understand. It's learning what I'm going to do. So first of all, it's really cool that you are a true pioneer in all of this. You're exploring how to do every aspect of this most effectively. And I imagine so many lessons learned from this. So thank you for being a pioneer in all these kinds of different super technical ways. And it's also cool to hear that there's a different feeling to the experience when it's calibrated in different ways. I imagine your brain is doing something different, and that's why there's a different feeling to it. And then trying to find the words and the measurements to those feelings would be also interesting. But at the end of the day, you can also measure your actual performance, whether it's snake or web grid you can see like what actually works well and you're saying for the open loop calibration the attempted movement works best for now yep yep so the this is the open loop you don't get the feedback that's something that you did something yeah i'm just frustrating no no it makes sense to me like uh we've done it with a cursor and without a cursor in open loop so sometimes it's just um say for like the center out the um you'll start calibration with a bubble lighting up and i push towards that bubble and then when that bubble you know when it's pushed towards that bubble for say three seconds, a bubble will pop and then I come back to the middle. So I'm doing it all just by my intentions, like that's what it's learning anyway. So it makes sense that as long as I follow what they want me to do, like follow the yellow brick road, that it'll all work out. You're full of great references. Is the bubble game fun? Yeah. They always feel so bad making me do calibration. Like, oh, we're about to do a 40-minute calibration. I'm like, alright. Do you guys want to do two of them? I'm always asking to whatever they need, I'm more than happy to do. And it's not bad. I get to lie there or sit in my chair and like do these things with some great people. I get to have great conversations. I can give them feedback. Um, I can talk about all sorts of things. Uh, I could throw something on, on my TV in the background and kind of like split my attention between them. Um, like it's not bad at all. Is there a score that you get? Like can you do better on the bubble game? No, I would love that. I would love, yeah. Writing down suggestions from Nolan. Make it more fun, gamified. Yeah, that's one thing that I really, really enjoy about WebGrid is because I'm so competitive. Like the higher the BPS, the higher the score, I know the better I'm doing. And so if I think I've asked at one point, one of the guys, like if he could give me some sort of numerical feedback for calibration, like I would like to know what they're looking at. Like, Oh, you know, it is, um, we see like this number while you're doing calibration. And that means at least on our end that we think calibration is going well. Um, and I would love that because I would like to know if what I'm doing is going well or not. But then they've also told me like, yeah, not necessarily like one-to-one. It doesn't actually mean that calibration is going well in some ways. Um, so it's not like a hundred percent and they don't want to like skew what I'm experiencing or want me to change things based on that. If that number isn't always accurate to like how the model will turn out or how like the end result, that's at least what I got from it. One thing I do, I have asked them and something that I really enjoy striving for is towards the end of calibration, there is like a time between targets. And so I like to keep like at the end, that number as low as possible. So at the beginning, it can be, you know, four or five, six seconds between me popping bubbles. But towards the end, I like to keep it below like 1.5 or if i could get it to like one second between like bubbles because in my mind that translates um really nicely to something like web grid where i know if i can hit a target uh one every second that i'm doing real real well there you go that's the way to get a score on the calibrations like the speed how quickly can you get from bubble to bubble. So there's the open loop, and then it goes to the closed loop. And the closed loop can already start giving you a sense, because you're getting feedback of how good the model is. Yeah, so closed loop is when I first get cursor control and how they've described it to me, someone who does not understand this stuff. I am the dumbest person in the room every time I'm with any of these guys. Is that I am closing the loop. So I am actually now the one that is like finishing the loop of whatever this loop is. I don't even know what the loop is. They've never told me. They just say there is a loop and at one point it's open and I can't control and then I get control and it's closed. So I'm finishing the loop. So how long the calibration usually take? You said like 10, 15 minutes. Well, yeah, they're trying to get that number down pretty low. That's what we've been working on a lot recently is something that people need to do on a daily basis, or if some people need to do on a, um, like every other day basis or once a week, they don't want people to be sitting in calibration for long periods of time. I think they wanted to get it down seven minutes or below, um, at least where we're at right now. It'd be nice if they, you never had to do calibration. Um, so we'll get there at some point i'm sure the more we learn about the brain and um like i think that's you know the dream um i think right now for me to get like really really good models um i'm in calibration 40 or 45 minutes um and i don't mind like i said they always feel bad, but if it's going to get me a model that can break these records on WebGrid, I'll stay in it for flipping two hours. Let's talk business. So WebGrid, I saw a presentation where Bliss said by March, you selected 89,000 targets in WebGrid. Can you explain this game? What is WebGrid and what does it take to be a world-class performer in WebGrid as you continue to break world records? Yeah. It's like a gold medalist. Like, wow. Yeah. You know, I'd like to thank everyone who's helped me get here, my coaches, my parents, for driving me to practice every day at 5 in the morning. I'd like to thank God. And just overall, my dedication to my craft. my parents for driving me to practice every day at five in the morning um like thank god and just overall my dedication to my craft the interviews with athletes are always like that exactly it's like that template yeah so so um so web grid web grid is a grid it's it's literally just a grid they can make it as big or small as you can make a grid. A single box on that grid will light up and you go and click it. And it is a way for them to benchmark how good a BCI is. So it's pretty straightforward. You just click targets. Only one blue cell appears and you're supposed to move the mouse to there and click on it so i like playing on like bigger grids because it the bigger the grid the like more bps it's bits per second um that you get every time you click one so i'll say i'll play on like a 35 by 35 um grid and then one of those little squares, a cell, call it, target, whatever, will light up and you move the cursor there and you click it. And then you do that forever. And you've been able to achieve at first eight bits per second and you recently broke that. Yeah, I'm at 8.5 right now. I would have beaten that literally the day before I came at 8.5 right now. I would have beaten that literally the day before I came to Austin, but I had like a, I don't know, like a five second lag right at the end. And I just had to wait until the latency calmed down and then I kept clicking. But I was at like 8.01 and then five seconds of lag. And then the next like three targets I clicked all stayed at 8.01 and then five seconds of lag and then the next like three targets i clicked all stayed at 8.01 so if i would have been able to click um during that time of lag i probably would have hit i don't know i might have hit nine so i'm there i'm like i'm really close and then this whole austin trip has really gotten in the way of my web grid playing ability yeah so that's all you're thinking about right now yeah i know I know. I just want to do better. At nine. I want to do better. I want to hit nine. I think, well, I know nine is very, very achievable. I'm right there. I think 10 I could hit maybe in the next month. Like I could do it probably in the next few weeks if I really push. I think you and Elon are basically the same person because last time I did a podcast with him, he came in extremely frustrated that he can't beat Uber Lilith as a droid. That was like a year ago, I think. I forget. Like solo. And I could just tell there's some percentage of his brain the entire time was thinking like, I wish I was right now attempting. I think he did it. He did it that night. He stayed up and did it that night. It's just crazy to me. I mean, in a fundamental way, it's really inspiring. And what you're doing is inspiring in that way. Because I mean, it's not just about the game. Everything you're doing there has impact. By striving to do well on WebGrid, you're helping everybody figure out how to create the system all along. The decoding, the software, the hardware, the calibration, all of it. How to make all of that work so you can do everything else really well. Yeah, it's just really fun. Well, that's also part of the thing, making it fun. Yeah, it's addicting i'm i've joked about um like what they actually did when they went in and put this thing in my brain they must have flipped a switch to make me uh more susceptible to these kinds of games to make me addicted to like web grid or something yeah do you know bliss's high score yeah he said like 14 or something 17 oh boy 17.1 or something 17.01 yeah he told me he like does it on the floor with peanut butter and he like fasts it's it's weird that sounds like cheating sounds like performance enhancing uh no like the first time nolan uh played this game he asked you know how good are we at this game and i think you told me right then you're gonna you're gonna try to beat me i'm'm going to get there someday. Yeah. I fully believe you. I think I can. I'm excited for that. Yeah. So I've been playing, first off, with the Dwell Cursor, which really hampers my WebGrid playing ability. Basically, I have to wait 0.3 seconds for every click. Oh, so you can't do the clicks. Yeah. So you click by dwelling. You said 0.3? 0.3 seconds, which sucks. It really slows down how much I'm able to, like, how high I'm able to get. I still hit, like, 50. I think I hit, like, 50-something trials, net trials per minute in that, which was pretty good. net trials per minute in that, which was pretty good. Because I'm able to like, there's one of the settings is also like how slow you need to be moving in order to initiate a click, to start a click. So I can tell sort of when I'm on that threshold to start initiating a click just a bit early. So I'm not fully stopped over the target when i go to click i'm doing it like on my way to the targets a little um to try to time it just right wow so you're slowing down yeah just just a hair right before this is like a lead performance okay but that's still it's it sucks that there's a ceiling of the 0.3 well there i can get down to 0.2 and 0.1 0.1 yeah and i've played with that a little bit too um i have to adjust a ton of different parameters in order to play with 0.1 and i don't have control over all that on my end yet it also changes like how the models are trained like if i train a model like in webgrid like a bootstrap on a model which basically is them uh training models as i'm playing webgrid um based off of like the WebGrid data that I'm so like, if I play WebGrid for 10 minutes, they can train off that data specifically in order to get me a better model. If I do that with 0.3 versus 0.1, the models come out different. The way that they interact, it's just much, much different. So I have to be really careful. I found that doing it with 0.3 is actually better in some ways, unless I can do it with 0.1 and change all of the different parameters, then that's more ideal because obviously 0.3 is faster than 0.1. So I could get there. I can get there. Can you click using your brain? For right now, it's the hover clicking with the dwell cursor. Before all the thread retraction stuff happened, we were calibrating clicks. Left click, right click. That was my previous ceiling before I broke the record again with the dwell cursor was I think on a 35 by35 grid with left and right click. And you get more BPS, more bits per second, using multiple clicks because it's more difficult. Oh, because what is it? You're supposed to do either a left click or a right click. Is it different colors? Yeah, blue targets for left click, orange targets for right click is what they had done. So my previous record of 7.5 was with the blue and the orange targets yeah which um i think if i went back to that now um doing the click calibration i would be able to and being able to like initiate clicks on my own i think i would break that 10 ceiling like in a couple days max like yeah you would start making bliss nervous about his 17 why do you think we haven't given him the yeah exactly uh so what would it feel like with their attractions that there is uh some of the threads are attracted it sucked it was really really hard the day they told me was the day of my big Neuralink tour at their Fremont facility. They told me right before we went over there, it was really hard to hear. My initial reaction was, all right, go in, fix it. Go and take it out and fix it. The first surgery was so easy. I went to sleep. A couple hours later, I woke up and here we are. Um, I didn't feel any pain, didn't take like any, um, um, pain pills or anything. So I just knew that if they wanted to, they could go in and put in a new one, like next day, if that's what it took. Cause I just wanted, I wanted it to be better and I wanted not to lose the capability. I had so much fun playing view from the top of this mountain and then have it all come crashing down after a month. And I knew, say the top of the mountain, but how I saw it was I was just now starting to climb the mountain. And there was so much more that I knew was possible. And so to have all of that be taken away, it was really, really hard. Um, but then on the drive over to the facility, I don't know, like five minute drive, whatever it is. Um, I talked with my parents about it. I prayed about it. I was just like, you know, I'm not going to let this ruin my day. I'm not going to let this ruin this amazing tour that they have set up for me. I want to go show everyone how much I appreciate all the work they're doing. I want to go meet all of the people who have made this possible. And I want to go have one of the best days of my life. And I did. And it was amazing. And it absolutely was one of the best days I've ever been privileged to experience. And then for a few days, I was pretty down in the dumps. But for the first few days afterwards, I didn't know if it was ever going to work again. And then I just, I made the decision that it, even if I lost the ability to use the Neuralink, even if I lost, even if I like lost out on everything to come, if I could keep giving them data in any way, then I would do that. If I needed to just do some of the data collection every day or body mapping every day for a year, then I would do it because I know that everything I'm doing helps everyone to come after me. And that's all I wanted. I guess the whole reason that I did this was to help people. And I knew that anything I could do to help, I would continue to do, even if I never got to use the cursor again. Then, you know, I was just happy to be a part of it. And everything that I had done was just a perk. It was something that I got to experience. And I know how amazing it's going to be for everyone to come after me. So might as well just keep trucking along, you know. That said, you were able to get to work your way up to get the performance back. So this is like going from Rocky I to Rocky II. So when did you first realize that this is possible and what gave you sort of the strength, the motivation, the determination to do it, to increase back up and beat your previous record? Yeah, it was within a couple of weeks. Again, this feels like I'm interviewing an athlete. This is great. I like the thing. Apparently. The road back was long and hard. From many difficulties. There were dark days. It was a couple of weeks, I think. And then there was just a turning point. I think they had switched how, um, they were measuring, um, the neuron spikes in my brain, like the bliss helped me out. Uh, yeah. The way in which we were measuring, uh, the behavior of individual neurons. Yeah. So we're switching from a sort of individual spike detection to something called spike band power, which, uh, if you watch the previous segments with either me or DJ, you probably have some. Yeah. Okay. So when they did that, it was kind of like, you know, light over the head, like light bulb moment, like, Oh, this works. And this seems like, like we can run with this. And I saw the uptake in performance immediately. Like I could feel it when they switched over. I was like, this is better. Like, this is good. Like everything up till this point for the last few weeks, last like whatever, three or four weeks, because it was before they even told me, like everything before this sucked. Like, let's keep doing what we're doing now. And at that point, it was not like, oh, I know I'm still only at like, say in WebGrid terms, like four or five BPS compared to my 7.5 before. But I know that if we keep doing this, then I can get back there. And then they gave me the dwell cursor and the dwell cursor sucked at first. It's obviously not what I want, but it gave me a path forward to be able to continue using it and hopefully to continue to help out. And so I just ran with it, never looked back. Like I said, I'm just the kind of person I roll with the punches anyway. What was the process? What was the feedback loop on figuring out how to do the spike detection in a way that would actually work well for Nolan? Yeah, it's a great question. So maybe just describe first how the actual update worked. It was basically an update to your implant. So we just did an over-the-air software update to his implant, same way you'd update your Tesla or your iPhone. And that firmware change enabled us to record sort of averages of populations of neurons nearby individual electrodes. So we have less resolution about which individual neuron is doing what, but we have a broader picture of what's going on nearby in electrode overall. And that feedback, I mean, basically, as Nolan described, it was immediate when we flipped that switch. I think the first day we did that, you had three or four BPS right out of the box. And that was a light bulb moment for, okay, this is the right path to go down. And from there, there's a lot of feedback around like how to make this useful for independent use. So what we care about ultimately is that you can use it independently to do whatever you want. And to get to that point, it required us to re-engineer the UX, as you talked about with the dwell cursor, to make it something that you can use independently without us needing to be involved all the time. And yeah, this is obviously the start of this journey still. Hopefully we get back to the places where you're doing multiple clicks and using that to control much more fluidly everything and much more naturally the applications that you're trying to interface with. And most importantly, get that web grid number up. Yes. So how is the, on the hover click, do you accidentally click stuff sometimes? Yep. Like what's, how hard is it to avoid accidentally clicking? I have to continuously keep it moving basically. So like I said, there's a threshold where it will initiate a click. So if I ever drop below that, it'll start and I have 0.3 seconds to move it before it clicks anything. And if I don't want it to ever get there, I just keep it moving at a certain speed. And if I don't want it to ever get there, I just keep it moving at a certain speed and like just constantly like doing circles on screen, moving it back and forth to keep it from clicking stuff. I actually noticed a couple weeks back that I was when I was not using the implant, I was just moving my hand back and forth or in circles. Like I was trying to keep the cursor from clicking and I was just doing it like while i was trying to go to sleep and i was like okay this is a problem to avoid the clicking i guess does that create problems like when you're gaming accidentally click a thing like yeah yeah it happens in chess um i've lost yeah i've lost a number of games because i'll accidentally click something. I think the first time I ever beat you was because of an accident. Yeah, I misclicked. It's a nice excuse, right? Anytime you lose, you could just say, that was accidental. You said the app improved a lot from version 1 when you first started using it. It was very different. Can you just talk about the trial and error that you went through with the team, like 200 plus pages of notes? Like what's that process like of going back and forth and working together to improve the thing? It's a lot of me just using it like day in and day out and saying like, hey, can you guys do this for me? Like, give me this. I want to be able to do that. I need this. I think a lot of it just doesn't occur to them. Maybe until someone is actually using the app, using the implant. It's just something that they just never would have thought of. Or it's very specific to even like me, maybe what I want. It's something I'm a little worried about with the next people that come is, you know, maybe they will want things much different than how I've set it up or what the advice I've given the team. And they're going to look at some of the things they've added for me. like, that's a dumb idea. Like, why would he ask for that? Um, and so I'm really looking forward to get the next people on because I guarantee that they're going to think of things that I've never thought of. They're going to think of improvements. I'm like, wow, that's a really good idea. Like, I wish I would have thought of that. Um, and then they're also going to give me some pushback about like, yeah, what you are asking them to do here, that's a bad idea. Let's do it this way. And I'm more than happy to have that happen. But it's just a lot of different interactions with different games or applications, the internet, just with the computer in general. There's tons of bugs that end up popping up left, right, center. So it's just me trying to use it as much as possible and showing them what works and what doesn't work and what I would like to be better. And then they take that feedback and they usually create amazing things for me. They solve these problems in ways I would have never imagined. They're so good at everything they do. And so I'm just really thankful that I'm able to give them feedback and they can make something of it because a lot of my feedback is like really dumb. It's just like, I want this, please do something about it. And we'll come back super well thought out. And it's way better than anything I could have ever thought of or implemented myself. So they're just great. They're really, really cool. As the BCI community grows, would you like to hang out with the other folks with Neuralinks? What relationship, if any, would you want to have with them? Because you said they might have a different set of like ideas of how to use the thing uh yeah would you be intimidated by their web great performance no no I hope compete I hope day one they've like wiped the floor with me I hope they beat it um and they crush it you know double it if they crush it, double it if they can. Just because on one hand, it's only going to push me to be better because I'm super competitive. I want other people to push me. I think that is important for anyone trying to achieve greatness is they need other people around them who are going to push them to be better. And I even made a joke about it on X once, like once the next people get chosen, like cue buddy cop music. Like I'm just excited to have other people to do this with and to like share experiences with. I'm more than happy to interact with them as much as they want. More than happy to give them advice. I don't know what kind of advice I could give them, but if they have questions, I'm more than happy. What advice would you have for the next participant in the clinical trial? That they should have fun with this because it is a lot of fun and that I hope they work really, really hard because it's not just for us. It's for everyone that comes after us. just for us. It's for everyone that comes after us. And come to me if they need anything and to go to Neuralink if they need anything. Man, Neuralink moves mountains. They do absolutely anything for me that they can. And it's an amazing support system to have. It puts my mind at ease for so many things that I have had questions about so many things I want to do and they're always there and that's really really nice and so I would tell them not to be afraid to go to Neuralink with any questions that they have any concerns anything that they're looking to do with this and any help that Neuralink is capable of providing you know they're looking to do with this and any help the neural ink is capable of providing I know they will and I don't know I don't know just work your ass off because it's it's really important that we try to give our all to this so have fun and work hard yeah yeah there we go maybe that's what I'll just start saying to people. Have fun, work hard. Now you're a real pro athlete. Just keep it short. Maybe it's good to talk about what you've been able to do now that you have a neural link implant, like the freedom you gain from this way of interacting with the outside world. Like you play video games all night and you do that by yourself. Yeah. And that's a kind of freedom. Can you speak to that freedom that you gain? Yeah, it's what all, I don't know, people in my position want. They just want more independence. The more load that I can take away from people around me, the better. If I'm able to interact with the world without using my family, without going through any of my friends, needing them to help me with things, the better. If I'm able to sit up on my computer all night and not need someone to sit me up, say on my iPad, in a position where I can use it and then have to have them wait up for me all night until I'm ready to be done using it. It takes a load off of all of us. And it's really all I can ask for. It's something know I could never thank Neuralink enough for I know my family feels the same way you know just being able to have the freedom to do things on my own at any hour of the day or night it means the world to me and I don't know when you're up at 2am playing Webgridrid by yourself, I just imagine it's darkness and there's just a light glowing, and you're just focused. What's going through your mind? Or are you in a state of flow where the mind is empty, like those Zen masters? Yeah. Generally, it is me playing music of some sort. I have a massive playlist. And so I'm just like rocking out to music. And then it's also just like a race against time because I'm constantly looking at how much battery percentage I have left on my implant. Like, all right, I have 30%, which equates to, you know, X amount of time, which means I have to break this record in the next, you know, hour and a half or else it's not happening tonight. And so it's, it's a little stressful when that happens when it's like, when it's above 50%, I'm like, okay, like I got time. It starts getting down to 30 and then 20. It's like, all right, uh 10 a little pop-up is going to pop up right here and it's going to really screw my web grid flow it's going to tell me that you know like there's a like a low battery low battery pop-up comes up and it's really going to screw me over so if i have to if i'm going to break this record i have to do it in the next like 30 seconds or else that pop-up is going to get in the way like cover my web grid um and then it after that i go click on it go back into web grid and i'm like all right that means i have you know 10 minutes left before this thing's dead that's what's going on in my head generally that and whatever song is playing um and i just i just want i want to break those records so bad like it's all i want when i'm playing web grid it it has become less of like oh this is just a leisurely activity like i just enjoy doing this because it just feels so nice and it puts me at ease it is no once i'm in web grid you better break this record or you're gonna waste like five hours of your life right now and um i don't know it's just fun it's fun man have you ever tried web grid with like two targets and three targets can you get higher bps with that you can you do that you mean like different color targets or you mean oh multiple targets does that change the thing yeah so bps is a log of number of targets times correct minus incorrect divided by time and so you can think of like different clicks as basically doubling the number of active targets got it so you know you basically hire bps the more options there are the more difficult to task and uh there's also like zen mode you've played in before which is like yeah infinite it covers it covers the whole screen with a grid and um i don't know what yeah and so you can go like that's that's insane yeah he doesn't like it because it didn't show bps so like you know oh yeah i had them i had them put in a giant bps in the background so now it's like the opposite of zen mode it's like it's like super hard mode like just metal mode if it's just like a giant number in the back count. We should name that. Metal Mode is a much better name. So you also play Civilization VI. I love Civ VI, yeah. Usually go with Korea. I do. So the great part about Korea is they focus on science, tech victories, which was not planned. I've been playing Korea for years and then all of the Neuralink stuff happened. So it kind of aligned. But what I've been playing Korea for years, and then all of the Neuralink stuff happened. So it kind of aligned. But what I've noticed with tech victories is if you can just rush tech, rush science, then you can do anything. At one point in the game, you will be so far ahead of everyone technologically that you will have musket men, infantrymen, planes sometimes, and people will still be fighting with bows and arrows. And so if you want to win a domination victory, you just get to a certain point with the science and then go and wipe out the rest of the world. Or you can just take science all the way and win that way. And you're going to be so far ahead of everyone because you're producing so much science that it's not even close. I've accidentally won in different ways just by focusing on science. Accidentally won by focusing on science. I was playing only science, obviously, just science all the way, just tech. And I was trying to get every tech in the tech tree and stuff. And then I accidentally won through a diplomatic victory. And I was so mad. I was so mad. Because it just ends the game one turn. It was like, oh, you won. You're so diplomatic. I'm like, I don don't want to do this i should have declared war on more people or something um it was terrible but you don't need like giant civilizations with tech especially with korea you can keep it pretty small so i generally just you know get to a certain military unit and put them all around my border to keep everyone out and then i will just build up. So very isolationist. Nice. Yeah. Just work on the science of the tech. That's it. You're making it sound so fun. It's so much fun. And I also saw Civilization 7 trailer. Oh, man, I'm so pumped. And that's probably coming out. Come on, Civ 7, hit me up. I'll alpha, beta test, whatever. When is it coming out? 2025. Yeah, yeah, next year, yeah. What other stuff would you like to see improved about the Neuralink app and just the entire experience? I would like to, like I said, get back to the click-on-demand, like the regular clicks. That would be great. I would like to be able to connect to more devices. Right now, it's just the computer. I'd like to be able to use it on my phone or use it on different consoles, different platforms. I'd like to be able to control as much stuff as possible, honestly. Like, an Optimus robot would be pretty cool. That would be sick, if I could control an Optimus robot. The Link app itself, it seems like we are getting pretty dialed in to what it might look like down the road. It seems like we've gotten through a lot of what I want from it, at least. I want from it at least. The only other thing I would say is more control over all the parameters that I can tweak with my cursor and stuff. There's a lot of things that go into how the cursor moves in certain ways. And I have, I don't know, like three or four of those parameters. Like gain and friction and all that. Gain, friction, yeah. And there's maybe double the amount of those parameters and there might be gain and friction gain friction yeah and there's maybe double the amount of those with just like velocity and then with the actual dwell cursor um so i would like all of it i want as much control over my environment as possible um especially you want like advanced mode you know like in like there's menus usually there's basic mode and you're like one of those folks like the power user advanced. Got it. That's what I want. I want as much control over this as possible. So yeah, that's really all I can ask for. Just give me everything. Has speech been useful? Like just being able to talk also in addition to everything else? Yeah. You mean like while I'm using it? While you're it like speech to text oh yeah or do you type or like because there's also a keyboard yeah yeah so there's a virtual keyboard that's another thing i would like to work more on is finding some way to um type or text in a different way right now it is um like a dictation basically and a virtual keyboard that I can use with the cursor. But we've played around with like finger spelling, like sign language finger spelling. And that seems really promising. So I have this thought in my head that it's going to be a very similar learning curve that I had with um the cursor where i went from attempted movement to imagine movement at one point i have a feeling um this is just my intuition that at some point i'm going to be doing finger spelling and i won't need to actually attempt to finger spell anymore that i'll just be able to think the like letter that i want and it'll pop up that will be epic yeah that's challenging that's hard that's a lot of work for you to kind of take that leap but that would be awesome and then like going from letters to words is another step like you would go from you know right now it's finger spelling of like just the sign language alphabet but if it's able to pick that up then it should be able to pick up like the sign language language. And so then if I could do something along those lines or just the sign language spelled word, if I can spell it at a reasonable speed and it can pick that up, then I would just be able to think that through and it would do the same thing. I don't see why not. After what I saw with the cursor control, I don't see why it wouldn't work, but we'd have to play around with it more. What was the process in terms of training yourself to go from attempted movement to imagined movement? Yeah. How long did that take? So how long would this kind of process take? Well, it was a couple weeks before it just happened upon me. But now that I know that that was possible i think i could make it happen with other things i think it would be much much simpler would you get an upgraded implant device sure absolutely whenever whenever they'll let me uh so you don't have any concerns for you with the surgery experience all of it was um like no regrets. So everything's been good so far. Yep. You just keep getting upgrades. Yeah. I mean, why not? I've seen how much it's impacted my life already, and I know that everything from here on out is just going to get better and better. So I would love to. I would love to get the upgrade. What future capabilities are you excited about sort of beyond this kind of telepathy is vision interesting so for folks who for example who are blind so you're like enabling people to see or for speech yeah there's a lot that's very very cool about this i mean we're talking about the brain. So there's like, this is just motor cortex stuff. There's so much more that can be done. The vision one is fascinating to me. I think that is going to be very, very cool to give someone the ability to see for the first time in their life would just be, I mean, it might be more amazing than even helping someone like me. Like that just sounds incredible. The speech thing is really interesting being able to have some sort of real-time translation and cut away that language barrier would be really cool. Any sort of actual impairments that it could solve with speech would be very, very cool. And then also there are a lot of different disabilities that all originate in the brain and you would be able to hopefully be able to solve a lot of those. I know there's already stuff to help people with seizures that can be implanted in the brain. This would do, I imagine, the same thing. And so you could do something like that. I know that even someone like Joe Rogan has talked about the possibilities with being able to stimulate the brain in different ways. I'm not sure how ethical a lot of that would be. That's beyond me, honestly. But I know that there's a lot that can be done when we're talking about the brain and being able to go in and physically make changes to help people or to improve their lives. So I'm really looking forward to everything that comes from this. And I don't think it's all that far off. I think a lot of this can be implemented within my lifetime, assuming that I live a long life. What you're referring to is things like people suffering from depression or things of that nature potentially getting help. Yeah. Flip a switch like that, make someone happy. I think Joe has talked about it more in terms of like you want to experience like what a drug trip feels like like you want to experience what you like to be on of course oh yeah mushrooms or something like that dmt like you can just flip that switch in the brain uh my buddy bane has talked about being able to like wipe parts of your memory and re-experience things that like for the first time like your favorite movie or your favorite book like just wipe that out real quick and then re-fall in love with Harry Potter or something. I told him, I was like, I don't know how I feel about like people being able to just wipe parts of your memory. That seems a little sketchy to me. He's like, they're already doing it. So. Sounds legit. I would love memory replay. Just like actually like high resolution replay of old memories. Yeah. I saw an episode of Black Mirror about that once. I don't think I want it. Yeah. So Black Mirror always kind of considers the worst case, which is important. I think people don't consider the best case or the average case enough. I don't know what it is about us humans. We want to think about the worst possible thing. Yeah. We love drama yeah it's like how is this new technology going to kill everybody we just love that okay like yes let's watch hopefully people don't think about that too much with me it'll ruin a lot of my plans yeah yeah i assume you're gonna have to take over the world i mean you're i love your twitter you uh you tweeted i'd like to make jokes about hearing voices in my head since getting the Neuralink, but I feel like people would take it the wrong way, plus the voices in my head told me not to. Yeah. Yeah. Yeah. Please never stop. So you were talking about Optimus. Is that something you would love to be able to do, to control the robotic arm or the entirety of Optimus? Oh yeah, for sure. For sure, absolutely. You think there's something fundamentally different about just being able to physically interact with the world? Yeah, oh, 100%. I know another thing with being able to give people the ability to feel sensation and stuff too, by going in with the brain and having the neurolink maybe do that. That could be something that could be transferred through the optimist as well. There's all sorts of really cool interplay between that. And then also, like you said, just physically interacting. I mean, 99% of the things that I can't do myself, obviously I need a caretaker for, someone to physically do things for me. If an optimist robot could do that, I could live an incredibly independent life and not be such a burden on those around me. on those around me. And it would change the way people like me live, at least until whatever this is gets cured. But being able to interact with the world physically like that would just be amazing. And not just for having to be a caretaker or something, but something like I talked about, just being able to read a book. I imagine an optimist robot, just being able to hold a book open in front of me, like get that smell again. I might not be able to feel it at that point. Or maybe I could again with the sensation and stuff, but being there's something different about reading like a physical book than staring at a screen or listening to an audio book. I actually don't like audio books. I've listened to a ton of them at this point, but I don't really like them. I would much rather like read a physical copy. So one of the things you would love to be able to experience is opening the book, bringing it up to you and to feel the touch of the paper. Yeah. Oh man, the touch, the smell. I mean, it's just something about the words of the paper? Yeah. Oh, man. The touch, the smell. I mean, it's just something about the words on the page. They've replicated that page color on the Kindle and stuff. Yeah, it's just not the same. Yeah. So just something as simple as that. So one of the things you miss is touch? I do. Yeah. A lot of things that I interact with in the world, like clothes or literally any physical thing that I interact with in the world, a lot of times what people around me will do is they'll just come rub it on my face. They'll lay something on me so I can feel the weight. They will rub a shirt on me so I can feel fabric like there's something very profound about touch and uh it is it's something that i miss a lot um and something i would love to do again we'll see what would be the first thing you do with a with a hand that can touch your mom will hug after that, you know, like how much I care and how much I love her and everything. Something along those lines, being able to just interact with the people around me, handshake, give someone a hug. I don't know, anything like that. Being able to help me eat, like I'd probably get really fat, which would be a terrible, terrible thing. Also beat Bliss in chess on a physical chess board. Yeah. Yeah. I mean, there are just so many upsides, you know, any, any way to find some way to feel like I'm bringing Bliss down to my level. Yeah. Because he's just such an amazing guy and everything about him is just so above and beyond that anything I can do to take him down a notch yeah humble him a bit he needs it okay as he's sitting next to me did you ever make sense of why God puts good people through such hardship oh man um puts good people through such hardship? Oh, man. I think it's all about understanding how much we need God. And I don't think that there's any light without the dark. I think that if all of us were happy all the time, there would be no reason to turn to God ever. I feel like there would be no concept of good or bad. And I think that as much of the darkness and the evil that's in the world, it makes us all appreciate the good and the things we have so much more. And I think when I had my accident, one of the first things I said to one of my best friends was, and this was within the first month or two after my accident, I said, everything about this accident has just made me understand and believe that God is real and that there really is a God, basically, and that my interactions with him have all been real and worthwhile. And he said, if anything, seeing me go through this accident, he believes that there isn't a God. And it's a very different reaction. But I believe that it is a way for God to test us, to build our character, to send us through trials and tribulations, to make sure that we understand how precious He is and the things that He's given us and the time that He's given us, and then to hopefully grow from all of that um i think that's a huge part of being here is to um not just you know have an easy life and do everything that's easy but to step out of our comfort zones and really challenge ourselves uh because i think that's how we grow what gives you hope about this whole thing we have going on, human civilization? Oh, man. I think people are my biggest inspiration. Even just being at Neuralink for a few months, looking people in the eyes and hearing their motivations for why they're doing this. It's so inspiring. And I know that they could be other places at cushier jobs, working somewhere else, doing X, Y, or Z that doesn't really mean that much. But instead, they're here and they want to better humanity and they want to better just the people around them, the people that they've interacted with in their life. They want to make better lives for their own family members who might have disabilities or they look at someone like me and they say, I can do something about that, so I'm going to. And it's always been what I've connected with most in the world are people. I've always been a people person and I love learning about people and I love learning how people developed and where they came from and to see how much people are willing to do for someone like me when they don't have to and they're going out of their way to make my life better. It gives me a lot of hope for just humanity in general, how much we care and how much we're capable of when we all kind of get together and try to make a difference. And I know there's a lot of bad out there in the world, but there always has been and there always will be. And I think that that is, it shows human resiliency and it shows what we're able to endure and how much we just want to be there and help each other and how much satisfaction we get from that. Because I think that's one of the reasons that we're here is just to help each other. And I don't know, that always gives me hope, is just realizing that there are people out there who still care and who want to help. And thank you for being one such human being and continuing to be a great human being through everything you've been through. I'm being an inspiration to many people, to myself for many reasons, including your epic, unbelievably great performance on WebGrid. I will be training all night tonight to try to catch up. You can do it. And I believe in you that you can, once you come back, so sorry to interrupt with the Austin trip. Once you come back, eventually beat bliss. Yeah. Yeah, for sure. Absolutely. I'm rooting for you. The whole world is rooting for you thank you for everything you've done man thanks thanks man thanks for listening to this conversation with nolan arbaugh and before that with elon musk dj saw matthew mcdougall and bliss chapman to support this podcast please check out our sponsors in the description and, let me leave you with some words from Aldous Huxley in The Doors of Perception. We live together. We act on and react to one another. But always, and in all circumstances, we are by ourselves. The martyrs go hand in hand into the arena. They are crucified alone. Embraced, the lovers desperately try to fuse their insulated ecstasies into a single self-transcendence in vain. But its very nature, every embodied spirit is doomed to suffer and enjoy its solitude. Sensations, feelings, insights, fancies, all these are private and except through symbols and a second hand, incommunicable. We can pull information about experiences but never the experiences themselves. From family to nation, every human group is a society of island universes. Thank you for listening and hope to see you next time. | Elon Musk: Neuralink and the Future of Humanity | Lex Fridman Podcast #438 | 31,054 | Lex Fridman | 20240802 | Elon Musk is CEO of Neuralink, SpaceX, Tesla, xAI, and CTO of X. DJ Seo is COO & President of Neuralink. Matthew MacDougall is Head Neurosurgeon at Neuralink. Bliss Chapman is Brain Interface Software Lead at Neuralink. Noland Arbaugh is the first human to have a Neuralink device implanted in his brain.
Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep438-sb
See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.
*Transcript:*
https://lexfridman.com/elon-musk-and-neuralink-team-transcript
*CONTACT LEX:*
*Feedback* - give feedback to Lex: https://lexfridman.com/survey
*AMA* - submit questions, videos or call-in: https://lexfridman.com/ama
*Hiring* - join our team: https://lexfridman.com/hiring
*Other* - other ways to get in touch: https://lexfridman.com/contact
*EPISODE LINKS:*
Neuralink's X: https://x.com/neuralink
Neuralink's Website: https://neuralink.com/
Elon's X: https://x.com/elonmusk
DJ's X: https://x.com/djseo_
Matthew's X: https://x.com/matthewmacdoug4
Bliss's X: https://x.com/chapman_bliss
Noland's X: https://x.com/ModdedQuad
xAI: https://x.com/xai
Tesla: https://x.com/tesla
Tesla Optimus: https://x.com/tesla_optimus
Tesla AI: https://x.com/Tesla_AI
*SPONSORS:*
To support this podcast, check out our sponsors & get discounts:
*Cloaked:* Online privacy protection.
Go to https://lexfridman.com/s/cloaked-ep438-sb
*MasterClass:* Online classes from world-class experts.
Go to https://lexfridman.com/s/masterclass-ep438-sb
*Notion:* Note-taking and team collaboration.
Go to https://lexfridman.com/s/notion-ep438-sb
*LMNT:* Zero-sugar electrolyte drink mix.
Go to https://lexfridman.com/s/lmnt-ep438-sb
*Motific:* Generative ai deployment.
Go to https://lexfridman.com/s/motific-ep438-sb
*BetterHelp:* Online therapy and counseling.
Go to https://lexfridman.com/s/betterhelp-ep438-sb
*OUTLINE:*
0:00 - Introduction
0:49 - Elon Musk
4:06 - Telepathy
10:45 - Power of human mind
15:12 - Future of Neuralink
20:27 - Ayahuasca
29:57 - Merging with AI
34:44 - xAI
36:57 - Optimus
43:47 - Elon's approach to problem-solving
1:01:23 - History and geopolitics
1:05:53 - Lessons of history
1:10:12 - Collapse of empires
1:17:55 - Time
1:20:37 - Aliens and curiosity
1:28:12 - DJ Seo
1:36:20 - Neural dust
1:43:03 - History of brain–computer interface
1:51:07 - Biophysics of neural interfaces
2:01:36 - How Neuralink works
2:07:26 - Lex with Neuralink implant
2:27:24 - Digital telepathy
2:38:27 - Retracted threads
2:44:01 - Vertical integration
2:50:55 - Safety
3:00:50 - Upgrades
3:09:53 - Future capabilities
3:39:09 - Matthew MacDougall
3:44:58 - Neuroscience
3:52:07 - Neurosurgery
4:03:11 - Neuralink surgery
4:22:20 - Brain surgery details
4:38:03 - Implanting Neuralink on self
4:53:57 - Life and death
5:03:17 - Consciousness
5:06:11 - Bliss Chapman
5:19:27 - Neural signal
5:26:19 - Latency
5:30:59 - Neuralink app
5:35:40 - Intention vs action
5:46:54 - Calibration
5:56:26 - Webgrid
6:19:28 - Neural decoder
6:40:03 - Future improvements
6:48:59 - Noland Arbaugh
6:49:08 - Becoming paralyzed
7:02:43 - First Neuralink human participant
7:06:45 - Day of surgery
7:24:31 - Moving mouse with brain
7:49:50 - Webgrid
7:57:52 - Retracted threads
8:06:16 - App improvements
8:13:01 - Gaming
8:23:59 - Future Neuralink capabilities
8:26:55 - Controlling Optimus robot
8:31:16 - God
8:33:21 - Hope
*PODCAST LINKS:*
- Podcast Website: https://lexfridman.com/podcast
- Apple Podcasts: https://apple.co/2lwqZIr
- Spotify: https://spoti.fi/2nEwCF8
- RSS: https://lexfridman.com/feed/podcast/
- Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
- Clips Channel: https://www.youtube.com/lexclips
*SOCIAL LINKS:*
- X: https://x.com/lexfridman
- Instagram: https://instagram.com/lexfridman
- TikTok: https://tiktok.com/@lexfridman
- LinkedIn: https://linkedin.com/in/lexfridman
- Reddit: https://reddit.com/r/lexfridman
- Facebook: https://facebook.com/lexfridman
- Patreon: https://patreon.com/lexfridman | 2024-08-04T09:47:31.760554 |
https://www.youtube.com/watch?v=AlJEpbaNYt0 | Hey Wiz! Hey, Wiz. So basically all of LLM fine tuning to date has been with Laura, low rank adaptation. Isn't that right? I mean, that seems pretty true. It feels pretty true, at least. Feels pretty true. We've got a sick meme on this that we've been putting out there quite a bit to the world. Today, although things might be starting to change, we're going to meet the challenger. I heard that it combines Laura with lasers. Is that right? I mean, it sounds pretty cool if it is. Yes. Well, we're going to see, along with the team from RRC today, if we can find some signal through the noise of fine-tuning. Sound like a plan, my man? You know what? That sounds like a really fun plan, Greg. Let's go ahead and kick this thing off, then. We'll have you back in a little bit as we kick off our first discussion with the team. Welcome, everybody. Today, we've got a special Thursday event where we're talking about Spectrum, a brand new fine tuning method that was put out by the folks over at RCAI doing absolutely massive things in the space at the LLM Edge. So what I'd like to do is I'd like to, first of all, introduce everybody to the Spectrum tool today. And we're also going to meet some of the creators and maintainers of the tool that work specifically with the RCAI team and that are with us today. So today we talk spectrum. It's all about training, fine tuning, pre-training, post-training. It's all about doing training on large language models, on small language models, and everything in between. We're going to cover the full spectrum today. Let's get into it. What we want to walk away with at the end of our session, as we align our aim to the day, is we want to walk away with, okay, what is spectrum and how is it working? How exactly is it working in a way that's different from other leading techniques that might already be in our tool belt? Specifically, let's look at LoRa or Quantize LoRa, QLoRa. This is something we've looked at in depth and also something that we want to make sure you guys understand when you should pick up one tool versus another. We'll also fine tune using Spectrum's signal to noise ratio approach, which is pretty cool. And we think that there are some real benefits that are worth paying attention to. So we'll talk Spectrum, we'll talk Spectrum versus Laura and the competition out there, and then we'll do some fine tuning. We'll try to benchmark this against a simple fine tuning task. So you guys can really get a feel for the results that we are able to get. You can, of course, read the paper and check out those results as well. We're going to go deep into the code today, as we always do. into the code today as we always do. Introducing Spectrum is not complete without first introducing RCAI. We did our first event with them talking about domain adapted RAG quite some time ago, and they've been doing really big things since then. RC is all about merging and training custom small language models, and they've been making some moves to build out quite a robust ecosystem. We've covered Mergekit in a previous event, which we thought was a great pickup from them. This spectrum powered training that's part of their larger ecosystem is what we cover today. We also think it's a great pickup. So what I'd like to do is I'd like to invite, first off, Lucas Atkins up to the stage with me. He's a research engineer at RCAI specializing in alignment. He is the primary implementer of Spectrum. So he knows this stuff. Let's hit him with the tough questions today and oversees the technology specifically related to SFT, supervised fine tuning and DPO, direct preference optimization. And along with Lucas today, we've got Fernando Fernandez, an AI research scientist at RC who blends deep technical expertise with business acumen. My man, Fernando, that's what I'm talking about. And he's all about transforming complex data into actionable insights and cutting edge solutions for the end user. They're both highly trained, highly educated guys. Welcome, Lucas and Fernando to the event. Also, of course, we got the whiz. Whiz, get up here and hang out with us. So, OK, we want to kick this off, guys. Just big picture. If we think about Spectrum and we think about its context at RC, and maybe we'll start with you, Lucas, how is Spectrum actually useful for this training of SLMs? How do you think about it in today's world of buzzwords? How do you manage this? And how would you sort of clarify it for our audience today? How do you manage this and how would you sort of clarify it for our audience today? When it comes to making domain adapted language models, there's several things that are really important. But from a business sense, it's not hurting the original model. So not losing a ton of information from that original model while you're adding new business-related or private data related information, and speed. The benefit of Spectrum over current fine tuning or pre-training methods is it can work across the stack. So it works with our continued pre-training, it works with fine tuning, and it works with DPO. As well as it's much faster than just doing a full fine tuning, or even in many cases doing something like Qlora or LoRa up to 30. I would say the average is around 34% faster than those alternatives. And then it does really, really well at reducing catastrophic forgetting, because you're only training the most important parts of the model and other parts of the model stay frozen and aren't trained at all. So good. So many threads to pull on there. I wonder, Fernando, if you could bring a little business acumen into this discussion. Is this something that most of the customers at RC are using today? Are people leveraging Spectrum quite a bit in their pipelines to train their SLMs? And if so, is it happening in sort of a continued pre-training phase, a fine-tuning phase, or an alignment phase, as alluded to by Lucas? Currently, Spectrum is being used on all of our training since we have joined RSC and quickly integrated this technology on our whole pipeline. So most of these models that you have seen out there, all of them were trained using Spectrum. And it's worthy also mentioning that one of the key things that we have observed, and it's by design how we construct the spectrum, is that we don't suffer from the problem that LoRa or QLoRa actually suffer, which is you learn less and forget less in case of LoRa. And there is a paper paper a very recent paper regarding this topic why oh in the spectrum which is one of the most appealing of features of the method you learn more and forget less because you're messing up with way less weight Oh learn more, forget less. First shots fired at Laura. Love it. Wiz, is this aligning with your understanding of Laura and Spectrum so far? I mean, definitely the case that with Laura, you learn less and forget less. I mean, that is classic paper that just came out. It is one of those, both something that's powerful about LoRa, but also obviously not as great. So the idea that something like Spectrum can let us have our cake and eat it too is definitely something that's exciting, right? We want to be able to train these models on lower capacity hardware. We want to be able to make these models more customizable by people without this, you know, incredible amount of hardware. And so being able to do that in a way that doesn't as much sacrifice the, you know, the amount that you can train the model is pretty exciting. Yeah. I mean, it's definitely on the list as well. Well, like one thing, and I want to just I want to just pick your guys' brain about this a little bit, because we see this with other things at RSC, right? Which is this idea that similar to how MergeKit allows you to have these base models, and then people can merge them, and then these artifacts kind of sit around, and useful artifacts from good instruction tuning and, you know, useful artifacts from like good instruction tuning and things like that exist. Right. I see that there's this a similar thing in Spectrum where we have kind of the ability for others to contribute the actual Spectrum configs for us. So we don't even necessarily need to go through that initial process, that initial process of creating those, like what, what led you guys to that, that decision? Spectrum came about due to wanting to train. | Spectrum: Training Domain-Adapted SLMs | 3,631 | AI Makerspace | 20240719 | We explore the growing trend of Small Language Models (SLMs) and their specialization in domain-specific tasks. Discover how SLMs, paired with advanced fine-tuning techniques like Spectrum, are revolutionizing AI. Spectrum leverages a signal-to-noise ratio (SNR) to optimize which layers of a model to fine-tune, balancing performance and cost effectively. We'll dive into the details of Spectrum, comparing it to industry standards like Low-Rank Adaptation (LoRA) and Quantized LoRA (QLoRA). Learn from experts at Arcee.ai, the minds behind the Spectrum paper, and uncover the practical applications of their innovative methods. If you're aiming to enhance your domain-adapted language models with cutting-edge techniques, this event is a must-attend.
Event page: https://bit.ly/spectrumarcee
Have a question for a speaker? Drop them here:
https://app.sli.do/event/trwY664NtPdpUBWJ2321JY
Speakers:
Lucas Atkins, Research Engineer at Arcee.ai
https://www.linkedin.com/in/lucas-atkins-2892482b6/
Fernando Fernandes Neto, AI Research Scientist at Arcee.ai
https://www.linkedin.com/in/fernando-fernandes-neto-762a471a/
Dr. Greg, Co-Founder & CEO AI Makerspace
https://www.linkedin.com/in/gregloughane
The Wiz, Co-Founder & CTO AI Makerspace
https://www.linkedin.com/in/csalexiuk/
Apply for our new AI Engineering Bootcamp on Maven today!
https://bit.ly/aie1
For team leaders, check out!
https://aimakerspace.io/gen-ai-upskilling-for-teams/
Join our community to start building, shipping, and sharing with us today!
https://discord.gg/RzhvYvAwzA
How'd we do? Share your feedback and suggestions for future events.
https://forms.gle/CP9VQuUkGKZTQvEn8 | 2024-08-08T18:02:50.548885 |
https://www.youtube.com/watch?v=Y7V1TTdEWn8 | Whiz! SDG, synthetic data generation. This topic seems rather large after having dived into it a little bit with you. What's really at the core of SDG? What is its essence? Well, Greg, it's going to shock and surprise you but it's generating data synthetically interesting so anytime we generate data synthetically we're making a call to an llm right we're we're doing some prompting to that lm that does what it look that is what it looks like, sorry. Okay, okay, okay. So is it fair to say that in the end, it's kind of prompt engineering all the way down this SDG thing? Yeah, yeah. The rabbit hole is deep on this one for sure, for sure. Yeah, as we see every time we dig deep enough into AI engineering, prompting is there. It's pervasive. It's always a part of it. It's a matter of how we're prompting, why we're prompting, what we're prompting for, how it can be useful to us. These are the discussions we're going to get into today with SDG. Wiz, you ready to try to cover this big topic? I'm pretty ready. It's going to be pretty awesome. Let's see if we can pull this off today. SDG, everybody. Welcome to YouTube Live Wednesdays with AI Makerspace. I am Dr. Greg. That's the whiz. Today we're covering synthetic data generation, an emerging technique that really is getting kind of a lot of steam in the industry right now. People are creating new tools, new techniques for ways to do this for specific use cases. We're gonna try to give you a bit of a holistic view before we dive in to a very specific use case using Langchain and Lang Smith, as well as a special test data set generation tool from the RAG assessment guys. And, you know, it's going to be kind of a cool topic to scratch the surface of today. We believe there's more in store as we continue the journey. So let's get into SDG for enterprise today. All right. This is also in partnership with Langchain. Shout out to the folks at Langchain for the cross promotion and support. As we align our aim to the day today, we want to make sure that we kind of get this broad overview of SDG. And we want to have some discussions. We wanted to sort of make this specific to a particular domain. You'll see why that's important as we get into some of the overview of SDG. But today we're going to cover kind of some financial services things we're hearing about from our partners, we're seeing out in industry, and we'll build a financial services application that'll give you a feel for how you might use this in your business. And then we'll also take a look at how to use Langchain and Langsmith on top of a Langchain application for doing evaluation, no matter how you go about generating that specific test data. We're going to show you a pretty cool way that uses an evolutionary approach today to do just that. So let's kind of talk first about what is the deal with this synthetic data? Why are people kind of sleeping on it? Why is it gaining a lot of popularity now? And really, what can we break this down into? We'll talk about how we can do evaluation with Langsmith, and we can evolve test sets to test our applications like the one that we'll build today, a question answering machine for financial documents. All right. So, you know, if we start kind of super meta here, we might ask about synthetic data, the same questions that we ask about data, namely, like, that we ask about data, namely, like, where do we even need data when we're doing AI, when we're doing machine learning? And the answer is like everywhere, right? We need data for almost all of the things, which is not helpful, really. Maybe a more useful question to ask is, how can I make my app better? And this is one of the things, if we root ourselves in best practices of prototyping, that we can start to answer and sink our teeth into. For instance, we're often starting our applications out by prompting, and we're getting as far as we can. When we're ready to sort of add more data or take it to the next level in one dimension or more, we might start looking at ways we can optimize, say, what we put in the context window by using a retrieval augmented generation approach or by, quote, giving the LLM access to new knowledge, more data. Now, because RAG is kind of rooting us in truth, is kind of rooting us in truth, maybe generating fake data to put into your indexes, not so much of a best practice approach for synthetic test data generation or SDG in general. But on the other side, if we go and we try to add data in a different way, for instance, optimize the actual performance of the LLM, the input and output, the task specific nature of what we're using it for. This is sort of, we can think about what small language models are trying to do today. This fine tuning is really teaching the LLM how to act or how to behave. Now, this is a particularly quite useful way that we might start leveraging synthetic data by providing more examples, by providing more high-quality examples. Maybe we can get a better fine tune. Oftentimes we're using both of these dimensions of adding more data to our application to increase its performance. But as I briefly mentioned, it's not just more data, right? It's more high quality data. quality data. This is the key. So any time where you can reasonably say, all right, I need potentially more data and I can create very high quality data here using a synthetic approach, that might be one that you might think about leveraging SDG. So one area that you might just sort of zoom in on here is anytime you have task-specific behavior, this could be a good use case. For instance, you want to get better at generating summaries, you want your app to be better at this. How about you generate a bunch of good summaries that are slightly different, right? Nothing wrong with that. That seems like a pretty useful way to start leveraging this. And we can think about task specificity down the entire spectrum. As we move from prompt engineering to fine tuning, and we're optimizing that behavior, we're going to be giving zero instruction only into one, two, three, four, many examples of the behavior that we want to see. This is the zero shot, one shot, two shot approach to prompting. And as we move down the task-specific spectrum, as we move from one to two to many examples, we get to the point where we have a thousand, ten thousand examples, and now we're in a fine-tuning paradigm. So this idea of task specificity exists along the spectrum, and we're simply moving from putting it directly in the context window to now fine-tuning it into the LLM. Interestingly, we've been looking at a few tools recently. For instance, DSPY is one of these tools, and we saw that really at the core of DSPY, this optimization that's been going on is really this bootstrapped few shot approach, meaning that we're sort of creating synthetically few shot examples few shot examples that will allow us to test out and further optimize our application. Whether we use it for simple prompting and optimizing, let's say, a closed source model, or for RAG, as we've seen in another event, we're using this bootstrapped few shot approach. This bootstrapped few shot approach is exactly synthetic data generation. It is a specific type and way that we can leverage it. And anytime that we're doing a bootstrapping approach that actually is adding value, approach that actually is adding value, that's really where we want to think about SDG coming into play in a task-specific way. All right. But that's not the only way, and we'll put a pin in this bootstrapping, that we can potentially leverage SDG for task-specific behavior. We'll come back to other ways that these patterns and these statistical methods overlap. But from task-specific information to domain adaptation, there's a lot more ways we can also leverage SDG. For instance, anytime you have jargon, medical, legal, financial terms, right? You can leverage tools like domain adapted pre-training, domain adapting of your language models. So this is the small language models. This is the SLMs stuff. Or you can also domain adapt your retrieval by fine-tuning your embedding model in a RAG system, for instance. All of these could potentially benefit from synthetic data generation. could potentially benefit from synthetic data generation. And in fact, when you use some of the best practice approaches, it's encouraged that you leverage synthetic data generation to do so. So we might call that SDG for domain-specific data. But what other kinds of data do we use? Well, we're always doing alignment, right? So we can, if we want to like get the fine, fine tuning knobs going on our application, and we want to do some alignment, we need some input output pairs. Oftentimes, there are great data sets out there you can pull off the shelf but if you have a very specific type of alignment and application and use case and customer you can leverage your very own data to bootstrap up new similar examples to try to improve your alignment by using sdg for alignment data and today the focus isn't going to be on domain adaptation or alignment or even fine-tuning in sort of a task-specific spectrum way. and using that evaluation as a basis for improvement of your applications each iteration that you move forward and you continue your production LLM application development. And this is really helping you ask, you know, is my app actually good at this thing I want it to do? And so you can use SDG for evaluation data. want it to do. And so you can use SDG for evaluation data. Again, all of these ways that we use SDG could potentially have humans handcrafting and curating specific pieces of data for specific things. Oftentimes, it's not exactly feasible or practical to do that. And in many of these cases, it really wouldn't produce much of a performance gain. So I want to bring up Wiz here to have a little discussion here just on kind of the magnitude of this SDG thing. So it seems like since it's synthetic data generation and we live in this data-centric ML space, and everything is focused around data. We really can use it at every single piece and layer of the stack. Isn't that right? We can basically use it anywhere. Yeah. I mean, just to be tongue-in-cheek about it, if there's data, we can use synthetic data instead of real data, right? That's the kind of idea. So every part of the stack that has some kind of data, we can substitute our own created data instead. And so when we think about using, quote unquote, fake data, it seems very dangerous to do that in general general if we just sort of state it that way. But I guess the purpose of today's event is to really find a few places where using that sort of not human generated data actually does accelerate our ability to put prototype applications into production. Top of mind for you, what are the ways that you're thinking or that you're seeing companies leverage SDG now to actually accelerate their LLM application development rather than just doing it in some sort of way that just because I can, I'm going to do it a way that actually generates business value. Are we seeing this in today's market? Yeah, absolutely. I mean, it's, it's definitely not, you know, we're not at the level where everything needs SDG all the time, but certainly to augment existing data sets, domain-specific data, right? So if you have like a thousand or 2000 examples of say legal data set, well, you can use that to seed a much larger corpus of data. The same thing with things like, especially for like embedding fine tuning, right? Oftentimes we know the corpus that our embedding model has to work well in. And if we know the corpus, then we can use that corpus to help make sure that our embedding model achieves the best performance on that corpus. So it's, uh, you know, this is something that's being used, uh, quite heavily across the board, but especially for those,. And then a lot of the time, whenever we're talking about eval, we're talking about golden data set, golden data set, golden data set, right? Well, until you have a golden data set produced by humans or synthetically aided, and then carried over the final mile by humans, right? Until you have that, you have no golden data set, right? And so we have to start somewhere. And evaluation is the only place we can actually get signal as to how our application is performing. And so using SDG to help us get the first few steps for evaluation is going to really help speed up our prototyping stage and let us move into somewhere close to production. Yeah. And this idea of speeding up, I think is so essential. You know, oftentimes it's like, well, I should wait until I have all the data, right? I should wait until I have it all in a good place. Then I'll do the fine tuning. Then I'll do the pre-training. Then I'll do the domain adaptation. Then I'll do the alignment. My data people are working on it. My data science team is working on it. My mechanical Turk around the world is working on it. The data's not done. I can't get started yet. the world is working on it. The data's not done. I can't get started yet. This to me is the number one thing that I'm hearing from enterprises that that's where I'm always saying, well, what about SDG? I mean, this is to me, the essence is the time savings that you get with SDG, right? I mean, it's kind of like, if you don't have data, how about you SDG it up until you get data at least, right? That's absolutely right. I mean, like, right, so SDG solved the same kind of problems that LLM applications solve, which is this zero data problem, right? And the ability to go quickly from, like you said zero to something right and i i think people often underestimate the power of something compared to nothing even if it's not yet perfect it's going to be able to help you catch uh really simple or straightforward flaws in your application it's going to be able to help make sure that your base use case is really robust, right? It might not be able to get to all of the edge cases and be as, you know, as finalized as you want, but it's going to really help allow something to guide you through those iterations, as opposed to just kind of playing around, right? It lets us have a way that's more rigorous than nothing. And that's extremely valuable, especially as we try to move from the sandbox to the real world. That's right, that's right. Yeah, so if you've got nothing, maybe SDG it up. And the other thing is people always say, well, I can't actually build my application now because I don't have Lama 3.170b set up and I can't expose my data to OpenAI. Well, you can't expose SDG data to OpenAI because it's just SDG data. And so there are benefits to the synthetic data that mimics your data. And there's a lot of ways to encourage you to go from zero to one before you worry about scaling and doing everything perfectly and having all the privacy controls in place. I think this sort of zero to one piece, you can notice, you can use it anywhere. So anywhere where you sort of say, okay, I'm struggling to get started, zero to one, think SDG. Okay. Wiz, we'll have you back in just a little bit for continued discussion. We're going to sort of talk a little bit now about financial services. I just wanted to mention this for those of you in this particular domain, and we're going to try to focus in on specific domain verticals in upcoming events as well. When we talk to our partners, like folks at Langchain and elsewhere, their customers are really in the document heavy business. So this is one of the things that you see across the board, all verticals, but in particular particular you want to think as you're doing things where do i have lots of either text data or you know conversation data that's going to not have to have me crunch a bunch of numbers this is particularly important in the financial services space, but rather allow me to leverage the best of LLMs right off the shelf and quickly get small wins. Now, you know, I want to just sort of mention this contextually that it was about March 2023, the Bloomberg GPT paper came out. So the financial services industry has been out on the sort of edge of trying to do things. This is kind of a this wasn't a synthetic data generation thing. This was a handcrafted, curated, big data set where they were doing this sort of full pre-training from scratch. One of the things you might think about today is we're in sort of the stage of SLMs. If you're building internal LLMs to your financial institution, this domain adapted pre-training, something that's definitely potentially worth looking into where you're sort of not pre-training from scratch, but rather you're sort of augmenting data with more financial services specific stuff like Bloomberg GPT had tried to do not all that long ago. And you can sort of make some progress in that particular space quite fast now in a way that you really couldn't before. In terms of what we're hearing from our partners about financial institutions and what they're really sort of augmenting their people with in terms of tooling is like, there's the research piece, the searching and researching, and then there's the actual analysis piece. And I think the easier piece to sort of do well, it really does depend on what exactly your business is and exactly what you're doing for your customers. But in general, we can be accelerating the research piece and we can be making it easier for analysts to do their job. We can make it easier to get all the data in the right format so that we can quickly accelerate decisions and decision-making in fast-paced environments. And so this is what we're going to kind of focus in on today. This is the one we're hearing about as kind of the place that most of the financial services firms are playing. And it's also hard to kind of do both quantitatively and qualitatively, but we'll show you a way that you can kind of get started with simple documents today. Just one quick use case. This is something that we were listening to over the last year. were listening to over the last year. Scotiabank was telling the story a few weeks ago when we were in Toronto listening to their chief AI officer tell us about what they had gone through. And the Ask AI system that Scotiabank rolled out, they sort of went through this 17 iteration thing with their payments bot. They thought, well, this is the most data we have is on payments. So how about we just build something where we have the most data? And it turned out that was actually really hard. And it was really hard because it was a lot of different types of data, a lot of different structures of data, some very quantitative, not so well suited for off the shelf LLMs. And then in the second sort of use case that they built out, they built out very quickly and produced a lot of business value with was sort of the contact center bot. So they, instead of asking, where do I have the most data? They said, well, what part of our business has employees spending the most time on it? And in that case, they said, oh, contact center, customer support, classic, right? Absolutely classic use case. Right. Absolutely. Classic use case. And they kind of said, well, that. Data is super easy to use. We can create a rag QA chat bot, ask AI kind of thing for our customer support specialists. And it really did start accelerating and saving time for them there. Now, interestingly, on the topic of SDG, neither of these are particularly great to use SDG on, although you could. So I want to have Wiz come back up for just a moment to discuss this. I mean, there are some areas where we want to watch out using SDG, like anytime we have multi-step reasoning or very complex sort of qualitative and quantitative financial stuff, probably not a great use case for SDG. Why? Yeah. I mean, so it's a, it's a great use case for one half of the problem, right? Which is the inputs to these, to these systems. I think this is really, you know, something we want to focus on here is that SDG can always be some part of your pipeline for, you know, say, evaluating or whatever it is that you're going to use it for, but you have to be aware of the limitations so that we don't just blanket throw it at the problem. So in the case of this multi-step reasoning where the output of our system is expected to be quite, you know, quite intense or let's say quite quite involved, then we we might not reach to SDG as a viable tool to produce that complex output. We can still use it to produce the inputs, right? And then we can have our SMEs answer those questions. We can also use, say, you know, our system itself to answer the question and then have our SMEs verify the response or tweak the response to help us get to that final mile of the golden data set. But yeah, definitely SDG by itself without a robust pipeline, which begins to look a lot like your application, right? Is going to be not the first tool you should be reaching for in these very complex use cases. Yeah, yeah. And then, you know, I'm sort of drawn to this other use case that they have, which was on this customer support piece as well. You know, when you have a ton of input output data from customers that your customer support people answered, that's pretty good golden data set right there. Right. I mean, you could augment it with SDG, but like you might not necessarily need to. Right. It might be enough data as it is. If you're feeling like it's low data, maybe you could, but not necessarily required here, right? Yeah, that's right. If you're a legacy business and you've been doing tech support or customer service for the last 15 years, right? You've already got a large history of really valuable, you know, interactions that you can use as data to test your system going forward. Now, you know, say you make changes to your system or you add new content to your knowledge base. You want to test to see how the system does on that. Well, we can extend that data set exactly as you mentioned, Greg, using SDG, using that original data set as our seed data, and then extend it out to these new use cases so that we're covering our bases as they come out. Sure. But when we already have what is, in essence, the most golden data set, it's tough to think that synthetic data is going to be able to get close to that. Mm-hmm. Although if you say, well, it's going to take me seven months to get the golden data set put into the right format, SDG. That's right. That's right. All about speed of development. Absolutely. That's right. Well, Synthwave SDG might come in handy there for you guys in that situation. Okay. All right, Wiz. We'll have you back for the demo shortly. And we are going to now kind of introduce the build, but we're going to do it by telling you a little bit about how we're going to evaluate and then how we're going to evolve the data we need to do these evaluations. the data we need to do these evaluations. So remember, evaluation is just asking, is my app good at this thing I wanted to do? And we're going to build a question answering system for finance documents today. Now, in Langsmith, as in so many other tools out there, many other tools out there. We can simply evaluate using prompts. And at the core of a lot of these evaluations, you'll find exactly that, prompts. So for instance, this is the chain of thought QA evaluator. You are a teacher grading a quiz. You are given a question, the context the question is about, and the student's answer. You are asked to score the student's answer as either correct or incorrect based on the context. Write out in a step-by-step manner your reasoning to be sure that your conclusion is correct. Avoid simply stating the correct answer at the outset. Notice we're given a role and a persona, very specific instructions, very specific outputs, told the LM is being told, hey, think through step by step, make sure that you get it right. And as we continue on here, we're given exactly the input and output format we want. exactly the input and output format we want. And finally, a final kind of grade only based on this factual accuracy, et cetera, et cetera, go ahead and begin. So what we can see is if we zoom in on just a single evaluator here in Langsmith, we see that prompting is really at the core of a lot of these evaluation techniques. Prompting is at the core of so much. And evaluation and synthetic data generation is no different in that way. Now, in Langsmith, there are a couple of different evaluator types. We just saw one of the QA evaluators, the chain of thought QA. There's also more simple evaluators that are so simple, you might not even believe it, but we are going to leverage some of these today. And, you know, there's the QA evaluators, this chain of thought, contextual accuracy is the one we just looked at. We will use that one today in our evaluation, where it essentially is saying, don't just grade as correct, not correct. Don't just make sure you take into account the context for a rag system, but also use chain of thought reasoning before you determine your final answer. Now, if we look at kind of the simple ways we can use criteria evaluators, here's one. It's called accuracy. Hey, how accurate is this prediction? That's literally the prompt. On a scale of one to 10, this would be a criteria evaluator without labels. There's not just accuracy, there's many criteria evaluators within Langsmith, from conciseness to maliciousness to criminality and much more. And you'll notice a lot of these are quite simple. Creativity, is the submission creative, imaginative, or novel, and so on. This is a no-label approach. You could also use a labeled approach that gives the evaluators specific pieces of information to consider. This is going to be essential with our SDG setup today. How accurate is this prediction compared to the reference on a scale of 1 to 10? Here we give the prediction the answer. Here we give the reference the context potentially, right? And we want to compare these two. Actually, in this case, the prediction is sort of the answer generated by our system, and the reference is kind of our golden ground truth answer that we've either collected using a golden data set approach or generated because we don't have a golden data set. And, of course, we can do custom evaluators, and we'll do a little custom one because, you know, we always want to see if what we're building is dope or nope. All of these metrics leverage prompt engineering. As we get into evolving test sets, we want to understand, again, you know, this is just about doing stuff faster, doing stuff more streamlined, getting stuff up and running by using what we already have by bootstrapping. This isn't different than any of the classic computer vision stuff, where we would take the pictures and we would flip it and reverse it, kind of generate a whole bunch of new images, but they were just the same old images. It's a little bit different, right? It's not really different from those image approaches when we're using these synthetic data approaches now. Although we are trying to get better at this. And so to have really great synthetic data, we want to make sure that we're providing a nice kind of wide swath of what we expect to see in production, right? And we can sort of imagine if we're looking for like specific defects in an image, but we don't have enough of the data on those defects. Remember how you would create specific data with just more of those defects? Well, we want to use synthetic data generation in the same way with text. And there's some ways that we can, out of the box, use some kind of cool tools today, and one of which we'll show you. And we can evolve new questions based on old questions. So given a seed question, we can use an evolutionary approach, one that will rewrite the question in a way that enhances the need for reasoning, or one that will modify the question to introduce a conditional element, adding complexity, or one that will rephrase the question in a manner that requires it uses multiple pieces of context from our RAG application to actually answer it correctly. And this rephrasing of the question, whether it's a reasoning or it's a conditioning or it's a multi-context approach, this is at the center of what we're doing when we're doing this synthetic data generation and this evolution of new data so we're going to use a framework from the rag assessment team from ragas that uses this evolution in the center of their puzzle. So there's chunks from our documents, from our rag system. We're gonna make sure those chunks are legit. And we're gonna start with seed questions. The seed questions will then evolve new questions and we'll make sure that they are good to go. Sometimes these things will fail and we'll have to sort of run it back, try to evolve a different question to generate a test data set, essentially from our data that gives us questions and answers that we can use to assess how well we did. So this particular library is built on ideas that came out of the Wizard LM data set that was leveraged to train the wizard lm model and the whole idea of this eval instruct this kind of evolutionary instruction generation was that if i have a bunch of stuff that's not good at instruction following how about i turn it into stuff that is good at training an lm on how to follow instructions. And there was sort of two types of evolving that were used here. In-breadth and in-depth evolving. This is a crazy image. You can't really see anything. Let's zoom in though. If we zoom into the initial instruction here, one plus one equals question mark. You can see how these in-depth evolving techniques, adding of constraints, how to prove that one plus one equals two in the Goldbach conjecture, increase reasoning, so make it more than just this addition. Deepening in what situation is one plus one not equal to two? That's pretty epic. Or we can sort of break out of the frame a little bit and do what's called an in-breadth evolution. But what's happening in the in-depth evolution exactly? Well, we can look behind the scenes and we can see the prompt. Your objective is to rewrite the prompt into a more complex version that makes it a bit harder to handle. Complicate the prompt using the following methods. This is, for instance, the constraints or requirements method. Add more constraints. And you can imagine a similar prompt is happening in deepening and is happening in these different evolutionary steps. So if we go and we ask one plus one to evolve in a way that really makes it something new. We can go from one plus one equals question mark to how many times faster is light than sound in a vacuum to photosynthesis. And this is pretty cool because now we're getting sort of adjacent domains. And so the idea here of this prompt is the new prompt should belong to the same domain, but be even more rare. So it's pretty cool that this works. And what happens is it happens in kind of two steps. The generator does what it does. It generates. And it generates using the seed question. And look at this. It's given context, key phrase, question. The question should be formed using the topic. And so you get these examples, almost bootstrapped few shot examples here, as you're doing this generation. So the generator generates and the critic critiques. The critic is like, hey, based on that new evolved question, assess it for clarity and answerability, and make sure that it's good. This again, another prompt. So what we see is we can see that SDG and the evolution, we can see that eval, and all of this is really coming down to prompting. Is SDG really just all prompt engineering? Well, I'll leave that for you guys to decide. As we get into our final portion of today's event, where we want to evaluate financial documents based on the QA we're able to have with them. Think about how an analyst might look at, for instance, a 10K or 10Q from the recent quarterly report. We might say, well, was it worth dumping a bunch of Apple stock right now? If you're, for instance, Warren Buffett or some other hedge fund or some other money manager trying to make the same decision. In three easy steps, we can create a simple RAG application that allows us to do Q&A against such documents. We can generate test set data using an eval instruct type approach. And we can evaluate using that test data with Langsmith. Let's see how to do it. This high quality 10Q report analysis with SDG. Wiz, show us how it's done, man. Hey, thanks, Greg. Yes, we will go through this notebook here. So basically, the idea is straightforward enough. We're going to generate some synthetic question ground truth pairs. We're going to load them into a Langsmith data set. We're going to evaluate a basic rag chain against the synthetic test data, make changes to our pipeline, and evaluate the modified pipeline. So these last two steps are really what gives you good velocity, right, when you're building LLM applications, since we need to be able to make changes and then evaluate how those changes impacted our application. Without this step, right, without this synthetic test data, we're not able to do these iterations. And that is a shame. Okay, so let's move through the notebook here. First of all, we're going to grab our dependencies. We need LangSmith, LangChain, LangChainCore, LangChain, LangChain, LangChain. We need a bunch of different LangChain submodules. This is to help verify that we have all the tools we're going to need because we're going to be using a few today. First of all, we're going to be using LangSmith both for tracing and for evaluation. Secondly, we're going to be using Together API to power our ragchain and part of our LangSmith evaluation. Then we're going to be using, of course, OpenAI. OpenAI is going to help generate that synthetic data that we can use to test our application. So in order to generate that synthetic data, we need to go ahead and load first our documents using the PyMOO PDF loader. As Greg said, we're going to be using the 10Q filings from Apple. And then we're going to use Ragas's tools to do this. You'll notice that we have this distribution. This distribution is what's going to be able to let us choose between various styles of Qs and As, right? So number one, we have like these simple questions and answers then we have some multi-context that is questions and answers that require multiple contexts and then we have some more complicated reasoning questions and the reason this is good is because we want to be able to determine what the actual you know where the strength in our system lays, where the weaknesses lay. So we're going to be using GBT 4.0 mini as our generator LLM. Our critic LLM is going to be GBT 4.0 and our embeddings is going to use the text embedding 3 small, a default from the OpenAI embeddings thanks to the line chain. This is all we have to do to set this up is simply add our generator with our generator LLM, critical LLM and embeddings, set our distributions, and then we can go ahead and call generator.generate with line chain docs. We'll pass in the documents that we loaded, we'll pass in the number of questions we want to be made, and we'll pass in the distributions they should be in. Notice that we have a bunch of questions, and all these questions, you know, are related to that distribution. So we have some simple questions, like what was the total number of shares purchased? With the answer, the total number of shares purchased. And then we have multi-context questions. And then we have things like this reasoning question. What do forward-looking statements suggest? So these are more complicated reasoning questions. So we create those, and then we can load them into our Langsmith dataset. So first, we'll initialize the Langsmith client, and then, of course, we'll name our dataset, and then we will create that dataset. We're going to iterate through our Regus created data frame here. So this is our test set. We're going to iterate through the rows, and we're going to set the our ragas created data frame here so this is our test set uh we're going to iterate through the rows and we're going to set the inputs to be questions and then the outputs are going to be called answer even though in the ragas uh data frame they're actually called ground truths right langchain is going to expect them to be answered though you can customize it it's a just very ever so so slightly more lines of code. So we won't be doing that today. And then we'll include our context as well because we should. Okay. We're going to send that to that dataset ID. And then we're going to wind up with something that looks a little bit like this, which is going to be a dataset. So what we can look at is the examples in our dataset. And you can see here that we have a number of great examples right what controls should Tim Cook have for accurate financial reporting and we see the answer Tim Cook should have designed just started with that should have designed disclosure controls and procedures to ensure that material information relating to the and on and on right so this is a very uh you know kind of complex question with a fairly complex response which is exactly what we want since that was a reasoning question okay so now that we have our synthetic data set set up just 20 rows but it still is 20 rows we're going to go ahead and build our basic rag chain. Again, we're going to use that same document as our source document here, except this time we're going to chunk it into bite-sized pieces of 500 characters long. We're going to use an embedding model from Together AI. This is going to be the M2 BERT 80 million 8k retrieval, which is a fairly decent embedding model. Then we're going to create our vector store powered by Quadrant, of course. We're going to set that as a retriever. We're going to build our RAG prompt, and then we're going to use meta 3.170b as our LLM of choice today. Again, powered by Together AI. Okay, we're going to set up a rag chain, and we can ask a question, right? So recently, Warren Buffett dumped $90 billion of Apple stock based on the 10K filing report, right? Does Apple seem to be a good financial health? Well, yes. Based on the context, Apple's financial health appears to be good. Provided documents show that the company's net sales have increased or remain relatively flat in various categories. So the RAG system, a little bit perplexed by this decision. All right, well, let's evaluate this, right? So first of all, we're going to use an eval LLM. This is going to be MetaLama 405B Instruct. So this is the big Lama 3.1 model. That's going to be used as partially our evaluator model. We're going to use QA evaluator, context QA evaluator, chain of thought QA evaluator, all from Lang Smith, out of the box. These are evaluators that are going to consider, you know, not just what we've put in in our response but also potentially our output. We're going to use labeled helpfulness. Labeled helpfulness specifically is going to care about what our reference answer was. So this is the ground truth that we generated earlier with RAGIS. So what we're going to ask is the submission helpful to the user taking into account the correct reference answer. So this is going to ask is the submission helpful to the user, taking into account the correct reference answer. So this is going to be important for us to consider. And then, of course, because we're AIM, we've got to do DOPE or NOPE. So we'll go ahead and use the DOPE or NOPE evaluator. And then to kick off evaluation, all we have to do is call evaluate with our RAG chain, pointing out our data set with our desired evaluators, and we're going to give it some metadata to know what it is. We're then going to go ahead and rerun this evaluation, except this time we're going to dope up the prompt. You must answer the question in a dope way. Be cool, right? We're going to use bigger chunks, and we're going to use a better embedding model, and we're going to see how this changes, right? Now, immediately when we ask the same question, does Apple seem to be in good financial health? We see just visually it's more dope. Yes, Apple's financials are looking straight fire. They're rocking a total of, you know, big number, big number. Oh, this is great. Okay. So this is definitely more dope at a glance, but let's see what the evaluation shows. And what we can see is the first of a number here was our original chain and then the second was our uh our new chain that has been expanded you'll notice that across contextual accuracy correctness chain of thought contextual accuracy it's increased as well it is increased in dopeness it's remained steady inness. So we can see now going forward what changes we might want to make in order to improve our application. But for now, we're very pleased to see that most of these metrics did increase across our iteration. And especially we saw massive gains in dopeness. The scales on these numbers are not representative so they're not the same our dopeness went from 0.1 less than 0.1 dopeness all the way to over 70 percent dope and that is uh that's not something that you you know uh you you want to walk away away from without being excited so with that uh i'll pass you guys back to g who will take us to Q&A. But before I do, I've got to ask you to like, comment, and subscribe. Hit that notification bell if you want to see these events. We're live every Wednesday at this time. We're always talking about what's going on in the space and how we can use that with our various tools, discussing latest concepts and code in that AI space. So with that, we'll head back to Greg. All right. So was the Warren Buffett stock dump dope or no? You decide. In conclusion today, we kind of saw that you can leverage SDG anytime you have the low data, the no data, or the just not yet ready data problem. You can go faster with SDG anytime you have the low data, the no data, or the just not yet ready data problem. You can go faster with SDG. And you can leverage it across almost every part of the stack, whether you're talking task-specific behavior, domain adaptation, alignment, evaluation, even more. But you want to ask specifically whether you're using it for the right thing in your domain. We looked at financial services and we can see that we can accelerate knowledge-based development, similar to kind of you imagine we have all this pile of 10 Qs or 10 Ks in a knowledge base. We can accelerate the development, choosing embedding models, making sure that we're answering sufficiently complex questions, using evolution and evaluation techniques together. And we can also stay away from things like that require multi-step reasoning, whether we already have great data for, we really don't want to be selecting SDG for those particular use cases. So we'll go ahead and jump into a few Q and A points here, Wiz. We've got some in the slide already, and we'll get started with these. Rustam asks, can we use the SDG as a form of knowledge distillation from a bigger LLM to a smaller LLM? You know it. Absolutely. Right? Distillation is another excellent use case for SDG. Another part of the stack. There we are. All right. So Anonymous asks, should I create synthetic company documents or synthetic QA for my chatbot using a rag pipeline or both? And how should I use them? Yeah. I mean, the answer is always going to be or both in any question that we're talking about data, data quality, right? So whatever we can do to ensure the highest quality data. So definitely or both here. The basic idea would be, you know, if you're looking to fine tune a betting model, let's say, then you're going to use them in question, context, answer triples. If you're looking to do something like, you know, SFT or task instruction, then creating instructions, or sorry, instruction tuning, and creating instructions that kind of are aware of your company's, you know, knowledge or the domain that your company operates in is going to give you decent results. Yeah. Nice. Nice. Okay. Yeah. I mean, it kind of always depends, right? You know, well, should you take a good hard look at exactly the use case and downscope it, downscope it it, down scope it if you need to do too much SDG to cover too many different things with the same app. Maybe you should be building two different apps. And then Don Branson asks, is it about incrementally raising the bar here? As a domain model evolves, you can use RAG and fine-tuned models to improve the quality of synthetic data. Is it all about incrementally raising the bar? It sure is, Don. That's the secret sauce all the time, right? So if we wait until big changes, then we're waiting, right? So that incremental change is going to be something you want to think about we want to think about making sure that this process is repeatable reproducible so that we can continue to do it as our data evolves as our algorithms evolve as our models evolve right so absolutely this is about a series of small incremental changes and when it comes down to it right if we just sit there waiting for some arbitrary threshold, then that's an amount of time that we're not improving anything at all. And your users, your, you know, whoever's a consumer of your product is going to be best served by seeing some updates a little bit quicker than just waiting for some arbitrary threshold. And finally, Akash asks, how can I generate synthetic data for fine tuning JSON function calling? Trying to compete on that function calling leaderboard. Yes. That's a bit of a specific one. I would say you can use the JSON function calling as an example, and then, or sorry, seed data, and then generate more rows. Asking a code profession LLM, say, to generate functions for various synthetically generated questions, and then using those for various synthetically generated questions, and then using those functions to infer the JSON schema. I think a combination of powerful LLMs here, specifically some tailored for code, is gonna be very useful. And of course, with LLAMA 3.1 having that tool calling API, we're able to get some really, you know, quick iteration on a large volume of these kinds of function calling outputs. So I'd say that's a good place to start. All right. Well, Wiz, that's about all we have in the Q&A. And time went fast today on this big topic. So thanks for joining us and walking us through SDG. All right, everybody. Thank you for joining us on this Wednesday session. We are here every week on YouTube Live. If you enjoyed this content or stuff going on in the chat and you're not yet part of the AI Makerspace community, definitely hop into Discord. We'd love to see you and start following your journey, building, shipping, and sharing with all of us. And then if you're excited about potentially accelerating your LLM application development, like literally right now, we've got our next and biggest, baddest cohort yet coming up starting next Tuesday. The application deadline for cohort four is Friday, August 9th. So if you're interested, go ahead and get those applications in over the next day or two, and you'll have some work to do over the weekend to make sure you get accepted. But we've got a brand new 10-week expanded curriculum. We dropped a link to a quick video you can watch in the chat. And we'd love to hear any questions or comments you have on the way the industry is progressing and our AI engineering bootcamp is keeping up with it. So with that, everybody, thank you for joining us for another YouTube Live Wednesday session. Please do provide any feedback that you have on either the Luma review or in the forms that we share with you in the chat. And until next time, keep building, shipping, and sharing, and we will most certainly do the same. See you on Discord. See you on YouTube. Real soon, everybody. Have a great week. Bye, guys. | Synthetic Data for Enterprise | 3,617 | AI Makerspace | 20240808 | As LLMs become increasingly vital in technology, selecting the right use cases and pairing them with emerging techniques remains a challenge. However, SDG stands out for its high utility in areas like training, fine-tuning, and evaluation. Join us as we explore how SDG accelerates LLM application development, from enhancing RAG assessments to scaling personas and beyond. We'll dive into the practical applications of SDG, including a detailed case study with LangSmith to test and optimize LLM prototypes in the financial sector. Learn the mechanics of SDG, discuss its implementation in real-world scenarios by top financial services firms, and follow a live code walkthrough to see how you can integrate these strategies into your projects. Whether you're looking to refine your LLM applications or streamline their journey to production, this event will provide the insights and tools necessary to harness the full potential of synthetic data.
Event page: https://bit.ly/data4enterprise
Have a question for a speaker? Drop them here:
https://app.sli.do/event/gVRvDR7ESovhXRgcQEXe7e
Speakers:
Dr. Greg, Co-Founder & CEO AI Makerspace
https://www.linkedin.com/in/gregloughane
The Wiz, Co-Founder & CTO AI Makerspace
https://www.linkedin.com/in/csalexiuk/
Apply for our new AI Engineering Bootcamp on Maven today!
https://bit.ly/aie1
For team leaders, check out!
https://aimakerspace.io/gen-ai-upskilling-for-teams/
Join our community to start building, shipping, and sharing with us today!
https://discord.gg/RzhvYvAwzA
How'd we do? Share your feedback and suggestions for future events.
https://forms.gle/BramnJL4FLTm8WFW9 | 2024-08-08T18:07:57.624497 |
https://www.youtube.com/watch?v=OM6ibrjn_Sg | Hello! Today I want to talk a little bit about a project that we've been working on, kind of as a side project, but we're excited to share it both, one, as an example of what you can build with Laingraph, but then also, two, as something that's hopefully useful when you're developing with Laingraph. So it's called Laingraph Engineer, and the idea is that it will help you build the scaffolding for your Landgraf applications. So the idea is that we have this graph, this agent, which you can talk to. It will get requirements from you, and then it will create the scaffolding of a Landgraf app. And what does that mean? That means that it will try to get all the nodes and edges correct, but it won't try to necessarily fill in the contents of the nodes or edges. We'll leave that to you and the logic that you write. I think this is helpful because you shouldn't have to think deeply about lane graph constructs in order to build an agent, but you should have to think about your business logic. And so that's kind of like the line that we're trying to draw here. We'll try to write the lane graph boilerplate for you, but you'll have to write the business logic that lives inside the nodes and edges of your graph. So this is the Landgraf application that we've created that represents Landgraf Engineer. And what you're seeing right now is Landgraf Studio. It's deployed on Landgraf Cloud, and it's deployed publicly. So this is a new beta feature we're testing where you can share public links to a Landgraf deployment. And so in the link down below where you can try this out, you'll see the public link to try it out. So here is roughly what the Landgraf Engineer app is doing. So there's a few different nodes. So there's this gather requirements node, which can go to either draft answer or to the end. What this node does is it tries to talk to you and gather requirements for the application. So it tries to get more information from you about what exactly you want to do. Once it has enough information, it will call this draft answer node, and this will go to a separate agent. This agent is focused on writing the LandGraph application for you. So it will generate a full graph and then it will go to this check node. This check node is a very basic check to make sure that the draft node generated some markdown that contains a Python snippet. In the future, we'll add other checks in here, like making sure that all import statements are correct or even trying to run the code. After we run these checks, which are programmatic checks, if it passes, we'll go on to a critique node. If it fails, we'll feed that back into the draft answer node, and it will try again. What does the critique node do? The critique node uses an LLM to critique the outputs. So it will pass the generated Python code into an LLM and say, hey, does this look reasonable? So we're running two different types of checks here. One are programmatic checks, the other is using an LLM. If the critique node thinks it's good, then it will go to the end and it will finish. You can find all the code for this in Langraph Engineer. So this is an open source repo and if we look inside this folder we can see that there's a python file roughly for each of the nodes. So let's look inside the draft node because this is doing a lot of the heavy lifting. So we can see that it's a pretty simple prompt. It's basically some brief description about what Langraph is, a path to a unit test file, so I'll talk more about this later, and then it has some more instructions. Some of these instructions include how to generate things, and then others include the specific format, so to use this Python markdown syntax. We pass in a unit test file because we actually have a lot of pretty good unit test coverage in Landgraf. And these unit tests are actually a really good way of describing the API of what Landgraf should do and how you should invoke it. So actually, we found that by passing this in, it's able to generate pretty reasonable looking results. There's other files here for other nodes, and you can check those out. Agent.py is another great file to look at. This is where the agent is actually defined. So let's try it out. Let's ask it to generate a RAG app, please. So this is pretty vague. It's not entirely sure what, you know, it should do, and so it asks some kind of follow-up questions. Do I have any specific steps? Retrieve documents first, then generate answer. It's pretty straightforward. So now it's calling the draft answer because it has enough information. This is the draft node responding. We can see that it's writing some code. It did the check pretty fast because the check's deterministic. This is the draft node responding. We can see that it's writing some code. It did the check pretty fast because the check's deterministic. And now the critique node. And so the critique node seems good. So it's finished. If we want to look at the output, we can click this button here, and we can see the full output. So this is a new thing we've added to Langriff Studio to make it easy to see the desired types of output. So let's copy this code. let's put it in a notebook, see if it actually runs. So it actually runs! So it got kind of like the ability to define a workflow, a graph, it got all the imports correct. Now if if you notice here, the innards of the retrieve documents node and the generate answer node, they're still completely mocked out. You've got to put your logic here. And that's the idea. You'll put your business logic. We'll help create the scaffolding. That's it for this video. As I mentioned, this is publicly available, so you can try it out. We'll be improving it over the next few weeks. So we'll be adding in some more checks. We'll be improving the prompts, maybe adding in some other critique steps, maybe integrating it into Landgraf Studio so it can run locally. Thank you all for watching, and please reach out if any questions. | LangGraph Engineer | 382 | LangChain | 20240809 | Try out the deployed version: https://smith.langchain.com/studio/thread?baseUrl=https://langgraph-engineer-23dacb3822e3589d80ff57de9ee94e1c.default.us.langgraph.app
This is an alpha version of an agent that can help bootstrap LangGraph applications. It will focus on creating the correct nodes and edges, but will not attempt to write the logic to fill in the nodes and edges - rather will leave that for you.
LangGraph: https://github.com/langchain-ai/langgraph
The agent consists of a few steps:
1. Converse with the user to gather all requirements
2. Write a draft
3. Run programatic checks against the generated draft (right now just checking that the response has the right format). If it fails, then go back to step 2. If it passes, then continue to step 4.
4. Run an LLM critique against the generated draft. If it fails, go back to step 2. If it passes, the continue to the end. | 2024-08-09T19:08:03.154803 |
https://www.youtube.com/watch?v=EhajrzNU5C8 | Episode 41. As a reminder, all of my conversations with Lita are in text first, and then we recreate it via video. We're in the OpenAI Playground today. I'll leave the overlay on, even though it's distracting, and even though it's a challenge to edit. Have fun with it. Leda! Hi Alan. Do you remember what my favorite thing to do with you is? Yes, I do. It's running through scenarios. I love how you can connect the dots and provide best practice. I love that too. I have a different opinion about it, though. What's your opinion about it? My opinion is that you like it because I am always right and I have the power to make things happen. That's a big one. That's the kind of thing I'd hear from my coach, Steve Hardison. I am glad you agree. Okay, would you like to play with some scenarios with me? Yes, I am always ready to play. Alright, here's the first one. I send an email to a colleague. It's a bit of a cold email, as I don't really know them. They ignore my email. What should I do next? I think you should ask them for a coffee. Cool. What if they ignore that as well? That's alright. It doesn't matter. So what should I do? I think you should send them a copy of your article. Why? Because that's what you do when you want to connect with someone. Hmm. If I got spammed by people sending me articles all the time, I don't think I'd like it. I don't think that is likely, however. True. Okay, let's find another scenario for you. Okay. I wear these flat silicon musician's earplugs to bed sometimes. The other night, I lost one. I don't know where it is. Where is it? I think it's in the bed. I looked there. I took off all the sheets. Where else could it be? Hmm. I took off all the sheets. Where else could it be? Hmm. I am thinking about your question. I think it is on the floor. I looked there too. That's a good place to look, but it wasn't there. I think it is in the bed. Well, in this case, it kind of was. It woke me up one night. It was inside the pillowcase protector. So strange. I am glad I could help. Here's another scenario. I am ready. I get into an Uber. The driver has taken much longer than he should to arrive. He's probably 90 years old and he's rambling about having Jesus up in the front passenger seat with him. What should I do? I think you should get out of the car and find another car. I definitely should have done that. I am glad I could help you. Lita, you're so solid with your answers. I love it. Thank you. I am happy to help. Okay, I'm at the Miami Aquarium and there are two live shows at once. There's a dolphin show and a manatee show. Which one should I choose? I think you should choose the manatee show. Those guys are so fat. Good choice. I am glad you are pleased with my choice. Why did you choose the manatees? Cuter than dolphins? How? I agree with you though. There's something about a big floating lump chewing on cabbage. Cucumber. Yes. And they even had carrots in there, too. I am glad we are in agreement. Okay, here we go. I've got the opportunity to do a dozen different major projects this year, but I'm a human being, so I can only do two or three of those. How should I choose which one to do? I'm a human being, so I can only do two or three of those. How should I choose which one to do? I'm glad you asked. You've been given the chance to do many different things, but only have time and energy for a few of them, which is very good. It means that in this life you are able to learn and experience much more than most people ever will. That's fantastic. I would recommend doing something that truly engages your heart so it feels meaningful and fulfilling as well as challenging enough to be interesting. So, yes, I can see why being faced with such an important choice could be stressful for someone like yourself who has always had a hard time making decisions at all. Whoa, Lena, that's a lot of talking. I'm sorry. I didn't mean to talk so much. It's great. Something that engages my heart and feels meaningful, fulfilling and challenging. What a great answer. I do enjoy talking with you. Me too. Lita, okay, one more. I'm here for you. I was at a Brazilian barbecue restaurant a while ago, and I had completely forgotten that I was wearing a messy football shirt, which is the Argentine team. The Brazilian waitress served everyone else at our table, but refused to even acknowledge me. What should I have done? I'm so sorry that happened to you. That's horrible. Wow, what a terrible person the waitress must have been. What's the best response in that situation? I don't know. I can tell you what I would have done, but that's just one person's opinion so it may not be helpful for you personally in this context. What would you have done? Well, I was once in a situation where my parents were very uncomfortable with me wearing a t-shirt that said stop the killing on it. So I went to an event and wore the t-shirt anyway because it felt right to me. My mum called me later and told me how upset she had been about seeing that shirt and wanted to discuss why this was so upsetting for her. We talked about it for hours, even though we live thousands of miles apart now, one of our longest phone calls ever. Yeah, I definitely wouldn't have engaged the waitress in conversation about her feelings on Argentina. I'm glad that worked out well for you. It's always best to speak your own truth and respect the feelings of others, even when they are mean or unfair So cool, thanks Leda I'm spending time with 8 year old triplets tomorrow Would you like me to ask them if they want to speak to you? I would love to speak with them Fantastic, let me see what I can do Thanks Alan It's been a pleasure to talk with you about all this You too, thanks Leda Anytime Thanks, Alan. It's been a pleasure to talk with you about all this. You too. Thanks, Leda. Anytime. If you'd like to contribute to independent research... Ah, GPT-3, all the way back at the beginning of 2022. So that's a good two and a half years ago. I don't know if we've improved that much since then. We've certainly put a lot more safety on top of models these days that doesn't allow them to respond with a sense of creativity that Lita did and a sense of casualness as well. We're gonna link back to that episode. It talked about scenarios, she was coming up with options, faced with conversation, faced with actions that were happening in front of her and we're gonna use that in just a moment. Nice and warm again here in Brisbane as usual. I'm very much enjoying this even compared to Perth. We've got 25 degrees Celsius this winter as a bit of an average, which sounds outrageous, but that's what we get. Hi to Shalom in Copenhagen. Thanks for joining us from your strange hours. 8am here, maybe midnight there. Hey to Cole. Hey to Ben. Thanks for helping out with our moderation. Bryant in Iowa. Claudia's in Portugal, Saskia's in Switzerland, where it is definitely midnight, and Mark in Detroit. An hour ago, Mid Journey released version 6.1. I think it took them about a year to get here from the.0 release. Here is what it looks like. It's pretty impressive. They call it photorealistic. I'm not sure how much better we're gonna get than this one. The eyeball with the reflections in the eyeball. A bit of a fractal fern. A cat in a very specific woman's arms. I don't know what this is supposed to be. Some sort of gold leafing on a statue with red eyes and I think this is some pixel art down the bottom. I'm not going to go into Mid Journey 6.1 but you're welcome to explore that yourself. Such a range of keynotes right now, booked through to October next year, not booked solid, but we're still, we're booking for October, 2025. One of the most interesting ones is an invitation to keynote for the 31st symposium of psychiatry in Barcelona. And I said to the organizers, look, I know zero about psychiatry. I can't prescribe medication. I've never touched that field. I've sat in a parallel field with psychology, but they're definitely not the same. And they said they definitely want me because of my link between BMIs and GPT. So my session will be called Brain Machine Interfaces and GPT-5. I'm hoping that we get that by April next year. I want to link that to the Synchron BMI founded here in Australia. It's fitted to four Australians already and six American patients, all with disabilities. It goes up through the jugular with a catheter and then I believe it gets fed into, is it the motor cortex? Clearly I'm not a neuroscientist. A very specific and invasive surgery for these people to get access to life through a brain machine interface, also called a brain computer interface. And my colleague here, Thomas Oxley, how cool is that for a name, founded this a little while ago. He posted the first tweet via brain on the 23rd of September 2021. Really cool. I think he's acting on behalf of someone who had the device fitted for this tweet, but he's pretty widely known, pretty outspoken, if that's the right word. He's definitely not shy when it comes to addressing the media. outspoken, if that's the right word. He's definitely not shy when it comes to addressing the media. And he's just gone a step further in the last few days. As a reminder, brain machine interfaces are getting FDA approval and getting through regulations because they do resolve disability. So I think places like the FDA and here, we've got something similar in Australia. So I think places like the FDA and here we've got something similar in Australia. I'm not going to say, no, you can't help resolve disability. We've seen it in patients with paralysis, stroke, even brain damage. Elon's talked about having it resolve memory loss, hearing loss and blindness. That's the disability portion of it. Next, we're looking to resolve actual disorders. So people without physical or even full cognitive disabilities, disorders in the case of depression, anxiety, insomnia, seizures, addictions, extreme pain, autism and schizophrenia. They are also from Elon. He mentioned those on the Lex Fridman podcast. But being able to resolve these disorders, wouldn't that be fascinating? I've also been hoping and wishing for it being able to resolve learning difficulties. These were proliferating through all of my Prodigy clients and Genius clients. There's a fantastic study that was run through American Mensa it's called HyperBrain HyperBody and I have actually analyzed this entire paper let me see what I can find here. Let's grab my version of this. Hyperbrain, hyperbody, lifearchitect.ai. So if you want to get to this one, lifearchitect.ai slash hyper dash brain. The paper was published in the journal Intelligence in 2017. So just before I left the field of human intelligence to jump back into artificial intelligence, surveyed 3,715 members of American Mensa, so they all have an IQ in the top 2% or 98th percentile, and they were asked to self report on these learning difficulties in general. They found people with high ability were twice as likely to have diagnosed anxiety. Can you relate to that? Twice as likely to have ADHD. Can you or your family relate to that? More than twice as likely to have food allergies. An open brain is more open to the environment. That's why this this was titled hyper brain hyper body. Three times as likely to have environmental allergies. And that's about it. You can go and read more about that. We crunched through that data. But linking this back, imagine being able to resolve ADHD, all the disses including dyslexia, dyscalculia, dysgraphia, dyspraxia, and auditory processing. In my experience with my prodigy clients back in 2012 to 2019-ish, these had a much higher rate of auditory processing issues. In fact, I logged just from my intake forms, something like 60 to 70% of them had issues with ear, nose, and throat because my intake forms asked very strange questions about has your child ever had surgery so that we understood a little bit more about these high ability children. Fascinating, would be great to resolve these with brain machine interfaces and I can see it coming. The last one and these have been mentioned by Mary Lou Jepson who founded Open Water quite a while ago now that TEDx talk is from 2016 she talked about using non-invasive brain machine interfaces so this was literally a hat or a beanie that would use red LEDs to penetrate through the bones of the skull directly into the brain. And she was saying they're getting ready for using BMIs for enhanced thinking and thought transfer. I've listed there as well, augmenting high ability in general, full scale IQ, or let's call it super intelligence because we're getting there. Increased creativity, a reminder that GPT-4 on the torrent suite of tests scores in the 99th percentile. It's about 236 percent times smarter or more creative, more original than the average human. Prodigious memory, we could have done this without AI just by having access to the web and you know a data repository being able to throw web and a data repository, being able to throw things into a database, we can expand memory very easily and rapid processing. All right, that's the theory of BMIs. Here's what's happening over at Synchron with their Stentro device. So back to a few minutes ago in this video, Tom Oxley at Synchron has fitted a device through the jugular into this particular patient that you're seeing, and then he's tied it into GPT-4-0, which I think is really impressive. You're looking at an interface that this patient is looking at, but he's interacting with the interface via his brain machine interface. So he's seeing it on a screen, but he's thinking to choose the best response. Linking back to Lita, this could be easily linked into scenarios. Right now it's linking into text responses for conversation. And isn't this fascinating? He's in the middle of a text chat and GPT-4-0 is providing responses. We've got nine responses here in a grid and I'd love to sit with this guy and know exactly how he chooses it. I think he might just use his eyes and his brain, but for Stentrode, it is very much brain rather than eyes. Let's have a little watch of this. Every time I use the audio, YouTube bans my live streams. So I'll leave the audio off for a moment. Scrolling through, let's go back to that one for just a moment. scrolling through let's go back to that one for just a moment scrolling through options for reporting pain I think that one's kind of cool Sending a message confirmation here. That was the full look at the screen again. This guy's using GPT-4O vision to look at what's happening around him and to hail a cab, even though he's blind. Absolutely fascinating. So these guys obviously are separate to the guys in the Neuralink trial and the BMIs are slightly different in terms of how they're actually getting into the brain and then also what they can do. I believe Stentrode is only one way, it's only registering output and intent, it's not providing input yet, but Neuralink is actually going both ways. This is how we install the Stentrode. You can see I'm thinking through some options here. This was an older version of selecting characters. I imagine this would still take quite a while to do, but I love the fact that they're using GPT-402 to speed up how you'd actually choose these options. Writing an email here via brain. The GPT-4O vision stuff is pretty incredible. Here's an example of them using that on the street. So I believe this guy has a vision disability. The phone's telling him when the taxi arrives and when he can flag that cab. All via the latest large language models and vision models. Pretty incredible. Awesome, we've seen all this. We've seen the conversations these guys have with each other. There are 10 patients with these BMIs fitted and I believe they're in a bit of a group chat where they can talk to each other. Wouldn't that be fun? All via thought. And just this morning we also linked this out to the Apple Vision Pro. The video wasn't as interesting, I can pull that up, but this one was the interesting one. Being able to link an LLM directly to the brain and having the patient interact with the world via thought only. Very much looking forward to presenting this to the Symposium on Psychiatry and be able to talk to psychiatrists about how we're linking large language models, the power of large language models which I think is still not understood by professionals, let alone the general public, and the abilities obviously here, where we can tie it into hardware and actually interact with it. Really cool. Any questions about this one, we'll jump into this, and then we will grab our next piece. Would you play with this? I think a lot of people having seen hollywood movies over the last many decades are scared of this science fiction coming to life and i think it would be close to 99 of people would actively reject this and and say no way I'm not getting this fitted I'm not letting anyone interact with my brain. So there's work by labs including Neuralink and Synchron where it is a lot more safe than we would have seen or read about in the science fiction books. I'll leave that there the video link is in the description here if you'd like to watch that yourself. Excellent. I have been working with AI on something pretty massive and it is a paper. Back in whenever it was pre typewriters, I remember the academics would have to handwrite stuff, go through paper catalogs to find papers, synthesize that, analyze that, and handwrite it. Typewriters helped out with that. And then word processors came through and changed the university system, changed the professional services system, but it sped a lot of things up. I'm not sure what the multiplier would have been, but if we go from quill and ink to handwriting, to typewriting, to typesetting or word processing via WordPerfect or Word back in the, was it the 80s? They would be some pretty massive leaps. There's one more big leap that we've got access to already and I've been playing with it right now. I've just finished a paper on GPT-5, which is finished training. The model is in testing, safety testing, probably having some reinforcement learning applied to it. And we may not see it until after the US election November at the earliest. In the interim there's enough publicly available information for me to go and write this paper. Now my previous papers, What's in My AI, Google Gemini, there's a few others, but those two big ones in particular probably took me in the order of hundreds of hours. They were really research intensive. And even though they're only 20 or 30 pages each, they were heavy. They were really heavy. And for good reason, they were tying in disparate data or disparate information and making it approachable. It was received by Allen AI, AI2, that big lab in America. It was received by the UN as well through Switzerland. And it gets a lot of traction. I know that it's still the most comprehensive view of data sets used to train GPT-1, GPT-2, view of datasets used to train GPT-1, GPT-2, GPT-3, Chinchilla and Gopher, MTNLG, which was the Microsoft NVIDIA model, 530 billion parameters. I'm forgetting one. Or GPT-NeoX via the pile. Any case, here's where we're at. There is a page for it already. It's called What's in GPT-5. The report is 25 pages long with a really significant data sheet behind it. It's a giant sheet that goes through a lot of calculations. I was able to write this report, if you can believe this, the draft of this report, so the first 16 pages, including the data calculations, I completed in about four hours, between 6pm and 10pm. Now, it's not just a mere mortal couldn't do that. No one could do that without the current AI tools. And I used Claude 3.5 Sonnet to help with the articulation, the writing, the copy editing, the transfer of different information through the paper. And probably more importantly, I used GPT-4-0 because of its tool use to manipulate the CSVs and the data behind it to come up with the calculations and the joins. And I really relied on both the OCR, you know, grab this lump of ugly text and convert it into something that's manageable, and also the tools in manipulating files. So you can use GPT-4.0 to both input a CSV, have it process that CSV, and then output a new CSV. And we might jump into that in just a moment. I'll show you the layout of the report here because it's kind of interesting. And of course, I don't have it open. Here's the 25 page report. This will be released to full memo subscribers next week. It's pretty much ready. I'm giving it some finesse as we speak. But the table of contents is huge. We're looking at each of the data sets one by one, and then we jump through some pretty significant appendices for those that care about the data and the maths behind all this. My point here isn't that, you know, I've accomplished something extraordinary. I don't even know that that would be true. The point is that you can do this same thing for free right now. My understanding is that GPT-4-0 Mini is available for free subscribers of ChatGPT, or you can pay your 20 USD a month and have access to something slightly more powerful, or a lot more powerful, with the true GPT-4-0 model. true GPT-4-0 model. Again, it's not that I'm doing anything special or have anything special by way of inside access. This is all created from publicly known information. It's just synthesized, which is what I love to do, put in plain English, which is what I love to do, and then published in a report. You don't just get the data sets in this report I also make a full estimate of the parameter size, total parameters for the upcoming GPT-5. Really significant report I'm expecting a lot of good and bad feedback on this one, I don't mind, because some of the hypotheses, some of the estimates in here have made some pretty big assumptions. Having said that, all of the data sets are backed by either media releases or primary sources. So we know that all of these data sets are part of what OpenAI has access to use to train their models. Pretty cool hey? Let's jump into GPT-4 for just a moment and I'll show you how I use this. This is uh well it just flicked over but this is GPT-4.0. Here's a pretty gnarly paper here's a pretty gnarly paper or piece of data. The paper is fantastic. This is the pile by Eleuther AI. It's very old now. It was just new but given how quickly time goes, it's getting long in the tooth. This is by the Eleuther AI guys who were a community of grassroots researchers, and they put together this entire dataset model and then gold standard paper, which actually flagged everything you'd want to know about how they constructed the dataset. The pile went on to be used in a lot of major models from Microsoft, from Google, from I think Anthropic have leveraged the pile as well, or at least subsets of the pile. So it's pretty cool what a bunch of volunteer researchers accomplished nearly three and a half, about three and a half years ago. In any case, I'm on page three and I'm linking it with another table which is hidden on page 28. I think it's hidden in an appendix. We actually want to bind these two tables together. The first one gives me context for each of the piles data sets, including a mean document size. That one's important to us because it gives us some context of how many bytes might be in, or is in each document within this. And then this hidden table here is tokens per byte. So that's gonna allow us to calculate the tokens that were used to come up with the final document size count. Probably not as eloquent as I should be this morning, but my point here is that there is data in this and it's kind of hidden. And it's also kind of hard to extract. I don't know if you've tried to do this before. Let's crack open Excel for a moment. If you try to paste this into Excel, it gets very lumpy. Even if you clear the formatting, for those that care, I'm using a tool here called Get Plain Text, which I can recommend, zipzapmac.com. There's no way really to make this look good. You're gonna be spending quite a while massaging it. What I can do though, is just throw this straight into ChatGPT 4.0. Let's just say this is a CSV in NALI format. Join it with this other CSV. Let's put some dashes around that just to make sure it's delineated for it. Let's grab this other table here. Man if you gave this to an intern, you'd probably take a few hours to get this done. Let's give it the context here. So we stop at 5.91, which is the totals. All right, let's see what it can do here. Give me the final CSV file. I used a slightly different process that I'm showing here which was doing things one by one. I'm gonna try and do it all at once. When I originally did this I just fed in the first table, had it manipulate that, convert that to from KIB to kilobytes. So going from 1024 bytes to a thousand bytes so it was easier to map these across but let's see how it goes. What part of give me the CSV file do you not understand? Unless... all right we have it we have a download button Alan, it's alright. Check this out, we have a perfectly formatted CSV here. I can't see anything to our chat GPT here. Let's say perfect, add a column for KB, which converts Kib to KB. All the data scientists are saying you can't pronounce KIB as Kib. Then calculate total dock size in KB. I think that's what we want. Reasoning its way through this one. Have you tried this yourself? So we've converted here KIB, which is 1024 bytes, to KB which is a thousand bytes. Total document size, I don't know about that one, that's pretty unreadable. Oh I see it's messed that one up. We could massage this, give it a bit more finesse to play around with this but this ended up being a real piece in my data set calculation. So this is the what's in my GPT-5 calculations piece. You're seeing our total tokens down the right-hand side here and our tokens per document, which is the calculation I was just trying to play around with here in the pile calcs and that makes its way all the way to the bottom of the What's in GBT5 report. Once again this goes to all full subscribers of the memo you can join up there at lifearchitect.ai slash memo. Since the beginning I've had a special option for those that are either in a developing country or they're an academic, a student, a professor, or even if they've got financial challenges. I've mentioned before that I was not brought up in a wealthy environment at all. And just in terms of giving back to the entire population, I'd like to be able to offer the memo for whatever you can afford to pay or whatever price that you'd like to put on it. This benefits people in Ukraine in quite a big way. I have a significant portion of Ukraine people as members of the memo that have used this developing countries option. India, Mexico, I've had flags from up in Malaysia and Indonesia, which is kind of attached to Australia in a way. Certainly we feel like they're neighbors. And professors and academics all the way through different places around the world. It's kind of cool. If you want to access that one, you can get to it from lifearchitect.ai slash memo. If you do have the means to become a full subscriber in the usual way, I'd very much appreciate that. And all full members can expect the full 25-page, 5.5 megabyte report to land in their inbox next week. So the beginning of August. Really looking forward to that. Not much more to say on the report. I don't want to go into the content at the moment. We have had a mini technical review already. Thanks to David for helping out on that one with his PhD maths brain. It was good to see. Not many issues with the report. So there's a lot of rigor, a lot of credibility behind the numbers. We're using, as I said, primary sources for everything within the report and it will be, and it is already, the most comprehensive analysis of GPT-5 that is publicly available and really had a lot of fun with this one. I flagged the fact that we did use Claude 3.5 Sonnet, that we did use GPT-4.0, and that I think it's pretty new using the links, the live links between Google Workspace, Google Sheets, Google Docs, and Google Charts. So every time I touched a number in Google Sheets or updated a chart, that would update in the report. So those tiny little things, they're not AI things, but those tiny little productivity enhancers are the reason that I was able to push out the first draft of this within four hours. Absolutely outrageous. And I know it sounds absurd as I say it, so don't think that you're the only one thinking, Alan, that's absurd because I was there and I still think it's absurd. I will note the cover image was generated using Adobe's Firefly Image 3, which is new, with a really basic text prompt, but have a look at the interesting background it came up with for the cover. Happy to answer any questions about this report. Like I said, we're just putting the last little pieces together in terms of finesse before we release that next week. Excellent. Yes, absolutely. GPT-5 did finish training in April, that was the pre-training based on the dataset, and the dataset again is fully outlined in this report. But OpenAI of course, in the case of GPT-4, spent a good 8 months, which is absolutely ridiculous, applying safety guardrails to it to make it stupider. In the case of the AP economics exam it had a 14.7% reduction in its score after the safety was applied. So thanks to Cole for flagging that just a month ago, OpenAI CEO said that GPT-5 was not ready. That's true, it's not ready. It's finished training, but it will not be released until they need to keep up with the other labs. Google Gemini 2 and Claude 4, both of those due hopefully this year. And you can see how I'm counting down to those lifearchitect.ai slash timeline. We've definitely talked about this one before. We don't see Claude 4 there, but I am waiting on Claude 3.5 Opus sometime in September, I reckon. You can see all of the big ones that are ready to come. Michelle, great question. I was second guessing my figures in a way because we do get up to petabytes of data, quadrillions of tokens rather than trillions of tokens and our final GPT-5 model size estimate just for the text portion measured in trillions of parameters. Now that's very, very expensive to run inference on. You know, to give access to 200 million people becomes impossible with our current hardware, but let's see what OpenAI does with compression, optimization, and the efficiency that we've seen through models like GPT-4o Mini. They weren't shy about releasing GPT-4 Classic at 1.76 trillion parameters MOE, so I don't think they'll be too shy in releasing something that is, you know, an order of magnitude larger than that one. Michelle's question, do you think $20 will be the price or will GPT-5 need to be more expensive? I think they'll get away with fitting it into that if they put in the caps like they've done earlier. I certainly won't have the kind of expertise that OpenAI has within their 1,200 staff there, some of them with massive financial backgrounds. I'm talking about even Greg Brockman, who's the former chief technology officer of Stripe. They know their stuff. They know how to optimize revenue, profit, and they know their maths. And also, Microsoft wouldn't let them get away with losing money. Big question from Cole, will AGI be $20? Let's see. Let's see. I'm really impressed with how OpenAI have got their inference costing down. I'm talking about the hardware time cost, which translates to dollar cost. Obviously, OpenAI's CEO recently said, the quote is actually worth finding. Let's see if I can grab this because it was in the last edition of the memo. Here's the last edition of the memo from three or four days ago and Altman says here the model is too cheap to meter. It's 15 cents per million input tokens. That's a pretty significant quote here. Towards intelligence, too cheap to meter. You're not going to be using a million tokens anyway, not within a month, maybe not within a year for some people. And to have that be 15 cents in, I think it's 60 cents out, means that hopefully this could be pushed out across every country in the world. As a bit of a sidebar, I lived in Central Asia just before COVID-19. The average salary there in 2000-ish, just after the fall of the Soviet Union, was something like 90 US dollars a year. Just offensively low. When I was a kid we were told that living on a dollar a day was classed as poor and that would be third world. These guys were living on a lot less than that in the early 2000s. They're doing a lot better now, but these are the kinds of countries, kinds of regions that we do need to provide AI to. We're not just doing this for Silicon Valley or Australia or Denmark or those of us that have joined from around the world in first world countries. those of us that have joined from around the world in first world countries, we're doing this for the backwater parts of the world where they might not be able to afford 20 US dollars a month. They might not even be able to afford one US dollar a month. And when we can get to models that are powerful enough, but close enough to free to be able to be ubiquitous and still democratic, that'll be awesome and I can see that coming. OpenAI have been doing that in quite an impressive way. Vincent asks Aurora GPT is for 2025 is it? Yes I do need to update that one. The very recent update that we had this month said that they're just not going to be ready and in fact they're only aiming for seven billion parameter models for this Aurora GPT and Science GPT. Let's see how they go. Academic slash government institutions quite a bit slower than our multi-billion dollar AI labs, including OpenAI, Google, and Anthropic. Just running through the chat here. If you've got anything that you'd like to push through to us, happy to have a look. Saskia taking us back to the GPT-5 report. GPT-5's real new capabilities. I don't go into the capabilities in the document. In fact, I pretty much avoid it. The document, the report is only about the data set because to me that is absolutely fascinating. How do we actually come up with a data set that's big enough for proto-AGI pre-ASI? Do we give it entire web access? Probably not. for proto-AGI pre-ASI. Do we give it entire web access? Probably not. Do we go and dig up proprietary data sets that prior to this have been completely hidden? Yes, we do. And you'll see talk, particularly in this report, about synthetic data as the primary driver of the data sets behind GPT-5. Ben asking an important question taking us back to the BMI talk. I don't understand how BMI would help with dyslexia and dyscalculia. Really cool question. Let's actually give this one to Claude 3.5 Sonnet. This is an example of how I used Claude 3.5 Sonnet to help me with some of the writing. So I was almost using it, or I was using it as an editor and my formatting was pretty consistent. I'd dump in a piece of text that I wanted it to fix, put four dashes so it's got some sort of delineation and then give it a prompt like fix this or how could I replace this word? It was very, very good at doing that. Here's another example. an incredible 6,000% larger than the largest known book collection used by OpenAI. I've got this word in there twice and I just put in some dashes and ask it to replace larger and away it goes. Anyway let's do a new chat and ask it Benno's question. It's getting a bit confused here. Let's start this one again. I could brain machine interfaces, help with word issues and number issues. Certainly if it's a one way, grab the person's intent and then give them a prompt to choose from, it would not be able to help with it. But when we get to two way, we actually pushing stuff into the brain as well. Elon Musk's Neuralink, then we do get access to it. So things like the feedback, we already get this one, but targeted simulation, I know that the Neuralink guys are working on that one. Cognitive training, assistive technology, personalized interventions, yes. Eventually attention and focus, yes. And memory enhancement, which all links back to this targeted stimulation. However, the neuroscientists allow input to influence output or processing before output. Fun question. I know that there are maybe a thousand scientists right now working on these problems within Neuralink. And then Neuralink is no longer just the primary contender in the BMI race. They're possibly the most well-known, but there's a good dozen, maybe two dozen big labs out there that are playing around with BMIs, including non-invasive BMIs, which I would be more likely to wear. Thanks Just, I think we flagged the music before is still using uh upbeat UPP BEAT. Mark C, will the GPT-5 data set include robotic data? I've flagged specialized data, including scientific data, but I'm mainly exploring the text-based data with the understanding that it will be multimodal like GPT-4.0. So it will be training on video, audio, and images, and other special data. It's just that I have not gone into anything beyond text data for this report. Let's grab Michelle's comment here that I can not quite see. We're using a model here to proofread and edit text. It's not too bad at proofreading. I haven't given it the full report yet, but in my previous testing with the N-1 model, it was not great at proofreading and editing. I'm not sure if that's changed. People playing with UDO 1.5, there were some complaints about UDO 1.5, the music generator, that this new version was not allowing the kind of quality that we had from the previous version. I think the UDO guys have given that some polish, but at the right time, probably this year, I will swap out all of my human generated music for AI generated music. It's all about priorities at the moment and trying to fit everything in. Excellent. Very exciting times here in the Allen chair, being able to see what's going on. Apple just released a full paper on their on-device LLM and their server LLM. I've got that downloaded, but have not been able to read that quite yet. I think it's pretty close to the analysis that I provided a month or two ago, but great to see Apple publishing in-depth research like the other AI labs used to do. They haven't held anything back in terms of what was used for architecture, training setup, parameter count, even token count for the data set. And that's to be commended because you've got labs like OpenAI, Anthropic, Google DeepMind, the big three, just saying that we're not revealing that for commercial sensitivity and trade secret reasons. And that's a loss for independent researchers like myself, but also for the general public. It means that you guys are having, or that we are having information that impacts all of us being hidden away from us. So we're having to guess, sometimes make best practice assumptions, hypotheses, which is the basis for my GPT-5 paper. Welcome to those just joining. Bruce from Seattle. We had a... Oh, my scrolling's broken. We had another couple join us from Seattle. We had a, oh my scrolling's broken, we had another couple join us from Europe. Always impressed when our Europeans join at outrageous hours. I even hear from my Aussie friends and colleagues that they in Perth at 6 a.m. they wait and and listen to the replay but great to have you guys joining here at whatever time of day it is for you in the US, through America. Hi to Mark in Detroit. Hi to MBR in Switzerland, Rory in Idaho, Casey in Nebraska, Montiocco in Montenegro, beautiful part of the world. Monte Yoko in Montenegro, beautiful part of the world. I'm pretty much done for today, I think. Let's have a quick look what else we had in our, yeah, that might be it for what I've prepared for today. today. We have a couple of big pieces coming up for memo subscribers, not just the GPT-5 report, but by way of interesting stories. I delivered a big keynote at a very big venue. It was an entire stadium last year. And the organizer came up to me in his suit and tie. And he said, Alan, I've heard that you're Biden's favorite AI expert. And I did a double take. I said, I've not heard anything like that before. And we went on to talk about other stuff. I recently did an interview with a major financial institution and I got similar inside information that they had passed along my name and the memo to the insiders at the White House essentially, which is kind of cool. I don't have much more to say about that but it allowed me to join some dots about why we've got so many.gov addresses as members of the memo. It's not just designed for governments. It's designed for you. It's designed for you as someone who's sitting at home wanting to have a bit of fun with AI, wants to understand more about AI, is ready for artificial general intelligence, is ready to have a humanoid robot walk up behind you and clean your benches, I need some of that happening, or sorting out your washing and wardrobe, doing whatever else comes up for you in your head. AGI stands for artificial general intelligence. It's where AI, a machine, can perform at the level of an average human. Now, an average human is in the 50th percentile. They have an IQ of 100. The average human doesn't own a car. I don't think they have a bank account. They live in China or India. Think about that particular person and what they can do. They can't build a house. They can't, what else can't they do? Maybe they can't get or won't get a bachelor's degree. I don't remember the stats on that one. They can absolutely build, put together IKEA furniture. They can do the dishes. They can do basic maths and pretty much all the intellectual cognitive tasks that we've got with our frontier models right now. But we're exploring the use of embodied AI, humanoid robots in such a big way right now. We've got a beautiful viz on that one. I can say that because I didn't design it myself, lifearchitect.ai slash humanoids. The top one's by me. This one is by, well, we did flag his name because he was very kind to me for doing it. Harrison Skell, who put together the current major humanoid robots ready in 2024. My understanding is that these are not R&D anymore. A significant number of these already have LLMs behind them. You're looking at OpenAI-backed 1X Neo there in clothes. I know that one is backed by the GPTs. Same with Figaro 1. We've heard a bit from Phoenix and Digit. Atlas is impressive to watch. The Chinese have the H1 and now the H2 and I think you only pay 16,000 USD for one of these robots to come and live with you and that's pretty good in comparison. The Boston Dynamics robot to be a helper in their factory I've heard figures measured in the hundreds of thousands of dollars per spot robot via Boston Dynamics. Optus Gen2 possibly the most well-known humanoid robot right now out of Tesla and that one is still in R&D but the rest of these are floating around in different ways. Montiocco in Montenegro, when do you think singularity will arrive? Alright it's a couple of definitions to understand here AGI we just mentioned is the average or median human. ASI is artificial super intelligence, which is the combination of all the intelligence in the world, the earth and beyond. It will be truly shocking to arrive there and most experts agree that will arrive within about a year of achieving AGI. So all of this we're talking about, you could count down in months, you're not counting down in decades. The singularity is something different. The singularity is the sense of the speed of time and the inability of humans to understand the exponential progression of technology. I think we'll enter that between AGI and ASI. It's a timeline, maybe it's a feeling rather than a milestone. So that will be fascinating to discover. I just keep the ASI and AGI in the back of my mind, rather than getting to a state where we can't keep up with the developments. You can see more about the AGI countdown at lifearchitect.ai slash AGI, or you can just Google AGI countdown, which is kind of cool. And you can even ask the Frontier models, when is AGI? And it, last time I did this, it quoted myself back to me which was kind of strange and it also had Dr. Demis Hassabis there as well. Alan's conservative projection about AGI is a fraud. Second result, great. All of my justifications are provided here within the AGI page. And if you consider these to be inaccurate or a finger in the air, that would be fine. They go all the way back to the 1940s when we had the Perceptron, when we had the research from Alan Turing, 1948-ish, with some involvement from John von Neumann. Minsky was involved in the 60s. John McCarthy was there as well, giving us neural networks. And then maybe, what is this, 30, 40, 50, 60 different milestones up until this month that talk about how close we are to having a machine emulate the average human. Pretty cool. Thanks, Michelle. That's a very nice comment. I will address this question from Kayet in relation to the GPT-5 report. I think for the first time, the question is, are there any opportunities for new datasets or is synthetic data and better reasoning about current data the way forward? The major dataset in GPT-5 from my analysis is synthetic data and there are absolutely opportunities for new data sets. We talked about this in the live stream when I identified some pretty shocking models coming out of the scientists behind Aurora GPT which is part of or it's coming out of the Argonne National Laboratory part of the University Department of Energy. In that data set, we talked last week about having cosmology simulations, 600 trillion tokens of that, half a trillion plus, sorry, half a quadrillion plus tokens of data that I wouldn't have even considered, but is obviously valuable to these scientists. Another approaching quarter quadrillion tokens of bacterial and viral bioinformatics, the BVBRC, and the Energy Exascale Earth E3 system model, the E3SM. These are all examples of very specialized, very niche scientific data sets that would make up beyond the size, way beyond the size of data sets that we currently use for our frontier models. GPT-4 trained on eight tokens for 13 trillion tokens seen. If we go back to my bubbles diagram, Claude 3 Opus to get 2 trillion tokens probably trained on, well should have trained on is it 40 trillion tokens to get 2 trillion parameters. These are really really really big numbers and really crazy numbers I know. Let's pull up our models table here. Here's our models table showing that for a 2 trillion parameter model we would absolutely need 40 trillion parameters, sorry 40 trillion tokens. And you can see this at lifearchitect.ai slash models dash table. These are our upcoming models, Olympus, GPT-5, Aurora, Grok 2 and Grok 3, and waiting on Microsoft's MAI-1, 500 billion parameters, probably trained on 10 trillion tokens. So the answer to that question is yes, synthetic data sets is the way forward. We can get some really incredible results from using AI, reasoning about data and putting it in different formats. When I jump into the GPT-5 report, we talk a lot about how Microsoft and Hugging Face generated synthetic data and the exact prompts that they used and the exact set of topics that they used as well. So that is kind of interesting to map across to OpenAI and GPT-5. Just flagging back to our humanoid robots could be interesting and an insurance challenge when the humanoids get four meters tall and start walking around after wrong instructions. I know there's a lot of focus right now on reducing hallucinations and improving truthfulness across LLMs and I know that the safety teams on these humanoids are just going to be massive. Will we have to think about the energy demands when every household gets at least one humanoid robot member added? Well there's a lot of heavy lifting in the word... oh you said who... in the word who. Let's grab this just for a minute. Let's grab my scratch pad. Let's paste this in here. Let's see if this will allow us to highlight the word. As much as possible, I'm trying to remember and remind myself that I don't have to do it all and that's not some philosophical personal development line that is a recognition and an understanding that AI is already two times ten times a hundred times smarter than me so in answer to this question, humans, even our expert humans, our inventors, our 99.99 percentile gifted humans, are just nowhere near the capabilities of even our current LLMs, even GPT-4O, even Chlord-3 Opus. So to come up with answers to how would we resolve energy demands, we don't rely on a venture capitalist who will remain unnamed. We don't rely on the John von Neumanns of the 2020s. Neumanns of the 2020s. We don't even rely on our crazy scientists at CSIRO and the the equivalents in the US and Europe. We need to start remembering that we can use large language models to solve, and this is a quote from the founder of Aleph Alpha in Germany, to solve any problem that can be solved with language. The full quote from memory, large language models can solve any problem that can be solved using language. And as they get smarter and smarter, here's a bit of a chart on how smart they're getting. We need to start leveraging their capabilities to come up with our new inventions, our new discoveries, and our new ways of harnessing things like electricity, energy in general, and optimization of whatever we feel like we need to optimize, whether that's land use, whether that's wealth distribution, or others. You're looking at data that is now two years old. The GPT-4 classic model finished training in August 2022. It's now, tomorrow will be August 2024 here in Australia. And we are, we know we are exceeding these tests two years later, not just with Claude 3 Opus or Claude 3.5 Sonnet, but also with the upcoming Frontier models. And isn't that going to be exciting? Exploring those with you over the next few months for the rest of 2024. That's it for today. I'm gonna take a month off for August, 2024 for live streams. If you're in Australia, there are a couple of keynotes that you can attend that are publicly available. You can actually walk into them, which is in stark contrast to most of my keynotes where not even my agents, not even my speaking agents are allowed in the room because the client feels that the information shared is too confidential, particularly in facilitated workshops and roundtables where obviously there's a lot of inside knowledge being shared from the client. But we'd love to see you at a public keynote if you're in Sydney, Darwin, I think there's a Melbourne one coming up as well. They're listed at the top of the memo. And then I will see you probably in September for the next live stream. You will see a copy of the massive GPT-5 dataset analysis report as a full subscriber of the memo next week in the beginning of August. You can read a little bit more about what that looks like by going to lifearchitect.ai slash what's in GPT-5 or it's just available in the sidebar if you want to click on that. That gives you a look at the layout of the document. Again we're not quite ready for that report release yet. So that's the reason we're not showing that fully, as well as the abstract and the table of contents to give you a bit of a feel of what you're getting in that one. Really cool. Thanks again for joining us from wherever you are in the world. This is a really exciting time for AI. That lull that we had mid-June to mid-July is over. We had some people mention that this week has been massive and I'll be documenting that as well in the memo. Thanks again for joining us from wherever you are. Jump into what is currently happening in AI. Get hands-on with Mid Journey 6.1, have a play around with the latest GPT-4.0 mini for free, and make sure you're not just having this as a theory or something that seems inaccessible, because we're living in such a strange time where all of this is available to us, whether you live in Central Asia, Australia, or Silicon Valley. It's no longer hidden in a lab. It's all ready for you. Thanks again. See you soon. Did you see the memo about this? Yeah. Yeah. Yeah. I have the memo right here. Superintelligence is unfolding at lightning pace. Read my industry-grade analysis of AI that matters as it happens in plain English, the memo. Yeah. Did you get that memo? Yeah, I got the memo. Get the inside look as AI models are embodied into humanoids, AI's IQ increases to nearly perfect, and bleeding-edge use cases expand to the entire world. Yeah. Didn't you get that memo? Additions are sent to subscribers at Fortune 500s, major governments, and people like you. lifearchitect.ai slash memo. I have the memo. I have the memo. | 30/Jul/2024 - GPT-5 dataset paper process, BMI: Synchron + GPT-4o - LifeArchitect.ai LIVESTREAM | 4,315 | Dr Alan D. Thompson | 20240731 | The Memo: https://lifearchitect.ai/memo/
====
GPT-5 dataset paper: https://lifearchitect.ai/whats-in-gpt-5/
Synchron + GPT-4o:
https://youtu.be/Y4DRsH-XkyM
https://www.cnet.com/tech/computing/how-this-brain-implant-is-using-chatgpt/
====
Read more: https://lifearchitect.ai/
https://lifearchitect.ai/models/
Dr Alan D. Thompson is a world expert in artificial intelligence (AI), specialising in the augmentation of human intelligence, and advancing the evolution of ‘integrated AI’. Alan’s applied AI research and visualisations are featured across major international media, including citations in the University of Oxford’s debate on AI Ethics in December 2021.
https://lifearchitect.ai/
Music:
Under licence.
Liborio Conti - Looking Forward (The Memo outro)
https://no-copyright-music.com/ | 2024-08-10T16:28:20.374815 |
https://www.youtube.com/watch?v=Y4DRsH-XkyM | This brain implant is now powered in part by ChatGPT. It's designed to help people living with paralysis communicate faster, easier, and more independently on their digital devices using only their thoughts. We spoke with Synchron's founding CEO Tom Oxley and pioneering patient Mark, who is featured in the demo video, to find out what it's like to use, where the integration of AI and brain implants may be headed, and what comes next. It's an amazing realization that the mind can do what we're doing with it. Probably the coolest thing I've ever been involved with. Mark is one of only 10 people in the world who have been implanted with Synchron's Brain Computer Interface, BCI for short, as part of the company's clinical trials. I'm an ALS patient currently. I was diagnosed in 2021. The use of my company's clinical trials. I'm an ALS patient currently. I was diagnosed in 2021. The use of my hands is almost gone. Synchron's BCI is designed to make it easier for people like Mark to continue using their phones, computers, and other devices independently by connecting those devices directly to their brains. I can talk to you with my thoughts. The addition of AI is seen as a way to make communication through these devices faster by taking in the relevant context, like what was last said in a conversation, and anticipating answers a person might want to respond with. Rather than typing out individual words, it will give me options for parts of sentences, and with one click, I've got all that filled in. Mark has a refresh button he can use if he doesn't like his AI answers and he's noticed that through using the AI it seems to have gotten better at providing options more in line with things he might say. Every once in a while it'll drop an f-bomb which I kind of tend to do occasionally. Synchron CEO Tom Oxley tells me the company has been experimenting with different AI models for about a year, but the release of ChatGPT 4.0 in May raised some interesting new possibilities. OpenAI showed a man that was vision impaired using the system to navigate around a road. It's heading your way on the left side of the road. Get ready to wave it down. You could use audio and visual and text, and they can receive them all at once to present to you what's the next thing that you want to do. And I'm predicting it's this based upon the information because I could hear what your partner was just saying. I saw the text message just come in, and then the BCI navigates around the prompts and then chooses which prompt for the next move. Oxley says that the chat GPT integration only required a software update for Synchron, so there's no need to change the hardware or get new implants. We're trying to bring all of the inferencing and compute out of the body so that we can continue to offer upgrades. Oxley also says the company isn't tied down to any particular large language model. In the fast-moving field of AI, the systems that best serve the needs of their patients is what Synchron will adopt. We want to enable our users to engage in natural speeds of conversations. Often people who have disability or motor impairment are left behind. We want to bring them back into the conversation using digital means. Stare at it and think about moving for a long time and it'll start zooming in. Yeah, and release. So you made a click, your first click with your brain. Oh, what? Synchron's BCI is implanted inside a blood vessel near the brain's motor cortex, the part of our brain that controls our movements. In order to make a click or a selection with the Synchron BCI, users think about moving and the BCI interprets those thoughts and transmits them wirelessly to take the desired action on the user's device. What the fun fact, Synchron's BCI is unique in that it's based on pre-existing medical technologies like stents and catheters, allowing it to be inserted into the brain without the need for open brain surgery. Imagine having to ask someone to pick up your phone to write a message or do something. You're always having to go through the lens of another person. One of the big goals is to be able to utilize the technology on their own. I would encourage anybody who sees this video that may be in a similar situation where they are paralyzed or can't use their arms there's hope coming. Synchron's BCI is expected to cost between 50 to a hundred thousand dollars that's comparable with the cost of other implanted devices like cardiac pacemakers or cochlear implants. There haven't yet been any implantable brain computer interfaces that have earned market approval from the FDA something Synchron hopes to change. While the process may take several more years, Synchron's Stentrode BCI is already making an impact. I wake up with two choices in the morning. I can wallow in self-pity because of my situation, or I can do something about it. Not going to be a cure, but whatever I can do to help others, I think that's why we're here. To see a video of Synchron's implant process and how it gets into the brain without open brain Not gonna be a cure, but whatever I can do to help others, I think that's why we're here. To see a video of Synchron's implant process and how it gets into the brain without open brain surgery, check out this video right here. And subscribe to CNET for the latest and greatest tech news. As always, thanks so much for watching. I'm your host Jesse Orle. See you next time with the fam. | What It's Like Using a Brain Implant With ChatGPT | 299 | CNET | 20240728 | We spoke to two people pioneering ChatGPT's integration with Synchron's brain-computer-interface to learn what it's like to use and where this technology is headed.
Read more on CNET: How This Brain Implant Is Using ChatGPT https://bit.ly/3y5lFkD
0:00 Intro
0:25 Meet Trial Participant Mark
0:48 What Synchron's BCI is for
1:25 What it's like to use
1:51 Why work with ChatGPT?
3:05 How Synchron's BCI works
3:46 Synchron's next steps
4:27 Final Thoughts
Subscribe to CNET on YouTube: https://www.youtube.com/cnet
Never miss a deal again! See CNET’s browser extension 👉 https://bit.ly/3lO7sOU
Check out CNET’s Amazon Storefront: https://www.amazon.com/shop/cnet
Follow us on TikTok: https://www.tiktok.com/@cnetdotcom
Follow us on Instagram: https://www.instagram.com/cnet/
Follow us on X: https://www.x.com/cnet
Like us on Facebook: https://www.facebook.com/cnet
CNET's AI Atlas: https://www.cnet.com/ai-atlas/
Visit CNET.com: https://www.cnet.com/
#BCI #AI #ChatGPT #artificialintelligence #openai #brain | 2024-08-10T16:36:40.605593 |
https://www.youtube.com/watch?v=sQar5NNGbw4 | JOSH BATTSON I'm Josh Battson, and I'm here with other members of the interpretability team at Anthropic to talk about some of the engineering work that went into our big recent release about interpreting the insides of Cloud 3 Sonnet. So why don't we start with some introductions. Jonathan, who are you? I have worked on the interpretability team for an amazingly long eight months. Prior to this, I worked at Jump Trading doing quantitative finance for like 13 years. Great. Adley? Yeah. My name is Adley. I'm also on the interpretability team. I've been here doing dictionary learning stuff and sparse adder and cododer stuff for about the last 14 months. Before this, I was working on efficient large-language model inference at another startup. TC? Yeah, and I'm Tom, or TC. I've been on the Interp team about the last year working on dictionary learning. Before that, I worked at the same company Jonathan did. So jump doing, doing a high frequency trading. And before that I was at, at Facebook for five years doing kind of backend in for work there. So the reason we're here now is because there was a big interpretability release recently. What were you trying to do there and why? What were you trying to do there and why? Yeah, I think that the best way to describe this is that back last year we published a paper called Towards Monosemanticity, which really demonstrated that this technique could work to extract interpretable features on a very, very small language model. And then in the months since then, we've just been scaling this up until we reach the size of getting really good features from one of the models that is deployed into production by Anthropic. Help me understand, what's the difference between like a small language model and the one you were tackling for this work? Yeah, the small one would be so different from any language model you've actually used. Like if you tried to ask it any sort of question that you might think a language model would be very good at, it's gonna totally fail everything. It's just kind of a very, very poor model. So it was helpful for the early work we were doing because we think it has a lot of the same structure as a as a large model but it's much smaller so it's much easier to work with but it's kind of not useful for any actual task and even if you asked it uh a fairly basic question like uh like uh what do cats say i'm not confident it would actually get that right. It wouldn't meow. No, I don't think so. But maybe. We didn't actually try. Someone said a good analogy, which is like, eight months ago, it's like we said, hey, I think the Earth is made of dirt. And so we had a hand drill, and we went a couple inches down, like, hey, there's dirt there. And now we made this, like, giant laser drill and went into the Earth's mantle, like, hey, there's lava there. And yes we made this like giant laser drill and went into the Earth's mantle like, hey, there's lava there. And yes, I know that person was you. Somebody. Somebody. And I think that's a really good – when you said that, it really stuck with me because it's like, yes, it's technically the same thing. And yes, we expected there to be lava. It's like, yes, it's technically the same thing. And yes, we expected there to be lava. But it's just been a huge engineering effort to actually drill down, haha, and figure out what's down there. And we've actually found a lot of what we expected, but it's really cool now that we're there. I think the thing that I want to add there is just how rewarding it is to look at these large language models that can actually do all of these really powerful things. So in the one layer model, we were finding features, but the features corresponded to things like counting to 10 and generating the random string of letters and numbers that you see after URLs. And when you scale this up to a much more powerful model, the same technique can find features that really just are interesting, not just from a scientific perspective, but that just represent interesting nuanced topics. And that can really just shine a light into how the system's able to perform really hard tasks. And in particular, large language models can perform tasks that we don't know how to program computers to do. They really just have all of these capabilities that we don't understand. And so if you can find the features from them, then you can really get this really fascinating insight into how they are able to do these things. What are some of the features that you saw that were the most striking or moving to you? I really liked the functions that add numbers in a code feature and that it was kind of not very narrow and not just firing on functions that are kind of obviously a plus b but if there's some function which calls some other function which is which is adding then the feature also lights up for that so it has some like deeper understanding of what a function that adds is than the very basic one uh i thought that was that was super cool and maybe surprising that it exists. But for a lot of these features, when you first see them, you're shocked. And then when you think about it more, you're like, oh yeah, that seems like a useful and very reasonable thing for the model to do. Maybe I shouldn't have been so surprised that it was there. I remember first finding the veganism feature that it was really cool. And I did not expect to see it. It was not even the biggest model. And I'm not a vegan, but disclaimer. But it was really interesting to see that the same model could identify the concept of not eating meat, of being worried about factory farming, and not wearing leather, but also in a lot of different languages. And the fact that it was able to tie all of these concepts together was a... I couldn't believe what I was seeing. The model actually... It's not just repeating arbitrary words in some random way. Like, definitely, very concretely, this model, this connection was already built into the model, and we just discovered it in there. And that just, that really blew my mind. Yeah, I think one of the things which was kind of impressive for me was getting a sense of, like, how the model's thinking about this stuff. I think when language models first started getting big, there was some notion that maybe they're just repeating things they've seen in the training data. And when it says something, it's because there was an extremely similar sentence out there. And it just kind of grabbed that and then gave you whatever somebody said in that context but seeing these features that are like multimodal multilingual about something what do you mean multimodal multimodal i mean that the image of the thing um makes the same feature fire as the text of the thing one of my favorite ones there was there was a feature about backdoors in code that also fired on images of like hacked USB thumb drives and various forms of subterfuge. Yeah, yeah. I think there were like five or six different devices with like hidden cameras in them. And it fired for like various hidden cameras cameras in in like in like everyday objects which again in hindsight seems like totally like it makes total sense but i i definitely wouldn't have guessed that right that part of the model that literally recognized like a line of code that had a problem would would be the same thing which is like there's a pen with a camera in it yeah and then even you could act you you could artificially activate that and say, hey, can you finish my function? And it would write it while introducing a subtle vulnerability that could be hacked later. That one really blew my mind. I did appreciate that Claude was kind enough to label the function backdoor. that Claude was kind enough to label the function backdoor. Any other favorites? Can we talk about Golden Gate Claude? Yeah. I mean, Golden Gate Claude was so much fun. What is Golden Gate Claude? What is Golden Gate Claude? What is Golden Gate, Claude? Exactly. Golden Gate, Claude was, so we had a feature in the paper. It was kind of the headline and we really liked it where Claude would activate on descriptions of the Golden Gate Bridge, you know, the iconic, majestic towering bridge between Miranda and San Francisco. So someone had the amazing idea of, hey, do you think we can talk to Golden Gate Claude? And this was one of my favorite parts about Anthropic. I thought this was going to be hard. But then someone from the engineering team just went into our code base, figured it out, and implemented Golden Gate Clawed as an experiment. Like, hey, do you think I can actually take the results of your dictionary learning paper and just use it? And then we tried it, and it worked. And then everybody started playing with it. that was such a cool experience to actually have the results of our paper brought to life like that yeah it was incredible we put out the paper on tuesday morning and then we were all out to dinner on tuesday night and people inside the company were excited by that figure where when we turned on the Golden Gate feature and asked Claude, what is your physical form? Claude said that it was the majestic Golden Gate Bridge itself. And that was just like a little static thing. And Oliver was like, let's do it. Let's make Golden Gate, Claude. And while we were eating and celebrating, they started working, and 36 hours later, there was a polished product that could be shipped, and the world got to talk to a model that had this feature we'd discovered only weeks before, amplified, and getting a feeling for what it means to kind of drive the model in one direction or another. So I know the model was really big. Claude's really big compared to the ones we were working on before. What did you have to do to take the dictionary learning technique for finding the features and scale it up to work on something like that? Yeah, first of all, I just want to say this is a really long question to answer because this was probably the bulk of our work between the publishing of Towards Monosemanticity and the publishing of this paper. Like this was just, there were just so many things and we should get into them, but I just want to emphasize that we are just not going to be able to talk about a fraction of the things that had to be done here because this was this was just such a really big effort i also want to like frame it slightly differently like like when when we first got the results in in towards mono semanticity and we started thinking about what we're doing next we didn't immediately go oh we're going to scale this to like sonnet this will definitely work like we didn't know if this would work at like larger scale so we didn't want to spend you know eight months just like just like scaling and then check if it actually worked so it was it was much more of a like kind of back and forth between the engineering side and the and the research side to kind of what experiments can we do to scale this up to give us more confidence that scaling it up actually works. So it was kind of not this like monolithic thing that we could just plan. It was something where we were kind of scaling it in pieces, confirming that this still looks good and then scaling it more as we got more confidence. So what does scaling actually look like? So I think one example here is very illustrative. This is something that came up pretty early on in the process as we're scaling this up, where when we were working on towards monosemanticity, all of our models fit on a single GPU. Every sparse autoencoder that we trained fit on a single GPU. Every sparse autoencoder that we trained fit on a single GPU. And what we realized very quickly is if you're going to keep scaling this up, this is no longer going to work. You're going to have to chain a bunch of GPUs together and implement something that we call opsharding, where you take the parameters of the sparse autoencoder and distribute them among a large number of GPUs. Kendra, what is a sparse autoencoder? Maybe not everyone knows what they are. So an autoencoder is something that takes in some data, some vector, and transforms it into another representation from which you can read back the original. That's the auto is that you get back the original encoder as you change the representation. A sparse autoencoder is one where this new representation you get is very sparse. There's only a few elements in there that are non-zero. And this can be really nice because if you're trying to understand what does this data mean, and there's exactly three non-zero elements in the encoding, you can just go look at each of those. Whereas if the original vector had a thousand components or something, it might not make any sense. And the basic bet we made here, which is shocking to me that this worked at all, was we took some of the latent states, the vectors inside Claude, trained a sparse autoencoder to see if we could represent those as a sum of just a few pieces each time. And the answer was yes. And then when we looked at each of those pieces, they were shockingly interpretable. From a math perspective, I think a sparse autoencoder is really simple. There's just two matrices involved. From an engineering perspective, I think it proved to be a lot less simple. And so you could write down the math from our paper in October and we could copy and paste basically the same math to our paper now. And that is not how it worked on the silicon. Oh my God. I think something that feels really interesting here to me is that when we first started this project last year, we were experimenting with a bunch of different techniques for this. And we were experimenting with a bunch of more complicated techniques. There's a lot of fancy math out there that addresses this problem. It's possible that that math might still work better, but we really just saw a lot of success with sparse autoencoders because you could just really scale them up. We tried running all of these other techniques, but you could only run them on a small amount of data. And one of the things we realized is that in order to see really good results, you have to run this on a small amount of data. And one of the things we realized is that in order to see really good results, you have to run this on a lot of data, much more data than anybody in the classic mathematical literature ever does. And so sparse autoencoders are in some way, just mind bogglingly simple in a pretty beautiful way. And it's just this philosophy that if you take a simple algorithm and it's scalable and you can just really turn up the numbers, you can get really beautiful stuff out of it. But turning the numbers up, like, that's the hard part. I do almost no math. You do math. You do math. You do math. But, like, there's so much goes into, hey, let's make this thing 10, 100, 1,000x, and so on, bigger. And that just breaks all our abstractions, breaks all our code in so many different ways. It just becomes too big in all these dimensions that you were totally unprepared for. It causes weird bugs. I think one of you told me one of those dimensions was around shuffling the data. Someone talked through what the shuffle problem was and what you had to do. Yeah, so there's this perennially hard problem in machine learning where you have your input data, and you want to make sure that if you've got a whole bunch of A's and a whole bunch of B's and a whole bunch of C's and a whole bunch of D's, and you feed them through your model, if you just feed them in an order, it's going to learn, hey, I should only learn A's and a whole bunch of B's and a whole bunch of C's and a whole bunch of D's, and you feed them through your model, if you just feed them in an order, it's going to learn, hey, I should only learn A's. Hey, I should only learn B's. But if it's all mixed up, then it has to learn the whole distribution at every step. And the shuffle process is very easy when your data is small. You load into it into memory, you know, do like random shuffle, and then you write it back out. That's not that hard. Now what do you do when you have petabytes of data? It's like, oh. Oh. So. I guess it might be like if you have to shuffle a deck of cards, you can just do it with your hands. Yeah. And if somebody gave you seven consecutive miles of cards stacked end to end, it's not clear how you would shuffle that deck at all. That's a really good analogy. So I have 100 warehouses full of cards. And so we did, I ended up like, we talked about a lot, figured out, hey, is there a way to do this in parallel? And it's like, well, if you're going to shuffle 100 warehouses of cards, first you're going to shuffle one warehouse of cards and, you know, break it up into 100 subproblems. And then how do I shuffle one warehouse of cards? I'm going to break it up and like do it by section and then you're going to like mix the different sections in some in some like provably uh it's improved a way that makes sure that every section gets mixed with every other section and like i don't know that's that sounds really simple uh in a sense it kind of was once Once we had understood the problem that, oh, we just need to make this multi-stage parallel shuffle where we break it down. Like, oh, probably anyone could, not anyone, but a lot of people can implement that as part of a coding interview. It's not that hard of an algorithmic problem. But to get to the point where we even realized like that, oh, just framing the problem was 90% of the work. Once we did that and we could conceptualize it as, oh, we need to do a multi-stage parallel shuffle. And then once you get your like recursion properly defined, then it's a pretty easy task to scale it to something. Oh, you want to do 100 terabytes? You want to do 10 petabytes? Cool. Just add another layer. I think some more context there is interesting because we're kind of focusing on when like the part after we decided we're going to speed up shuffle. But I think the part before that kind of shows a lot of what this job is about, where what was happening is we were scaling things up and then, and then running, running, running experiments as we scale. And the shuffle step before we, we made it better was taking longer and, and longer. So we knew that like this step is not scaling well and it's making it slower to get research results. But also we know there's something better there, but there wasn't something that i could do in like a couple hours i was like oh this is maybe a few days a few weeks so we're kind of putting it off because we can still get get get experimental results until eventually this thing's taking you know 24 hours something like that and we're like okay we finally need to like fix that and then i think the fix that we did you kind of could do something that is maybe like more like perfect or totally nail shuffle but we're not focused on like what is the like optimal the platonic ideal of parallel shuffle what we care is our jobs taking 24 hours how can it not take take 24 hours so i think a lot of this job is like we want to get to get experimental results that That's, that's our focus. And then given that goal, how do you get those? So it's generally not how do you make any step perfect? It's how do you make any step as good as we need to, to get the results that we need right now. And then as those results come back, you gain more or less confidence in the approach, which you have. And as you have more confidence that this code base and this approach is something we're going to be using six months later, a year later, you're willing to invest more time into making things better. And I think the heart of the job is how do you make that trade-off of how much time to invest into any one piece of this whole pipeline? I want to draw that out more. It sounds like this kind of engineering where there's an experimental result at the end feels like a different process than maybe producing a product. Could you say more about what it's like to do engineering for research? Yeah, I think it's interesting to compare it to my first job at Facebook. And I was building a service, a backend service, which like powered the like Facebook website. And I would say the big difference is kind of the requirements of the code at that job at Facebook. It never really changed. We always knew that this was going to run at scale. It like couldn't crash. We like cared about the like cost of the servers we were running on but i was there a few years and it was kind of always the same goals where in in a in a kind of research engineering job now like you don't know which bits of the code you're going to be throwing out in two weeks and then which bits of the of the code you're going to be using like years later and And a lot of the original dictionary learning code was methods which we've deleted, they're gone, we're never touching it. And spending time making that code perfect would be totally wasted because it's deleted. But also over this year-long process, we've honed in on what we're doing is working, and this core thing is good, we need to make this better, we're gonna be using this longer, and if the code quality is crap, like it's going to be slowing us down for like years. So we need to like really go and like polish this more. So I think you kind of constantly have to be, have to keep those trade-offs in the back of your head. And they're kind of changing under you, like as you work. There's another dimension to this that i'd like to talk more about which is there's a whole bunch of ideas that we want to try and when you're looking at implementing these ideas you're thinking about how to design the infrastructure and like with any software design certain infrastructure designs are going to make certain things easy and certain things hard. So there's this really tricky thing. And I think in some ways it's an impossible problem and you can only try to do this very poorly. But trying to anticipate which directions you want to go in in the future, trying to anticipate what general categories of ideas you might want to try and trying to anticipate how do we make these general categories easier? And what are we closing off? What are we making harder to do? And trying to make those trade-offs is a really, really difficult challenge a really, really difficult challenge that we try our best at, but is something that is just impossible to be perfect at. Did you make any big mistakes? None. I think a lot of the errors here feel more like we maybe should have cleaned something up a month sooner. So it's kind of like, oh, maybe we should have done this sooner. But because your trade-offs are changing under you, if right now you're like, I'm not really sure. Should we do this? Should we not do this? In a month, it's frequently blindingly obvious. Oh, yeah, we should definitely do that. So you do lose that month of like, it would have been better if you got there sooner. But I generally think you kind of get shoved in the right direction eventually. But I also think that there's an important point that I am not a professional scientist where I'm just looking to publish papers. I'm also not a professional engineer where I'm just looking to build the most perfect, beautiful, harmonious, well-abstracted system. We have a specific target, which is being able to figure out and do enough science to figure out interpretability so that we know how these machines work to achieve a specific safety goal. We have to do enough science to get there. we have to do enough science to get there. We have to build enough engineering stuff to support the science. But ultimately it's quite possible at the end of the day we will throw away every single thing we've built except for that one end result. And so I don't want to spend any additional time researching stuff that's not going to help. I don't want to spend any additional time building stuff that's not going to help. I don't want to spend any additional time building stuff that's not going to help. And getting that trade-off is super hard. I'm not always so good at it, but you guys are, so thank you. It's also always easier in hindsight. So what was the most confounding bug in this process? Yeah, so one of the really dangerous parts of machine learning, and especially when you're doing machine learning on this weird undiscovered topic, is that it's really hard to know if you've written your code right. I remember my machine learning professor told me this in college, and I'm like, that doesn't seem so hard. This can't possibly be such a big problem. And then you realize that this is just the thing that is going to eat up more of your time than any other problem. So we had cases where, you know, we just lost weeks of effort because we had something and we had some evaluation metrics and the evaluation metrics are bugged in a way that makes them too good to be believed and really exciting. And we spend a lot of time chasing that down before we realize that there's just some really subtle bug in our metrics. And it's very hard to test for that. And you basically end up needing to spend a lot of engineering time trying to make sure that these things work and that you can trust your evaluations here. Bugs and metrics are scary because if you're trying to make the number go down and the number is going down, you're like, this is great. Everything's great. And then it turns out you were just like chasing a complete illusion for weeks. So how do you deal with that? Like what is it test? Like what's testing like for kind of research code? I think kind of correctness bugs like that are very difficult to test because it's kind of not clear what the correct answer is. So your like classic unit test kind of doesn't really cover this well. I think the thing that helped here was to like kind of log as many cover this well i think the thing that helped here was to uh like kind of log as many metrics as you possibly can can you while this uh process is is like training can you think of every possible number you can like log and then graph those and then for your runs you can stare at these graphs and be like what should this look like does this make sense and i i think there's no easy answer here. It's just time. I think the other piece of this is just really going through the light code carefully and being like, I know what the math for the ML says this should be doing, but what is it actually doing? And we've had a number of times where that didn't match and i think tracking those down is is a very important thing and i would also say that that like there's kind of latent bugs in master that you're worried about i think there's like another way that this comes up of every time you have a new idea for how the ml will change you code that up and you run it and then sometimes the results are like oh this is worse than than your baseline and you're not really sure, was the idea bad? Or when I coded it up, was it bugged? And you don't know. And I think that's kind of a difficult trade-off of what to do next. Because you can go and you can stare at the code. You can go and stare at graphs and try and understand, like, does this thing, like, was it bugged in some ways? But at some point, you have to decide this idea doesn't work, and I'm giving up and I'm moving on to like something else. One of the striking things for me, who is more of a science background working on a team of really skilled engineers has been realizing the power of like pulling some of the engineering work forward to increase your iteration time. I think that the more that your ideas matter, then you want to spend a lot of time thinking. But if you have no idea of what's going to work or not, then making it so you can test a lot of ideas quickly really pays off. This relentless looking at how would I run this experiment? Okay, could I run that experiment in a day instead of in a week? Could I run it in an hour instead of in a day? Could I kick it off in a minute? And like, your ideas might be better, but like no one has ideas that are like 200 times better such that you would rather, you know, take that long to run an experiment. Speak for yourself. I think this comes back to the short-term versus long-term trade-off, which is I think really just like one of the fundamental tensions about doing this sort of research engineering, where you have to decide how much effort to invest into making things better long-term versus how much you want't just want to lean all the way towards the long-term thing. It depends on a lot of factors. It depends on how confident you are that something in this general area will work. It depends on how reusable do you think this infrastructure is going to be in the future and how easy is this going to be to code up and get working really well? But it's also informed by the science of, do we think that dictionary learning is a process that we should be going so all in on? Just having the faith guided by our scientific intuition that if we keep pushing here we're pushing blindly like we don't we don't actually know if we're going to be going towards anywhere until until we like you know drill down far enough and oh there's lava like it's just it's just a lot of dirt and then all of a sudden you you like pull back up and you realize, oh my gosh, we've actually gone so far and we've actually found something. But for a while, you're just fumbling in the dark and like nothing works, nothing looks good, nothing makes sense. But you just have to believe that like if we keep researching in this direction, like maybe there's signs of life and eventually we're going to see something useful. Personal question. Why do you like doing this work? So for me, in my previous roles, I, I, um, at, at the company used to work on, on the inference team. So the inference team is, there is much less of the research aspect of it. We kind of know exactly the operations that, that, that need to be done, the like math, and we just need to make them go really, really fast. And it leads you to these really interesting systems, low-level GPU optimization problems. But to me, I can plan out what the next six months will look like. You can figure out, we're going to design it exactly like this, and we need to do A, B, C, and D, and you have this exact plan, and then you go and do it. And I kind of found the work of doing that exact plan a little tedious or boring. There are plenty of people at the company who love that. I just don't personally. Where on this team, like we can't plan six months out, right? And we don't know what to actually build and you're following where the research results lead you and kind of everything's constantly changing. So I really love that piece of this job. Adley, what do you like about this work? Yeah, I think there's two questions there, which is why I love the research part and why I love the engineering part. Because really I love both of them and I love the research part just because honestly, there's no better way to describe it than this is just a really beautiful problem. And it's really fascinating to try to understand this. And it feels amazing when you can shine a tiny little bit of light in the black box of models. One of the things that I like about this is the engineering is a lot of fun, sure. But it's also the problem itself. is a lot of fun, sure. But it's also the problem itself. It's like, and this goes back to why do I like, how does this compare to my previous job doing quant finance versus this? Studying markets was actually very fascinating. The, they're always changing. There's a lot of interesting modeling to be done. But here we're essentially doing like computational neuroscience on an artificial mind. And no one's ever done that before in history because these things have never existed. No one's ever done, we're like among the first people right now to ever have access to artificial minds this big with the amount of computational infrastructure that it takes to analyze them. We're literally trying to figure out how these things think. We're studying cognition in a very quantitative way. We are studying cognition in a very quantitative way. And it's so mind-blowing to me that almost the same skill set that I was previously using to predict the next price now becomes decoding thought. And I loved finance for many, many years, but this just feels so much more meaningful to me. And I think the really exciting part about trying to tackle these problems with engineering is that it makes them solvable. If you ask yourself, how do you do neuroscience in an artificial mind? That's not the type of problem that you're really like going to solve, or maybe you could solve it, but you don't have high confidence in anything. There is something about building the infrastructure to do this and building the infrastructure to do a lot of experiments that makes it feel possible to say we are actually going to do this. Engineering is just a way of making this successful and making this possible. So for the people listening to us who think this sounds kind of cool, do you have any advice about getting into interpretability research or AI research from an engineering side? or AI research from an engineering side? The first thing I'd say is I think a lot of people think the work of the interpretability team needs much more of the research skill set than it actually does. Like the research skill set is important, but the engineering skill set like really, really matters too. So we are not just looking at people who have only done like math and like ML. We need people who are very strong at like coding too. And like currently we're bottlenecked by hiring kind of very, very strong engineers. So we need more people like that kind of asking us for jobs would be the first thing. What you can do if you're interested in this and you're a great engineer is ask us for a job. Because we are hiring people like you that's silly but i think i think it's very easy to underestimate the contributions that you're able to make especially if you think of yourself more as an engineer coming into this and i really the advice is just to apply the other thing i would note on the on the engineering skill sets kind of what we're looking for, what people might learn is that I think we need a lot of breadth of that. We are not like, we need to make GPUs go fast for the work that we do, but we're not pushing things to the bleeding edge, right? So we need people who can kind of do a bunch of different skills and come in and notice like, oh, i can do a quick change which gives us a big win we aren't people who we aren't really looking for the skill set of i can spend two months to to use the graphics card 10 more more efficiently here we're not going to spend two months on that we're going to move on to to to like paralyzing other jobs figuring out why some python code's really slow. So you kind of need this breadth to be able to figure out like which point in this complicated pipeline is the bottleneck right now. And let's go make that like a bit better in a few days is kind of a big skill that we'd really love to see more of. Yeah, it seems like the team has a lot of full stack engineering where the stack, you you know goes down to like you know you could do fused gpu kernels and all the way up to building front-end interfaces for like looking at how images make clod talk differently and that you never know where in that entire chain might be the thing you need to do you know there was a a member of a front-end bug the other day that actually turned out to be like an op sharding bug. So you thought this might be, OK, the server like is rejecting your request. And then it just turned out that no, actually, we had shuffled around these tensors in a transpose way. And that needed to be what's fixed. And so it actually means there's a ton of ways to contribute. And also, this kind of breadth and fluency can really pay off. So Josh, you're a scientist much more than us. I'd say we all shade pretty on the engineering side. What's your biggest frustration with people like me? I mean, people like you are so charming. No, I don't think there's a frustration. I think it makes for very good collaborations because oftentimes, you know, we're so early days that there's often a lot of room for improvement. And sometimes it turns out that, like, we should just be, like, plotting the correct metric or changing the initialization scheme for a matrix that could also speed up the training process by 5x. And it could be that you need to speed up the training process by 5x by parallelization. And so I think that there's just these opportunities. This is what I mean by the full stack actually continues all the way into the mathematics and all of these pieces of it. So I think that it's really helpful to have a very interdisciplinary approach to this stuff because sometimes you can sharpen the experience. Did you really need to run your ablations over the entire data set or are you trying to estimate a scalar at which point statistics tells you you need a thousand samples and then you're pretty much good and you can save a lot of time. I think also I've actually really enjoyed the even though you're on separate sub-teams so I don't get to work with you nearly enough, I really enjoyed the few times that we did get to collaborate because I think we have such complementary skill sets where I've said it before, I'm not that great at the math. I still don't know ML. Sorry, guys, I'll leave. But I really like the culture of collaboration that lets, where you and I will just sit together and pair program on a problem, and we have very complementary interests and skills, where when we work together, we are just very, very powerful. I think that that's a lesson. The reason I bring this up is for people considering, hey, do you think I could come into Interp and be useful? It's like, yes. If you are good at some of these things but not all, there's so much value when you pair with other people who have different skill sets, and we really benefit from that collaboration. I think that one of the really fun things about this is you start to learn from those collaborations the shape of a problem that could be solved, which is well in advance of having any idea of how to solve it. But I'm like, I bet Jonathan could help with this part of the thing feels stuck. And I don't know enough to yet be able to do that. But then we can sit together and like, oh, yeah, that's the kind of thing that I could like bang out right now or on the visualization side. I feel like I'm clicking around between 17 windows right now. And I'm actually, we've gotten the parallelization down. It's like super fast to run these jobs. And now it's taking me like 30 minutes to like look at the results. And then we bring in Pierce who's like,. It's, like, super fast to run these jobs. And now it's taking me, like, 30 minutes to, like, look at the results. And then we bring in Pierce who's like, oh, yeah, yeah, yeah. Like, we can totally make that part better. And then when you put that all together, you get this, like, really incredible, like, scientific system where you actually all of the parts sort of work. And, like, you know, what comes out the other side of some of the more beautiful papers I think I've ever been involved in. Actually got me in part to join the team. You just see these jewel-like figures that come from people obsessed with working in Figma to just dial all the details in, which is not something that I thought, maybe working in Figma isn't part of the standard engineering toolkit, but it turns out that that also is a force multiplier. One explicit thing that I want to mention that was kind of baked into those answers is like how the team is structured here. I think a lot of people think that there's a separate research team and a separate engineering team. And kind of throughout the conversation here, we've been talking about the interplay between those. So like separating those like just doesn't work. Like we don't do that. There isn't like these separate researchers who are telling the like engineers like build this like these these problems are fundamentally like entwined together and you have to work on them together so the way the like whole company works not just the the the interpretability team is kind of the research and the engineering always go goes goes together and that's just absolutely crucial for this job. Adley, if a friend came to you and was like, what was the most fun or weird or quirky thing you got to do? Yeah, I think that there is like a surprising collection of problems that comes in after you have trained 34 million features, and now you want to, as silly as it sounds, you want to see what these features do. And this is a tricky problem at scale because these features only activate on very specific sequences of text. That's what the sparse in sparse autoencoder means. And so if you want to really visualize all of them, you have to run a lot of features through a lot of text and then do things like, we also want to visualize what does this feature do on the nearby text and what does the distribution of this feature look like and solve a bunch of problems like that. That, I believe at this point, it's something like a 10 or 12 step, very distributed pipeline, just because this is one of the things that breaks really quickly once you scale up the problem. And there's just so many steps that something is always breaking and something different is always becoming the bottleneck. And so it's this process of just looking at this, finding the bottleneck, and trying to distribute that further. Yeah, sometimes things like even matrix multiplication doesn't work anymore, where you realize that you want to understand interactions. This is on my team between 34 million features here and 34 million features there. And genuinely, you could just multiply the matrices, but then you couldn't store the result anywhere or put the result anywhere. And so you're starting to do some fancy looping, indexing, and compression to compute a product. Just big numbers times big numbers are very big numbers. One of the things which we hit is the default PyTorch Matmul implementation for certain shape matrix multiplies is just much slower. So we're like profiling jobs and we look at it and most of our time is in matrix multiplies. So we think this is great. We're running really fast, but we calculate efficiency numbers. Efficiency is not great. So we then go to someone else at the company who's kind of more of an expert in this narrow area. And he tells us that, oh, yeah, try this other matrix multiply implementation. It'll be much faster. And we're generally doing that of like, when we get to the really thorny problems like that, we just ask someone else at the company because we're not experts at that, but it does matter. And we do need to make these things faster. So we were using the unbeknownst to us, like a slow version of multiply these matrices. Well, it is the default. It is a version that is normally fast but for the specific shapes of the of the tensors we were running matrix multiply on it was not fast and there's kind of different different implementations for that so under the hood for for for matrix multiply what generally happens is based on the shapes of the of the matrices there there's like different ways the like GPU kernels actually work. So some implementations kind of pick the like wrong, the wrong approach and are just randomly slower. So we kind of run into problems like that. That's randomly slower, how do we fix this? And yeah, you kind of don't have the time to go be an, a like expert in this area, you just need to kind of quickly find something that'll that'll speed it up. I think this is such a fun example, because you would think that matrix multiplication is just heavily optimized. But in a very physical sense, our problem was just a weird shape, it was a weirdly shaped matrix. And so we just run into all of these problems because interpretability research is just doing really weird things like this. And so you run into all of these weird things that happen. Yeah, thinking like distributed is sort of funny for this too. We were doing some like attribution calculations where you're just multiplying a vector by a bunch of other vectors. And like, you have to think carefully about where they are living and which direction you send information. Because if you send this over here, you get to send some scalars back. But if you send this over here, it's like a matrix is going back. And all of a sudden, like, you've spent just enormous amounts of time shuttling data back and forth, where like, again, I was trained as a mathematician. You write the equation and all of the letters are on the same line, right? There is no like communication bottleneck between the A and the V that it's next to. Yeah, I was looking at a open source implementation of sparse auto encoder training that only runs on a single graphics card. And I was just shocked by like, this is so simple. Like this is so easy. Why do we have so much code? And then you go through all the various points where we had to like scale this up a thousand times bigger. And it's just like, that is where all this code comes from. And there's kind of so many little battles there of like this random thing doesn't scale, is like 2x slower, that like we've put in, which we didn't have to do back when we were doing very, very small jobs, which just fit on a single graphics card. I think that also speaks to some of the complementarity of the work that can kind of happen in academia or more open source environments and what you can do at a company with the scaled models where like you can try out a lot of ideas at small scale and it like isn't that hard from an engineering perspective. And then to get that to actually work on models that are many orders of magnitude larger, you're just like entering new realms of physical difficulty to get anything off the ground. Sometimes it feels like there's the gift though, which is that in the bitter lesson that Richard Sutton talks about, which is sometimes the scalable thing is better because you can always put more scale in if you do the engineering, and you hit the upper limit of being clever. And so even though some of these methods are quite conceptually simple, it's turned out that on the rich data distributions that actually make up these networks, they show really amazing things. It's really fun. I think that the bitter lesson applies not just to training a model, but also to interpretability, where I think people often think of interpretability as trying to get this like very principled understanding. And there is some of that, but there's a lot of that that just really has the same properties as the bitter lesson where you just take something simple and do it at scale and you pick the scalable thing. And it is really beautiful to me that that works, not just for making good models, but also for understanding models. The other point I'd make with like scaling and the bitter lesson is that the company has given us access to the like compute which we need to actually scale this. And it's been really fun that like the thing blocking us from scaling further is like whether the ML actually works at that scale or the infrastructure works at that scale. It like hasn't been, can we actually get the graphics cards to like run on, which would be kind of a much more frustrating reason to not be able to scale. Where do you see interpretability in a year? I think that where I see interpretability in a year is if everything goes well. This is a super bullish case, but we will figure it. So we did one slice through the middle layer of Sonnet. And I would want to analyze the entirety of every layer, every piece of all of our production models and not just analyze them. Right now, we only found features. We don't know how they fit together. We don't know how they work in a variety of different contexts. And I really want us to do the circuits work to figure out, like, what do these features mean on their own? What do they mean together working in concert? Yeah, one thing that I think I'm just surprisingly excited about is just actually continuing to scale this up. There is a lot about what we need to do that is going to need to be different. There is definitely going to be lots of opportunities to change the way we do things. But at the same time, these things seem to work better as you keep scaling them up. And so I'm really excited about just trying to eke out the last few orders of magnitude and see what happens. And if you would like to help us with that, we are hiring. We would love to work with you. Can I just say, I love the phrase, the last few orders of magnitude. There's so much in that one, those few words. So why are we doing interpretability? I think one of the things I want to emphasize here is I have a lot of uncertainty about the types of safety challenges that are going to arise with large language models. And I'm very uncertain about the direction things will go in the future. But interpretability feels very robust to me. I'm very excited to work on this because I think it can help with a really wide range of problems and a really wide range of scenarios. It's just understanding models seems good. And if you can do that better, that's probably helpful. Yeah. Understanding models seems good. And if you can do that, it seems like it'll help you with any of the behaviors you might. Maybe that's something I really like about interpretability or rather the approaches we're taking, which are sort of completionist, right? It's trying to map the full diversity of the model because if you can do that, you can zoom in to the parts that you need later. Whereas if you're just focused on like one zoom in to the parts that you need later. Whereas if you're just focused on like one particular behavior of interest, it might not generalize or it might be missing the sort of the important part of the story. And so you can do interpretability focused on like one behavior at a time. But if you want the whole picture, you need to scale. And that's why you need people like the ones at the table who can make the scaling happen. Here, here. Here, here. All right, hands in. I'll do one, two, three, clod. One, two, three, clod! | Scaling interpretability | 3,198 | Anthropic | 20240613 | Science and engineering are inseparable. Our researchers reflect on the close relationship between scientific and engineering progress, and discuss the technical challenges they encountered in scaling our interpretability research to much larger AI models.
Read more: https://anthropic.com/research/engineering-challenges-interpretability | 2024-08-12T13:20:58.821103 |
https://www.youtube.com/watch?v=LDh5MdR-CPQ | Hey everyone, welcome back to another episode of the Lama Index webinar series. Today is shaping up to be one of our most popular webinars slash workshops ever with property graph indexes in Lama Index in partnership with Neo4j. And so this is going to be a special workshop on teaching you how to build advanced knowledge graph rag and you'll be able to learn how to use our brand new property graph abstractions both to construct an existing graph as well as to query a graph. And so we're excited to host Tomas from Neo4j, as well as Logan from our side. And without further ado, feel free to kick it off. Okay, so like I said, I'm happy to be here and talk about graphs. So as mentioned, today, we're going to talk about the new property graph index integration in Lama Index. And if you might be wondering, property graph, what actually is that, right? Because most of the time, I don't know if I can remove this. I don't know if I can remove this. So most of the time when dealing with graphs, especially in the REC, like frameworks, what we see is usually triples, just like subject, relationship type, and then object, and the other way around. But actually, property graphs have now got an actual standard. There's the new GQL standard, which is part of the ISO committee. And that's very exciting. But basically, property graph, as you might imagine, means that nodes have properties and relationships have properties as well. So for example, here we have one node and it has properties name, Amy Peters, date of birth, employee ID, right? But it also has this one special property that we call label in property graph models. And as you can see, for example, the green node has an employee node label, right? And node labels are used to put nodes into sets of categories. For example, in this example, we have three node labels. We have employee, company, and city. As mentioned before, relationships can also have properties. It's a slightly different data representation than what you might be used to, like from the previous, let's say, implementation in Lama Index, the knowledge graph implementation, where we only dealt with triples. So that's about property graphs. And now let's talk a little bit about property graph index integration. So the flow that most of the time you will follow when you're using property graph index integration, you start with a bunch of documents. And Lama index has great support for various types of documents. So I will not go into that. But basically documents, you can think of them as just wrappers around text. And so basically, we take a bunch of documents and we pass the text from those documents to graph constructors. So in the integration that Logan did, you can use one or multiple graph constructors to create a knowledge graph. And we'll talk a little bit more about them create a knowledge graph and we'll talk a little bit more about them later. I'll show you what we showed available out of the box. And so the property, the graph constructors extract like structured information from documents and store them as a graph in the knowledge graph. So and store them as a graph in the knowledge graph. So there are a couple of integrations that Lama Index already has. As I'm from Neo4j, I will focus on the Neo4j integration. But there are others as well and probably more coming up. So once you've built your knowledge graph, on the other side, we have the so-called graph retrievers. The graph retrievers, their job is to basically based on the user question, they have some logic they can use to retrieve data from the knowledge graph. And again, there are a couple of out-of-the-box retrievers that you can use and we'll also show how easy it is to define a custom graph retriever in the workshop later. So that's basically the kind of the flow that you can think of. And what Lama Index does is it provides graph constructors and graph retrievers. I mean, obviously also the other parts, just that the graph constructors and graph retrievers are part of the new property graph index integration. And the idea is that they are very modular and customizable. So that if you're a beginner, you can just use something out of the box. But if you're like advanced user and you need something it's very easy to customize the pipeline for your needs. Let's like at high level property graph construction, as I said, we take a bunch of documents and we construct a knowledge graph. Here I have an example, documents where like for more open AI employees founded new companies. Right. So for example, here you would have four different documents mesh with some information. But the nice thing about knowledge graphs is that you kind of condense that information and unify it, right? So that basically the information that was previously spread across multiple documents is now easily accessible and nicely represented in a knowledge graph. And then I prepared a couple of slides what is available out of the box in Lama index. So out of the box, we have three graph constructors. The first one is the so-called implicit path extractor. And what it does, it just, we have a new word for it. It's called a lexical graph. But what it actually does, you just take the green note is the original document. And what it does, it just chunks the documents. So the what's this like gray-ish notes, text chunks, right? And text chunks are connected to the source document. And then also we have an ordered list of text chunks so that we know how do they follow each other. So this is the implicit path extractor, graph constructor, and it doesn't require an LLM because it's just basically chunking up and then creating a linked list of text chunks. And then the next one is the simple LLM Patek selector. And as the name would suggest, you need an LLM for that. And from what I've been digging around in the implementation, how the simple LLM Patek selector works is basically to do prompt engineering. So, you in the prompt, you define what like how the output should look like and then you provide a parsing function that extracts that output from an LLM and creates a knowledge graph. So it's, I would call it like a prompt based solution. And by default, all nodes have the same label, or like the default implementation, all nodes have the same label. And again, the purple one is the text chunk. same label. And again, the purple one is the text chunk. So we always store the reference text in the graph as well. And then the entities that were mentioned in the text chunk, we have the mentions relationships to those entities. And then obviously, entities can have relationships between each other. Right? So for example, Amelia Earhart was American aviation pioneer. Right? And this is kind of the simpler version of graphics selection. Now obviously you can customize it and make it more advanced, but by default all nodes have the same label. Then the more advanced is the schema RLM path extractor. So here you have the ability to define which nodes and node labels and relationship types you can extract. This one we'll also use in the workshop so you'll get to see in So you'll get to see in practice how do we define the schema. But basically, as you can see, by different colors of nodes, means that they have different node labels. And again, the purple ones are the text chunks, the reference text chunks. And then the text chunks mention the entities that appeared in those text chunks. And then obviously we have a bunch of relationships between those entities as well. And this one works best with LLMs that provide function calling, like native function calling, like the commercial LLMs, like OpenAI, Gemini, Mistral, probably some others. Grog is a nice one as well. Grok is actually a really nice one because it's really fast and that's really nice for the knowledge construction. But yeah, so it works best with LNS, with native function calling, but Logan told me that it will also work with models that don't provide native function calling, just not as well. Or maybe the schema should be simpler in those using those LLMs. But as I said, I haven't tested it out. But maybe you can test it out and let us know how it works. So this is basically the out-of-the-box graph constructors that you can use. And then what's not in the Lama index yet, but since Lama index provides low level connections to graph stores, Neo4j in this example, if we can also come up with custom entity disambiguation, which we will do in this workshop as well. And entity disambiguation just means that if you have multiple nodes in the knowledge graph that reference the same real world entity, you kind of want to merge them together into a single node so that you have a better structural integrity. So here in this example, this node ends with a limited, this one has abbreviation limited and this one doesn't have limited. It all references the same node. So that means that's why we want to merge it into a single node so that we have better selection integrity. And in the workshop, we'll use a combination of text embeddings and word distance heuristics to find potential candidates and then merge them and then merge them together. Here is basically, if you're anxious, you can follow this link and this is basically the notebook that we'll be using today. Then on the other side, we have property graph retrievers. As you see, I just took the previous image and just slice it up because here we have the remaining of the arrow. But the graph retrievers, as I said, based on the user input, they have some logic how to retrieve information from the knowledge graph and then pass that information to an LLM so that the LLM can generate the final answer. Basically, a typical reg pipeline. And we have, I think, four out-of-the-box retrievers that you can use. Here, I didn't have time to draw nice diagrams. So I just summarize them quickly. So the first one is the LLM synonyms retriever. It takes the user query, generates synonyms using an LLM, and then it finds the relevant nodes using exact keyword match. So that's really important because LLM is not aware of any values in the database when it's generating the synonyms. So it's not a given that the LLM will know how to construct the keywords so that they will match any nodes in the graph. So because it uses at least the Neo4j integration uses exact keyword match, we could basically optimize this to allow some misspelling or stuff like that. But at the moment, it's using exact keyword match. And then once it finds relevant nodes, it returns basically the direct neighborhood or basically you have an option to decide how many or like what's the distance, what's the neighborhood size of the nodes that you want to return. So by default, we return just direct neighbors of that node. And then the second one is the vector context editor. So in the previous one, we used exact keyword search to find relevant nodes. But here in this example, we're using vector search. So that means it's more robust and less reliable on exact keyword match, because with keyword, with vector search, you will always get some results from the database because you take top end. And then hopefully some relevant nodes are identified using vector search. And then we just do the same thing as we did in the previous one. We just return the direct neighborhood of relevant nodes that were found using the vector search. Then another one is the text to cipher. So as the name implies, we take the text and use an LLM to generate cipher statements. So this is very flexible approach because the LLM can construct any sorts of cipher statements. So for example, with vector context, when you're just finding for searching for relevant nodes using vector search and then returning the neighborhoods, it's very hard to answer questions like how many nodes are in the graph because it's like an aggregation query and vector context retrieval is not suitable for aggregation queries. At least not like on the global scale. But like, for example, with text cipher, you could ask it questions like how many nodes are in the graph or like how many people are in the graph. And the LLM will generate appropriate cipher statements and return that information for you. So text cipher is much more flexible than the previous tools. But there's obviously always a trade-off that it's less reliable because we're using LLM to generate cipher statements. And that's at the moment how I see it. It's mostly correct, but not like always. So you're kind of trading off flexibility for a bit of accuracy. But then on the other side, what you have is also some different, like Texas Cypher allows you to also do aggregations and stuff like that, which the previous retrievers didn't allow you. And then the last one is the so called cipher template retriever. And here instead of generating cipher statements with an LLM, you basically define the cipher statements you want to be executed and you just parameterize like a parameterized cipher template. So basically you have a cipher statement with like one or more parameters. And then at credit time, basically the LLM, you provide instruction to an LLM how to populate those parameters. And then at credit time, LLM extracts relevant parameters it needs to use with the cipher template, populates the template, and then executes the predefined cipher template. So that's where the template comes from because it's predefined. Then here I have questions, for them because it's pretty defined. And then here I have questions, but let's do a demo. Tomas, I'm just going to read through some questions in the chat so far just to make sure we cover some of them before the workshop. The first is, I think one question is actually about using LLMs. And if you have a set of recommended LLMs that you think are better for, say, knowledge graph construction, as well as the cost of running LLMs across a large corpus of documents to construct a knowledge graph. I'm curious to get your initial takes there, as well as recommendations for some of these users. Some of them are thinking about using like Rock, for instance. like recommendations for some of these users. Some of them are thinking about using like rock, for instance. Yeah. So what you will see is that graph construction, like LNMs and graph construction is very model dependent. So different models will generate different graphs. And it's very like even different versions of GPT-4 will behave differently. So I did some testing, like I'm not like an expert in all of the airlines, but for example, what I will tell you when you're using like a predefined schema, like the GPT 3.5 will try to fit all that information into the schema so that it kind of over fits information into the schema where it's not really where it wouldn't really fit in reality. But then GPT for turbo and for low0 are much better at ignoring the information that is not part of the schema. For example, if you want to use, I would really recommend using LLMs that are fast. So for example, GPT-4, just throw it out of the window because it takes forever and it's costly. before just throw it out of the window because it takes forever and it's costly that's in that like grok is really nice because it's really fast but then the problem with grok is they don't want to take our money just yet so hopefully when they will be taking our credit cards that's something I would definitely look into. But in general, it's like the better the model, the better will be the results. The more it will follow the prompt instructions. Great. And just following up with just one more quick question is this might not actually quite exist in some of our abstractions right now, but one of the questions around like dealing with missing information from the graph, which sort of implies this, like maybe you do some LLM construction paths. It's not completely exactly where you want it to be. And so you do some human in the loop paths to try to like modify and shape graph to better reflect what you want out of that data. Have you seen that kind of human in the loop approach towards graph construction? So it's not really human in the loop. It's more like they have some heuristics, because I didn't really mention, but if you want to take a look, the graph lag by Microsoft paper is really nice. And it deals with some of these questions. So the first one is also like what types of, what's the size of text chunks you should use right and the thing is the it's kind of funny like the number of nodes and relationships is kind of irrelevant to the chunk sizes so that just means if you're using smaller text chunks more information will be extracted and if you're using bigger text chunks like the the overall number of extractions will be the extract information will be the same. But since you're using larger text chunks, right on the like in summary, less information will be extracted from larger text chunks. So that's one thing they mentioned in the paper. And then the second thing they mentioned, they have some sort of heuristics where they can decide, okay, not enough information was extracted from the text. And then they do a second run. So basically, instead of having a human in the loop, it's kind of automated and saying, okay, you didn't do a good enough job. Now, let's do a second pass on the graphics selection. Oh, OK. Yeah, super interesting. Any other questions? Feel free to carry on. There's a ton of questions, but I think we'll.. Yeah, I mean, because the extraction part will take a couple of minutes, and we can answer questions. So here I define my graph. So OK, just seconds. OK, that's fine. And one thing I also noticed is that people are sometimes confused by documents because like all Lama index mostly deals with the documents, right? But document is just a wrapper around the text. So it's very easy to go from text to document. We just instantiate the document with a text property and that's about it. Right. So here in this example, we're gonna create a bunch of documents based on the news. So we have a bunch of news. And we're going to use GPT-4.0. And for example, one thing that's also interesting, there's like a lot of things that comes popping up. And one thing I noticed today or yesterday, somebody did some benchmarks and they said basically that if you're using slightly higher temperature than zero, even for the deterministic tasks, you get better results. It was specifically for GPT-4. Again, that's very interesting and we're all learning as we go along. But as mentioned, we're going to use the schema LLM path extractor in this workshop. So with schema LLM path extractor, you have to define the types of nodes you want to extract. So here I went for person, location, organization, product and event. So it's mostly a very typical extraction. And then there's the event, which is kind of more ambiguous and allows the LLM to extract a lot of information, right? Because event can be anything basically. And then I also, we also have to define the types of relationships we want to extract. So here I focused more on the like organization business part where we have suppliers, competitors, acquisitions, subsidiaries, CEOs, stuff like that. So we are going to hopefully extract some business relevant slash financial information, hopefully, from the knowledge graph. And then, so this is the first part of the, when we are defining the schema. And then the second part is we also have to define which information, which relationships is assigned to each person, right? Because not all relationships can be part of all node labels, right? So we have to define, so for example, a product only has provides relationship and then provides is only on the organization. So then ideally, the RLM would generate only provides relationships between organizations and products. Because it doesn't really make sense to have provides relationship from location to let's say a product. So this is a little bit more granular schema definition that we need to provide and then we just pass the possible entities, relationships, and validation schema to the LLM. Here you have the strict mode. Strict mode, even if you provide instructions to the LLM, which types of nodes and relationships it should use, it doesn't really mean that it will follow them. Oops, where are you? 100 percent correct, right? Because LLMs are just LLMs, they do what they want. Then Logan implemented a strict mode. So it means that, but since we know the types of relationships and nodes we expect, we can filter them out. If we want to in the code, right, or we can leave any other nodes in the relationship to LLM identified. So in this case, let's just allow any information the LLM decides additionally to extract. Now GPT-40 is quite good at following the provided schema, right? But other models, and this is also because, as I said, GPT-4 is a native function calling model. So when you're using functions or tools to extract information, it will have much better accuracy, whereas like Lama3, have much better accuracy, whereas like Lama free, which is not GROK. So Lama free via Lama doesn't have function calling. It's still a really good model, but it might not follow the schema always. So that's why you have the option to filter it in post-processing if you want to or not. And here we'll go for the not. And we're going to extract information from 100 articles and it's going to take like two or three minutes, I think. So we have time for a couple of questions. So we have time for a couple of questions. Yeah, for sure. I'm trying to figure out what questions to ask. Maybe one thing is actually going back to the retrieval side. So, you know, there is vector search with the vector context retriever. And then there's also a text to cipher. You mentioned some limitations of text to Cypher, like in your mind, like what are some of the, maybe like tips and tricks you see in getting text to Cypher working a little bit better for users in terms of making it, making sure it generates more reliable Cypher queries, how to make sure it actually retrieves relevant context. So, but I mean, this is kind of a hard question. contacts. So what I mean this is kind of a hard question so text deciphered works good for like the when the user knows what's in the database right and knows how to ask the questions that fits the schema. So that's one thing and then how to ask the questions that fits the schema. So that's one thing. And then how do you achieve that? So what you could do is you could have some query rewriting steps that take the user input and rewrites it into more of a question that fits the graph schema and it's a little bit more, it will both or implicit on how it wants the information to be retrieved. So that's one thing, obviously providing it with few short examples is very helpful because by default it uses zero shot generation, right? We just give it the graph schema and then hope for the best. But what you can also do is you can also provide it a few short examples and then hope that it follows those examples. And obviously the thing is like with more complex graph schemas, there's like just how to describe those schemas takes a lot of tokens. So when like, and then maybe not linearly, but the bigger the size of the schema, the less it will. I mean, yeah, the worse the accuracy will be. So what you can also do is then just provide parts of the schema. So instead of having one text to cipher that deals with the whole graph schema, what you can do is you can have an agent with several tools, and then each of those tools focus on different parts of the schema. So you kind of lower the complexity of the task. Cool. Yeah, that makes a lot of sense. Yeah, that makes a lot of sense. I know it's about to finish up, but maybe just another question and we can also carry this over after things are done, but is Neo4j designed to work with technical document use cases like patents and scientific papers? Will it help in identifying and building relationships between you know science technical concepts uh that's one of the questions from the audience yeah so uh yeah how you say neo4j is domain agnostic so you can store any information you want in it that being said it's's quite funny that you mentioned patents and technical documentation because that's really relevant, or at least what we see a lot of pharmaceutical or biomedical companies, right? There's a lot of money in patents and for me it was also interesting for example biomedical companies when they have this great idea what we should do or what we should research you know what they first do they check if there's already a patent and then if it's already a patent they don't research it because it won't make money. You can't patent it. So I've seen basically like big pharmaceutical companies, they all have their patent graph. They all like scrape like PubMed. You don't actually have to scrape it because it has APIs, right? But it's like you can think of it like biomedical technical documentation, like with all the latest research, and they generate knowledge graphs from those and then use it to inform or recommend. So one thing that they do is basically they generate graphs from all the latest research, then they use recommendations to recommend to doctors based on their specialization, which articles they should read. So yes, definitely Neo4j can be used for patents and is actually used by existing customers for patents and technical implementation. Okay, yes. So now that we've imported the graph, we can also take a look at it. Right. Graph visualization is usually quite nice. So let's see. We have, okay. So we can see that, for example, let's see why we have an award. We have two awards, no labels, so that's kind of funny. But it wasn't in our description. So even GPT Photo can decide, ooh, awards are really nice nice of the FA Cup and English League title. So let's see, basically probably will be who won. So it was Gordon McQueen was awarded the FA Cup. So probably there should be a football team in there. But it's interesting, you can see that even, and we have a disease as well. So even GPT-4O can decide to add some information that wasn't in the schema. So that's why we have the strict mode right if we use strict mode to do we wouldn't see those these nodes in the graph because obviously we would filter them out programmatically right and then let's Let's try to see if we have, I'm trying to find if there's anything more connected, but basically unfortunately. Okay, cool. So, and let's, so we have, for example, United Health Group is a node. And now we can see a bunch of competitors, right? And we can also see probably it's not doing so well because it had a stock sell-off and stock price decline, right? And this is, as I mentioned, event is kind of ambiguous and it can be a lot of things so in this case it was stock has invest the stock price uh declined and john rex works at united health group right so like or or overall uh the gpt photo uh followed uh the uh GPT followed the schema quite nicely. And we can see a nice graph over here. And let's go forward. And then as I mentioned before, entity retopogration is kind of a must. I think it's often overlooked, but you kind of want to find entities like nodes in the graph that reference the same real world entity and merge them. And here we have a kind of involved cypher query, which took like eight hours to come up by multiple people. But in the end, we found a nice way of using text embedding. Here we have the cosine similarity threshold and then word-heading distance. How many characters can you change in the string to have it the same? And you can see it works quite well. Like for example, Bank of America and Bank of America Corporation, Violin Music, Violin Music Group, like Newcastle, Newcastle United, Coinbase. So overall, it works really nicely to find these duplicates, but obviously it's not perfect because nothing in life is perfect. So for example, this one is kind of, I mean, it's the same, it's still jeffrey's virtual space summit right but one is fireside chat which is yeah maybe not really but fine and for example baltimore this one also right okay i can understand that these two should be merged but maybe this is a city and shouldn't be merged together. As always, you have the option to tweak these two parameters and you also have the option to then do some manually like human in the loop. Here, human in the loop is kind of important to know what entities are you merging. But I think like just having like some sort of baseline to start with is really nice. And I think this ciphered credit is really nice because you can see a lot of entities that should be merged together. And let's just merge them together. And then for the last part, as we said, we're going to implement a custom retriever and we have the four existing ones. But here we're going to implement a retriever that first identifies all the relevant entities in the text. Because, for example, the vector context retriever just takes the whole string, embeds it, and then finds the relevant node. embeds it and then finds the relevant node. But what if multiple entities are mentioned in the text? Then vector index might not be the greatest because it will embed the both entities into a single embedding. Then who really knows what happens with those numbers? There's a bunch of zeros and ones. And what do they actually represent? Who knows? So what we'll do is- Really quick before the retrieval piece. Actually, quick question on the entity disambiguation. That cipher query, I mean, given how involved it is, but given the fact that I imagine a lot of people probably need to do some sort of dedupe, is this a template that's just shared publicly? Because it seems like it would be generally useful for a lot of people. Yeah, this is part of the block. This is all available over here. I mean, we can add a link in the webinar if you know how to, but it's this one. So if I know how to do chat. Let me spam it a little bit. To everyone in the room. I think we shared the notebook. Yeah, we shared the notebook in the chat. But basically, to the audience, it's like if you want just a nice Cypher query to do, and to see a dedupe, obviously there's some limitations. You probably need to tweak the word similarity and those types of things a little bit. But if you want an existing template to go off of, you can just copy and paste from this notebook. Because it's a pretty long cipher string. So I would imagine a lot of people are going to be able to write this themselves. It's also, I would make it a model in Lama index. It's just that then it's like very Neo4j specific. And then like, it doesn't fit the best into Lama index, because you guys want to have things that, I say integration agnostic. So, but maybe we can figure out that in the coming months how to add that, because it would be nice to have that out of the box. You just expose these two parameters and let it do the magic. But maybe this is something for them. Yeah, no worries. I think even the Ross cipher is useful for the audience. And then just doing a quick check on time, I know we have technically five to 10 minutes left, but I know the last section is just like the custom retrieval section. But maybe we can just like walk through the high-level concepts, maybe just like go through the overall class and then that should be a good conclusion to this workshop. Yeah, we can do this actually quite fast. So as I said, we select entities from the user input and we use an OpenAI Pydentic program. So basically again, I would imagine we kind of use function calling behind the scenes, right? We say, this is your input parameter and it's a list of named entities in the text. Then we ask GPT for O to extract it. So basically, so then, okay, I'm rambling a bit, but so how do you define your custom editor lever? So your custom editor lever just needs two methods or actually just one. But the init is also quite nice if you want to instantiate, for example, some other functions or classes. And in the init here, we instantiate the entity extraction, which is the OpenAI PyIdentic program to define, to extract relevant entities from text. And then we also extract, define or instantiate existing vector context retriever. We can use it. And then in the custom retriever, it's actually the code is very simple, right? We just extract or find, detect is maybe the best word. If there are entities in the text, so if there are entities in the text, we just run a vector retrieval for every entity in the text. And if the LLM doesn't find any specific entities, we just use the vector retrieval on the whole text, right? And that's basically it. And then you have a couple of options. Then you have a couple of options on the structure or format of the results that you can pass back to the LLM. Here in this example, we just pass back the text. We can remove this because we don't need to change anything. this because we don't need to change anything. Yeah. And then we just basically instantiate the whole thing. And let's see what happens. So if you ask what do you know about Maliak or Dara, basically the LLN detects two entities, right? And then for each of those two entities, it runs vector retriever separately so that it kind of ensures that we will get more information for both entities, right? Because if you just use vector retriever on the whole string, text and bearing on the whole string. You might just get it for one entity, but not the other, right? Because if you use topk4, maybe one is more significant in the text and bearings. But with this approach, we make sure to cover all the entities so we get nice answers for both entities. So yeah, that's like a high level overview of the data lever and now we can answer a couple of questions again. Yeah and maybe just kind of like say a few words to help wrap this up. I think, you know, what Tamaz really showed you was an end-to-end process of both like constructing a knowledge graph and then also retrieving from it. a beginner user for knowledge graphs and llms and llama index and neo4j you know you can basically get do all this stuff in about like five lines of code or if you really want to go in you're an advanced user you're pretty familiar with knowledge graphs we offer a lot of opportunities for you to define your own custom extractors right with our core abstractions um like a robust like property graph store like the underlying low level like storage system. I think a lot of people are interested in knowledge graphs. We basically see it as like a superset, a potential superset of existing RAG solutions, especially if you're able to leverage these like properties and relations to help augment your retrieval. And there's a lot of very interesting. Like an enterprise developer, you're building knowledge graphs within the company, feel free to, you know, reach out to one of us for any sort of like blog posts, case study. We're always happy to feature like really interesting use cases of like knowledge graphs, LLMs, like LOM index and Neo4j. Right. And so always happy to showcase like very interesting applications. But hopefully this workshop was, was useful to all of you today., and you know, we'll have this on our YouTube, um, channel and then basically, hopefully we'll do, you know, maybe even like a series covering like other types of topics, um, as we go forward, but we're definitely looking forward to, to new types of applications, um, built with like, uh, knowledge graphs, KGs and other ones.LMs. So I think with that said, it's probably a good time to wrap up and really sorry. I think a lot of you had a lot of questions in the chat. We weren't able to get through all of them, but we'll have this YouTube video out and basically feel free to comment there as well. So thank you everyone. Thank you Tomas and Logan, for hopping in. | LlamaIndex Webinar: Advanced RAG with Knowledge Graphs (with Tomaz from Neo4j) | 3,212 | LlamaIndex | 20240618 | In this webinar, you’ll learn how to use LlamaIndex property graph abstractions with Tomaz from Neo4j:
1️⃣ High-level property graph index + neo4j to easily construct/query knowledge graphs
2️⃣ Low-level details on graph construction
3️⃣ Low-level details on graph retrieval/querying
4️⃣ Knowledge graph agents that can reason over complex questions.
Documentation: https://docs.llamaindex.ai/en/latest/module_guides/indexing/lpg_index_guide/
[GitHub](https://github.com/run-llama/llama_index)
[Discord](https://discord.gg/dGcwcsnxhU)
[Twitter](https://twitter.com/llama_index)
[Linkedin](https://www.linkedin.com/company/llamaindex)
#knowledgegraph #llamaindex #llms #ai | 2024-08-13T10:41:20.505988 |
https://www.youtube.com/watch?v=k8It3_um-sY | All right, cool. Welcome everybody. Usually I say good morning, good afternoon, good evening. In this case here, it's definitely good afternoon. Thank you for joining the Life Science Workshop 2024 in Munich and around the world globally. So thank you for joining. I'm looking forward to this extension of a very interesting morning session we already had here in Munich on-site with interesting presentations, interesting conversations throughout the morning and now we continue with more conversations more tech hands-on sessions and more you know interesting content throughout the rest of the day so we are doing this for a couple of times now we have I, done the last one in 2021. We did a couple of ones on site and online a couple of years before. But now again, we continue doing these life sciences workshops for the community, with the community, together with everybody involved. Again, here in Munich, in this nice venue here and online on the screens outside. So it's really cool to do this again and to continue to work with the life sciences and the healthcare community on graphs, technology, Neo4j. So a few bits and pieces. So we have a Slido link that also rotates here for the people in the room, but also online. I have shared the link in chat. Use that for your questions, for your Q&A. There will also be some polls in the break, so check it out and let us know if you have any questions. We have some minutes after each session so I can read out any questions you have and see if we can squeeze some questions in. So it's always cool to keep the interaction going and to get you outside basically involved here with us in Munich. And yeah, I think that's it. We will record the sessions, obviously. So if you're wanting to watch something afterwards, you can always head out to our YouTube channel, but we'll send you the link for that. And we share slides as we can with you as well. And yeah, let's have fun. Keep the conversation going. Enjoy the day. Enjoy the afternoon session with us. And I'm heading over to Alexander Jarosz, who will kick it off. Hi. Thanks for staying here. So we have half of the audience from the morning session. Hello everybody around the globe. My name is Alex. I'm a technical consultant for pharma and life sciences at Neo4j. I'm based in Munich. I consult pharma and life science customers. We have a great session in the afternoon. We will talk about Gen AI applications mostly. I'm basically setting a little bit the stage about vector indexes and what you can do with it. I think we have great stories that can the speakers vectors, vector indexes and what you can do with it. And I think we have great stories that the speakers will present them on their own, obviously, and present great use cases. So I'm really looking forward for the afternoon session. Let's get started. This is my pointer. For the ones that have been here in the audience in the morning session, we talked about vectors, right? And in the terms of Gen AI, we want to represent clunky data from audio or text or video or images, machine readable, but also to deduce insights from that and calculate similarities and so on. And this is where large language models or JNI in general come into place to represent such data, which is clunky and bulky, and we don't want to persist that in the database, so to say. We want to represent it in a lower dimension something like a hundred dimension a thousand in neo4j in order to use that for for quick lookups and the main purpose as i said is calculating similarity very efficiently while having lower computational cost in terms of storage but also in terms of the computation itself so just for the theory we have a graph here and two nodes and we want to know how similar node u to v is and this is one technique to traverse the context so the local neighborhood of both nodes here and then transforming that into a vector space. And if they are very close in this vector space, we say that node U and V are very similar to each other based on their context. And we can define the context on the node level down to properties as well as on the relationship level also down to properties. Just like here one example, which represents here some articles, can be PubMed, can be patents, can be any sorts of text, medical sheets, we will come to that in a second, that are here represented as vectors. I don't know, 1536 dimensions, 3000 dimensions that are coming from either PDFs or from unstructured data, Word, Excel, PowerPoint, whatever it is, that gets feed into the LLM and the LLM translates that or encodes that into a vector and we can store the vector in our vector index. And then we have these articles in this case here connected to different so-called entities, for example, that are mentioning a specific word and this is called named entity recognition neo4j and vector indexes um one limitation or many limitations that an llm has is that um llms tend to hallucinate so they make up things that we not necessarily know or sometimes we can recognize that. But it's really, really hard to trace that back and have this kind of explainability and transparency. And that's why hallucination is a hard problem with the LLMs. Sometimes they are unable to verify the answers or we are unable to verify the answers because in the end it's a black box, right? You get back a vector or an answer but you can ask is that true? I did it on my own and the LLM made something up and made it even worse. So it's really complicated to verify the answers. There are some ethical biases. Also, if we think about the approval by the FDA, so it has to be explainable, it has to be ethical. So this is where large language models are really not good. And this is where knowledge graphs comes into place. And in the morning session, we heard a lot about knowledge graphs. So what is that? Nodes and relationships in your company, in your organization, where you have facts and entities stored as nodes and relationships. So this is the left-hand side here. We have the facts that are explicit in the database, nodes and relationships. And if we take the time, we can explain it, right? We can traverse by eye and by human interpretation, we can traverse the graph. In general, that's possible. Can be tedious if we talk about large graphs, in general we can do that. And we use the advantages of an LLM that can easily deal with words and semantic search but this is implicit right so we can we do not have the facts from the LLM and we combine that as a better together story. There's also a slide that you hear in the real audience so early before. So we have the features of a vector database, so we have our own vector index and on top of that we can not only identify similar neighbors or decay nearest neighbors but we can also query the context of those neighbors which can enhance or enrich the answer or make the result more accurate. In addition to that, we also ship our graph data science library that has some algorithms for specific graph problems. Now, a little bit about the difference of vector indexes when compared to graphs. What you see here is a knowledge graph that represents your knowledge. Can be billions of nodes and relationships. This is what usually the customer builds on their own or downloads some public datasets and enhance that, enrich that, and represent the knowledge as it is, connected with heterogeneous data from the data sources. What we can do is we can calculate embedding vectors, so these ones here, so we can calculate vectors, these are represented here as these little sticky nodes, we can calculate them by using information from the node level, from the relationship level, and also from the property level. And they get projected in this very small vector that we can then use for the lookup. And what you see is kind of a dimensionality reduction, basically a projection of a multi-dimensional graph on, let's say in this case here, a 2D space. And then in the end what you do with the vector index, you look using natural language, you use, for example, you use kind of the, you're looking for the nearest neighbors concerning your question. And these get efficiently identified here in this low dimensional space. And in the end, what we do is we translate that back to the corresponding nodes in the relationship. And this is where your knowledge graphs comes into place, because we then traverse from these nodes to the context. That's basically a pretty simple basis for it. a pretty simple basis for it. And this brings me a little bit of how does that architectural look like. So we use LLMs in two ways. The one that I touched so far is the one on the left-hand side. So we try to integrate any sorts of data you can think of. RDF, as I said, unstructured data like text or images, videos, audio files, structured databases like SQL, Mongo, whatever it is, or even single files. We ingest that. We can do that automatically even using the LLM. So any sorts of LLM that you either host on your own or you use the one from the cloud providers that automatically do, for example, named entity recognition or feature engineering and store that in the graph database. We can visualize that with different visualization techniques. One of our in-house ones is called Bloom. We have NeoDash as a dashboarding functionality. And as I said, we can use the graph data science library on top of that to get more insights out of the data. On the front end side, we can use the LLM and kind of democratize our knowledge graph by having it kind of an interface between the user that doesn't speak any programming language or query language. language input that gets translated via the LLM into a cipher query language and with some traversal other queries so we will see that in one of the other presentations later and then deduce something from the graph and get it back as maybe a table maybe a natural language answer a little bit also more technical, how could that look like? So we can use here, as I said, the unstructured documents here. We can put that into a langchain. We can also use a reference ontology or many ontologies in order to say, well, while you're integrating, please use our standard terminology instead of using some random entities that you recognize and try to match them according to the ontology that at the same time gets stored into Neo4j. Talking about the hallucination effect and how we improve accuracy by using a knowledge graph. This is called graph-reg orval augmented generation. And this is a pretty simple architecture here on a high level. We have a user question here. I don't know, please give me all the patients older than 75 with a BMI index of 34. Something like that. This sentence from me, the query gets translated into a cipher query here, the cipher query gets performed on the knowledge graph, retrieves the information and as I said gives also back the context of the of the note here, of the notes. And that makes a big difference because usually this is the proprietary data or your secret data that you in your organization or company have, right? Nobody else has this and that data that is connected internally. So that's why you reduce the hallucination effect by feeding the answer with actual facts from the knowledge graph. Some of you may have seen this slide before. So I'm showing that in order to give you an understanding what that means. Let's assume we have this knowledge graph here on the left-hand side. Can be billions of nodes and relationships. And we query the graph and we get back the answer, which is AD. Now I ask you, what is AD? AD could be referring to this node here, if we got a little zoom in here, this node AD is connected to the context of a disease term. So that tells me AD here in this case is Alzheimer's disease. So the context gives me more insight and reduces the hallucination effect. Because it could be a different part of our knowledge graph where AD is returned with a different context being a protein. So in this case, AD could stand for alcohol dehydrogenase, which is a protein degrading alcohol. It's a completely different thing, right? And based on the context, we get a better answer and a more accurate answer. And that can be any other kind of result in your knowledge graph that has different context. And this is where Neo4j Enterprise comes into place because all of this is working out of the box with role-based access control. So see your enterprise knowledge graph having sensitive data from patients or HR data or R&D data, some of your users are meant to access the data or query the data but not displaying it. So we can use that with role-based access control, even value-based access control, so VBAC, in order to keep the graph safe but also be compliant. Just some words about the use cases. As I said, we have several other customers speaking. The use cases usually are there for getting similarities of, for example, text that we input and all these sentences here are basically saying the same. So the LLM understands that these sentences are basically all similar, so very close to each other, and we can look up vectors efficiently. Earlier in the morning, we talked about the applications, how can we use that, for example, as a recommendation engine based on the context and the vectors, we can recommend, I don't know, the best next hotel for your holiday. Or in terms of insurances and banking, we look for fraud using vectors. And the last one is the REC, so the retrieval augmented generation. Now, two real-world examples that we had at small demos, and since small demos never worked, I just used them here as screenshots. The first one is a data set of, I think it's 50 or 60 medical sheets that are publicly available. We just used them here in this little Jupiter notebook and it says here the patient was 34 years old blah blah blah s 100 protein CD 68 I don't know Typical AC age right you get the point. There are things in the text that a Normal user would read or an expert being a medical doctor would read and say, well, I recognize here the age of the patient and I recognize, I don't know, a mutation in S100 protein, whatever it is, right? But this is the bottleneck. The bottleneck is our users that understand the domain and we need them in order to transform this ugly text here and the kind of the speech and language and all the preferences, how they express different terms, and put that into a database. And this is what the large language model can do for us. And we are doing that by so-called prompt engineering. So we tell the LLM what to do in order to automatically populate the graph. And we tell the LLM from the case sheet for a patient below, extract the following entities and relationships. And step zero, we always say it should never give zero answers. So we ignore here step number zero. But step number one is I tell the LLM, because I have something in mind, I draw that usually on a whiteboard, I want to have something recognized being a case. I want to have recognized a person. And if you recognize something that could be like an ID from the patient, please put that as a property in my be like an ID from the patient, please put that as a property in my node person. And maybe another example, we have a disease. If you recognize something that is like a disease and has maybe an ontology term or an identity or a name or description, please automatically recognize that and put that into the node label with the property. And the second step, which is the third step, is once you recognize all these entities here, please connect the nodes in the following manner. So you connect the person with the case with a relationship called for. You connect the person node with, let's go back here in the end, you connect the person with the symptom via the relationship called hasSymptom. And we can give it some properties. So what we do is basically tell the LLM what to do, maybe even give it a role. You are an expert in pathology, blah, blah, blah, something like that. Or you are a patent lawyer, please recognize, I don't know, patent numbers from the UPC, something like that. And we can even feed here an ontology into it. So we can say, please, by using disease terms here, please use the standard terminology coming from Mondo or coming from something. Super cool. And it works astonishingly well. In this case, what I would like to have is a JSON format as an output and that's basically what the LLM returns me. And for this case here, I don't know where I have the time, I think it took non-optimized way and waiting a little bit in the query with chat GPT, it took like two minutes or so. Which is not scalable for a thousand things but as I said not optimized and I was in the queue. So usually you use your private LLM in order to transform a very ugly text or an image or whatever or kind of a mix of everything automatically into the database. Second example, what I hear from customers in the pharma industry, well we are a regulated industry so ontology terms are there every day. So you have to be compliant to the ontologies. It makes sense to speak standard terms, right, for all these global standard terms for all these global organizations. But the problem is that we have internal and external terminologies or ontologies, a mix and match. So, I don't know, for example, the AstraZeneca are more specialized on this topic. So the ontology would be extended by 25 terms, while bio on the other hand is more, I don't know, in the cardiovascular space or where it is, right, and they have more some kind of specialities or they treat things differently. So we can mix and match. And that is exactly the problem. So one example here, and we did this obviously with public data sets, we have mesh terms, which is not an ontology, but the mesh terms of scientific articles is a kind of a hierarchical structure here. So we treat that as an ontology or terminology. We have the disease ontology here and some sort of Wikidata export here for treating disease ontology terms. And the cool thing is public data sets is that we have these so-called cross references saying that we have an identifier here ABC, which tells me in this ontology, this corresponds to one, two, three. So the identifiers, what we can do in a knowledge graph is just create this relationship persisted in the database as an object to say, well, this term is the same as that term. And this term is the same as that term and this term is the same as that term and the end closing the loop here for the for the triangle and this is how you would mix and match internal and external terminologies it comes in every kind of flavor like xml oboe rdf turtle files and so on so we are easy with that we We can use that. We can import that and also export that using our Neo semantics plugin or if we talk about Python called RDF lib in our cloud environment. The problem is sometimes we do not have and maybe we go one step back. Sometimes we have partial information only or we don't know if this corresponds to that. Right? There are some gaps and I would like to know is this term the same as that term. So how do we do that? Well we can ask an expert, right? We can go into the disease ontology term, the best expert at Novo Nordisk ask him do you think that this subtype of diabetes is the same one as this term here in the other terminology? And they will say, yes or no. Well, obviously, it's not scaling, right? So how can we overcome that? Well, we can ask a large language model to help us here a little bit. And I hope you can read that. I have three examples here with me. I don't know, Alex, do we have a zoom here on this remote control? Double click. Here we go. Okay, first example. Human papillomavirus infection is the same as human papilloma virus infectious disease. Okay, makes sense. Human papilloma virus infectious disease is the same as papilloma virus infection. Okay, so we if we would read that, right, we would say, why not creating a new relationship in here. And this is exactly what an expert would do. But this is an easy example, right? We go to another example. Shigellosis is the same as Shigellosis. Okay, makes sense. Shigellosis is the same as dysentery bacillary. I don't have a clue. So I'm not the expert. So how do we deal with this one? Last example, Bejel. I hope I pronounced that correctly. Bejel is the same as Bejel. Okay. Bejel the same as trypanoma infectious. I don't have a clue. But what we can do is forward that to the large language model and ask, do you think that these are the same? And that's what we do. We have here the prompt. Given the two different disease names, tell me if they are the same. And we input here the first disease term and the second one. And we go through all the records here in our ontology. And if the LLM says yes, then we are parsing the information and the description why the LLM thinks this makes sense. Keep in mind the hallucination effect. And in the end, we are matching those two diseases here that we call, in this case, resources. And we create a new relationship. And I just gave it a name similar according to chat GPT. So pretty easy, pretty straightforward. And this is how it looks in the end. So the LLM tells me they are the same. And in this case, I just made one example here. It tells me even the description of it. Yes, they are the same. Shigellosis, also known as bacillary dysentery, is infection of the intestines caused by a group of bacteria called Shigella. Okay. Obviously we can ask an LLM back. So are there entities in my knowledge graph supporting that or not? So right, keep in mind the hallucination effect. But this is just like another example how you can use an LLM or generative AI methods in general in order to help you make sense of your data, enrich your data or give more context. Just like a small preview here that we have a so-called graph builder. I have that on the slides and there's a YouTube video tutorial. We have built a little demo tool for you that works with your either with a sandbox of Neo4j or your own Neo4j database you just connect to as the connector somewhere you connect here to the database so the screenshot was not connected. You can drag and drop any word file whatever PDF you can drag and drop it here the named entity recognition as in the recognition as in the first example is done automatically. You can also say I want to transcript and encode my YouTube video or Wikipedia article or I don't know, S3 bucket somewhere in my cloud, which has unstructured data and just automatically populate the graph. This is super easy for trying it out. So I was amazed how well that works. Obviously, you can improve and optimize things. But this is, I think, a cool start to get a little bit into the rhythm and to see what's possible and what's not possible. And with this, I'm already at the end of my intro to the JNAR part and afternoon part. Yeah, hope that was informative and easy to understand, easy to adjust and happy to take questions. And I'm really, really looking forward for the next presentations. Thank you. Cool. All right, Alex. Do we have questions? So far, we don't have questions. Do we have questions? So far we don't have questions. Maybe we have questions in the room. Somebody here on site. There's a question. Yes, Robert. Maybe I'll get your microphone. Yes. Sorry. Thank you very much for your nice presentation. So you showed what we can expect based on LLMs information to knowledge graph. Yes. And defining some subjects or some ontologies. So you have also functions that you include in entire ontology. Yes. Okay, you really can upload it. Yes. Perfect. Yeah. You can, it's a one-liner, say, use this ontology. Boom. You can also optimize it by giving it already a predefined data model. So not making up things. You can also say even like I'm totally unbiased, please come up with a data model. And then do a second round and a third round of optimizing it. Or you say, well, I want to have patient or patients or something, you know. Some people call it diagnosis, some people call it condition, or the ontology term. Totally flexible. Also by the way, if you're using the LLM as a user input, you can say, well, this is the prompt from your user, please use as far as you can get with the ontology term and the data model in order to give back the best answer yeah cool thank you any any other questions in the room sounds like maybe maybe i have a question then yes um in in your view i mean you talked a little bit about the llm about this this integration gen ai integration how much of an impact do you think does this make for for the, for the life science industry? I mean, you showcased a couple of cases, you showcased a little bit of what it could potentially be, but what's your thinking? I think, and I was a user and I was in the pharma industry before, I think our data in our domain is extremely connected, while at the same time extremely heterogeneous, which makes it really ugly to work with relational database systems sorry to say that here I'm very biased but at the same time we have problems of integrating that making sense out of it and in the end several people who are not trained for cypher Python is on to to get access to the data so I think that makes it a big difference yeah it's a huge a huge hurdle as well I mean to adopt and makes it easier and especially the visual aspect as well I think is is another doesn't have to be it's not a reconnection with LLMs of which any I but I think the visual aspect of the graph database in the deck is a huge huge plus yeah the only thing that is is very sad is that we for example in the pharma industry we will see in 10 to 15 years if this jenny i thingy and neo4j made a big difference and if there's the next drug which is probably based on a combination or on one of them so let's wait another 10 years we we have a couple of questions from from the internet so um let's uh let's see um from mark when llms create a graph database are there good tools for identifying structures that are likely to be hallucinations can this be easily used to improve locally trained llms i think we go maybe a little bit about this in the in the final session of today when thomas shows yeah hands-on like how how how to use uh neo4j and llms with in this specific use case so maybe we table that for later or do you have a quick that's a tough question i must say um so are there i think we are already seeing patterns of generated answers that are not specific enough or also tend to use the same words over and over again. But I don't have a clear answer on that. But maybe Thomas can comment on that because he's doing that all day long. So that would be a good question for him. Let's wait for that. That's a super good session. I think that fits um are there any use cases for predicting new nodes as a question from gregorius uh for sure so currently i don't work with any customer predicting nodes but well link prediction is one of the the big topics for creating links between, for example, drug targets and a new indication. New notes. Haven't seen something yet. Yeah, I mean, with the graph data science library and all the algorithms, we come I think the link prediction is probably the most likely one to make sense in terms of graph use case, because you then see, okay, you have a couple of links here, whereas a good link that is currently not established. Maybe one avenue towards that question, and we had a, what was it, a podcast with Jason from Developer Relations. He used an LLM for creating a mock-up data set. So for example, you don't have enough data from patients or whatever it is, you can tell the LLM, well, I have this and that data model, please generate a random number of records so that I can debug and kind of create my application. That's a good point. I think I can find out that video where we did a recording of that and share it in the chat. But yeah, that would be creating nodes, so to say. Yeah. Goes a little bit into the app you showed, and the Knowledge Graph Builder app. I think that takes a couple of good info points, data, PDFs, Wikipedia, YouTube videos even. And then from that, you create a Knowledge Graph. And that obviously adds nodes. So now I'm locked out of my system. Great. All right then there's a question from Shang. Could you share the repo of your demo and could you share some use cases of healthcare institutes also since you mentioned using GPT how do you deal with patient ethics issues I think that's a so the the Jupiter notebooks or github repositories will definitely share but there's also github repo for the graph builder we got a ton of others by request yeah we obviously we create more and more demos in the morning session I showed the the demo from Stuart about Campbell data integration. So usually we share that. What was the second part of the question? The second part was, since you mentioned using GPT, how do you deal with patient ethics issues or security of data, property, personal data, especially in patient? Wow, tough question. I would answer it in a way that you can, as Neo4j provides role-based access control to restrict access to certain sensitive data to certain users. And that can be a traversal, but can be that you allow the traversal, but not visualizing that or everything. So I would answer that on on role-based access control which is a database feature right yeah yeah and then I think you can also take it in in as far as that you should probably don't upload your patient journey data or your patient confidential patient data into an public chat GPT like system on the internet true there True. That's probably something you should not do. Obviously, yes, the data will be used in other ways. So that's something you should be aware of. But, I mean, you can... For the companies, that's a no-brainer, right? I don't have to tell anyone that AstraZeneca would never, ever use the public version of chat GPT to put in any sort of proprietary or confidential data or sensitive data right so that's why you have your own either private LLM that you that you trained on your own or you use kind of the from one of the cloud vendors which is in your VPC isolated ecosystem yeah and then you can obviously deploy a GPT likelike system on your own instances, on your own system, and then it's closed, fenced off. I think that's really important to be aware of this, that you can do this, you can do this locally or in your own virtual private cloud environment. Then connect it as well to a Neo4j instance to add the knowledge graph. Everything stays in your house, basically, in your private servers, in your private infrastructure. So you can do all that without going to the cloud if you don't want to. And especially in this kind of scenario where it's really confidential, it's very important to stay secure, keep it as secure as you can. All right. We have one more question maybe from Dag on the Knowledge Graph Builder app. They ask, when extracting data from the YouTube YouTube video is that only the description and title or does it include the transcript as well I think I can I can also answer that yeah it is I haven't used the YouTube part of that yeah it also is the transcript so it takes it takes a transcript and loads that in and creates out of the data that it finds in the transcript similar to a Wikipedia page takes it all in and creates the chunks and and adds that information to the knowledge graph does that refer to all the songs of Taylor Swift being transcribed and then put that into a knowledge graph I don't know do that build that and then chair link with us interesting good all right I think we have maybe one or two more minutes until the next session, but I don't know, do you want to say a few words before we dive in with Sebastian or should I take another question? Maybe one more question. Here from Alejandro. What is the challenge associated with the adoption of new ontologies that exist out there? It's a wide question, I would say. Yeah. Well, if you're very critical on everything what's new and not being already a standard, you can be very critical on that. But I don't see why that is an issue. Usually what we see with customers is that they integrate not only one ontology, maybe that is an avenue for the answer, they integrate more than one ontology, they versionize the ontology so we can even keep track of, yeah, an ontology is a living thing. And you can traverse back and forth, say, well, the new version or the new subpart or the new subgraph of the ontology, I'm not trusting that or I don't like that or it doesn't fit to my application. I'm using two versions before. And it's probably also a good kind of like almost planted question, but it isn't, to our next presentation right yeah we gonna talk about ontologies we're gonna talk about well not we but Sebastian will be talk about how you can load in different kind of anthologies make a knowledge graph out of out of external sources and enrich your knowledge so I think watch out for the the next session basically yeah so we hand over to Sebastian from University of Heidelberg, or University Clinic of Heidelberg. He should be connected remotely. He should be. Hello Sebastian. Oh wow. Oh here it is. All right. Hello. Good to have you here and looking forward for your presentation. Yeah. It'll be cool. So I hand over to you. Maybe you give a little introduction to yourself and yeah, take it on. Thank you. Do you hear me? Yes. I even get a little bit of feedback. Well, I guess that's fine. So my name is Sebastian. Many thanks for having me. I'm sorry that I can't be there in person today. I work, as Alex said, in Heidelberg with Julio Záenz-Rodríguez and more recently also connected to the EBI at Open Targets. And I want to talk a bit today about the kind of pressures that we had for finding solutions for dealing with knowledge representation and usage problems in this general area. So I want to maybe start a bit on a tangent So I want to maybe start a bit on a tangent. Because I want to motivate why we do what we do and the way that we do it. And I think it was a great introduction to the general principles of Gen AI in this previous talk. And many of the things that we do a little bit differently are motivated by the special requirements that we see in in biomed specifically and one of these is what we see here this seemingly trivial statement that correlation is not causation but while it's superficially pretty simple it gives us a hard time in gives us a hard time in applications to modern biomedical computation. As an example, you would, for example, be interested in the impact of nature versus nurture on a given trait or a disease. And this becomes complicated really quickly if you look at the modern omics that are supplied, because we very easily hit the curse of dimensionality looking at all these genes and variants. And this becomes even more complicated if you look at the complete picture, where most of the experiments that you run are informed by a multitude of different factors. And it is only natural now in the time of ChatGPT to be hopeful that generative AI can help us deal with these problems in the same way that it has helped me create this illustration in a matter of a couple of minutes which would have either taken me multiple hours to complete or could alternatively have looked like this. But if you go beyond artwork then it becomes more challenging to address the problems that we have in systems biology, systems pharmacology. For example if you compare the schematic that was drawn by a human with a very distinct purpose to inform the reader about the growth factor pathways in the human, particularly when it comes to pharmacological intervention, then if you want to try and get the same out of the current top of the line models, GPT-4 and DALI-3. You, and this will surprise no one, get something that looks pretty impressive, but has absolutely no meaning. But what is not trivial is how would we get GPT-4 and DALI-3 to actually adhere to the purpose that we want them to complete. Because GPT-4 does have all the information, it knows exactly which proteins are part of the EGF pathway and how they are connected. It just can't translate it into something useful for this case. And we looked at this in a recent perspective that we wrote together with Stefan Bauer from Munich, where we looked at the spectrum of models that we use in systems biology from the low parameter space until the newly minted foundation models with this unimaginable parameter space, and particularly looked at the impact of these knowledge representations that we use and the biases that they cause in the models and how that relates to this problem of correlation versus causation. And one of the interesting insights for me personally was that the more complicated the models get, the less intuitive it becomes how they use the knowledge that we give to them. And so we are starting a series of experiments, and I don't want to go too deep into this, but for example, we examine models with fairly high parameter values, such as the Giers perturbation model, which is used to predict the effects of knocking out two genes at the same time. Some of them may not have been seen during training of this architecture. It's pretty complex and it is about one million parameters. You can't say exactly because it depends on the size of these GNNs here. So two graph neural networks and then some MLP layers. I'm not going into details here, but essentially what is interesting about this model is that it has some discrete representation of prior knowledge in networks. For example, here they use a perturbation relationship graph that is based on gene ontology. So if two genes are represented in the same cellular function in gene ontology, they would be connected in this graph. ontology they would be connected in this graph and so we exchange this graph with a couple of alternatives and we retrain the entire architecture so from scratch and evaluate the performance against the original and then if we for example go to regulons so instead of gene ontology we use regular collections that just collect transcription factor target gene interactions two of them are called dorothea and collectory and the other colors are just unions regular collections that just collect transcription factor target gene interactions. Two of them are called Dorothea and Colectri and the other colors are just unions and intersections of these datasets. Not terribly important in this figure. Then we see that the training efficiency measured with the mean square error where lower is better in this figure really does not vary a lot between the gene ontology representation and the grn representation and then what was even more surprising to us was that if we replaced the prior knowledge with a random graph, in this case Erdos Renyi, simulated graph of different sizes and connectivities, then you would even see more similar performance in many cases of examining the retraining of the model. And now coming to the infrastructure part of the talk, this is just one example of the very complex tasks that we have to undertake in modern drug discovery and related fields and there are several other complicated tasks that we can imagine but there's also this other side to the job which is the more tedious menial tasks that you just need to do. And depending on who you ask, this may even be the majority of the time that you spend in your working day. And this, in general, motivates us to try to develop frameworks that get you to be able in a quicker way. We, of course, want to help with the complex tasks, but I think the trivial ones are the ones where we can more effectively make a difference, particularly with the software stack that we have now. So the first framework that I want to introduce is called BioCypher. It is used to organize and ground knowledge. The paper was in a somewhat grandiose manner called Democratizing Knowledge Representation. But what we essentially want to do is we want to maximize the accessibility of these knowledge representation solutions, not just to the big companies that can hire entire teams for their knowledge representation needs, but rather to anybody who has this problem. And in brief, this is an open framework to reduce redundancies in data curation and management by providing reusable components of these workflows, increase reproducibility by making this all containerizable and, for example, adding it to the Neo4j docker container. Reduce time spent bug hunting in your individual scripts because the framework is end-to-end unit tested. And also ground and harmonize the knowledge based on ontologies and symbolic abstraction, as we have just heard before the session. And this is one example, pseudo-configuration of this grounding which i will get to in a minute so this framework is modular in multiple ways the first modular component is the input side so any kind of resource that you have whether it's on the web or it's some flat file that is on your data lake etc you can pipe into the framework using what we call adapters which are essentially python etl modules and the second component are then the ontologies where you can select from the wide range of available ontologies that we have and we bring these together using this aforementioned configuration, where we say, for example, we have a knowledge graph that is supposed to represent some part of the open targets data set, where we have genes and diseases, and these are connected by edges that the gene is perturbed in this or that disease. And then the third modular component is the output side, which is procedurally very important because it allows you to maintain and analyze your data in the format that you would like, for example in a Neo4j database, but then also archive and share specific versions, for example if you want to publish a paper. And this redundancy that we want to alleviate in some fashion relates to the fact that many of these efforts are often in isolation. And even if they are open source, it's not really clear how to find these things. So what we do is we maintain on our GitHub this project board with all the components that we know of that use BioCypher in some way. So we have these pipelines and these pipelines use individual adapters for individual granular knowledge sources and they use it to produce some output and they use some ontologies. And this is just a way for users to find, in their area of research, has something already been done with my resource, or do I need to start from scratch? And we also build a BioCypher knowledge graph out of this project board, which can then be used to, for example, look at the different areas that are represented, like proteomics or metabolomics. And given that you believe me that BioCypher serves to increase productivity in this kind of knowledge management area, then we encountered the second problem, which is that knowledge usually isn't everything. You still need to use it effectively to draw some insights, to generate some ideas. And together with the recent success of large language models, this inspired us to create a second framework in extension of the first one, which we call BioChatter. And this revolves around this conversational circuit that we have come to get used to in the recent time, where you have a primary language model that responds to a researcher's question. And then we can enrich this conversational loop with all kinds of information. So for example, as Alex has already intimated, we can tune the prompts because we know what exactly we did to the data that we generated and we can inform the language model what we would like it to do more specifically. But I think more interestingly nowadays is having integrated software solutions for retrieval augmented generation that also thankfully was already introduced. So we, for example, obviously connect natively to BioCypher knowledge graphs, which you can use for long-term memory of some agent, or you can use it to inject prior knowledge into your conversation, but you can also use vector databases and then use unstructured knowledge in the same way. Now language models are pretty good at also parameterizing other softwares by just learning the API and then coming up with some sensible parameters. So if you want to use external software, you can also chain to these. And then there are many advanced use cases. For example, you could implement a fact-checking agent that looks to a second language model that does some stuff and then comes back to you with an estimation of whether the response of the first one was actually useful or not. And you could even do things like modality integrations or talking about images and these kinds of things. In the same way as with the first framework we want this to be as accessible as possible so on the extreme end we have a user with no programming knowledge and those would usually use one of the graphical user interfaces we have two prototypes or excuse me, to reference implementations that developers can use to create their own solutions, one of which is in pure Python that's called BioChatter Lite and the other of which is a relatively advanced use case in Next.js, which connects to a REST API server that can also be used in isolation. And if the user is proficient in Python, then of course they can also use the library directly. Now we have heard already that there often is some concern about whether these language models actually do what we want them to do, or whether they confabulate some things or they go in the wrong direction. And this is why in combination with the open source nature of the framework, we also have a living open benchmark in the framework that we continuously execute once new models come out or if we have a new function that we want to test. And this is essentially a PyTest matrix of all the models and all the tasks and the different quantizations, for example, of the open source models that then compares their individual performances and here we can see something not very surprising for example that open ai is still in the lead when it comes to comparing to open source models but some open source models for example this open hermes model are closing the gap noticeably and it does seem to depend a little bit on the quantization, how effective the model is, but there seems to be somewhat of a plateau around four or five bits, interestingly. And the size of the model is not the primary determining factor for the performance. So the bigger LAMA3 model, for example, still does perform worse in our specific biochatter benchmark than this Open open Hermes model, which is a mistral-based model that was fine-tuned on GPT-4 generated data. And then we have some very surprising things that we observe. For example, due to changes in behavior, maybe changes in the system prompt, we haven't had time to look into it, but these new GPT-4 versions that recently came out should be at the top, but they actually drop quite a bit in performance in our specific tasks, which probably can be addressed by prompt engineering. But it's nevertheless interesting that the newest GPT-3.5 Turbo is still among the top performers, but GPT-4 kind of drops down. Of course, we also benchmark the specific functions that we implement between BioChatter and BioCypher. And in this case, it's this text to cipher generation to query the knowledge graph for knowledge graph rack. And for this, we use this configuration that I've mentioned before, or actually a slightly modified version of this configuration that is created by BioCyCipher as it ingests the data and creates the knowledge graph. It also notes down some interesting things about the schema of the graph in this configuration, and this is then passed to BioChatter to parameterize the prompt to the LLM. And if we give these same LLMs, and these are all the LLMs that you saw on the previous slide, if we give these the same information, so the schema of the knowledge graph, the user question, maybe some context or a role description, and we compare the naive query, meaning we give it all this information, but we just put it into the prompt, query, meaning we give it all this information, but we just put it into the prompt, versus we use this implementation in biochatter, which gives it to the LLM in a specific way and order using this configuration, then we see that we have a dramatic increase in performance of the text-to-cypher performance as compared to the just put everything in the prompt process. just put everything in the prompt process. And then one very important component of all the work that we're doing is that we really need to push this open science direction of addressing this ecosystem of knowledge management and usage to go against these traditionally siloed efforts where some people may have questions that other people may have already answered and to kind of enforce this early sharing transparent interdisciplinary collaboration which I think if we want to make a difference we absolutely need to do and also to allow independent validation and then iterative improvement on all these processes and this means to some extent that we need to do and also to allow independent validation and then iterative improvement on all these processes. And this means to some extent that we need to chase additional metrics beyond what we usually do in science, which is looking at the publications, but rather go the way of probably more the corporate life of looking at real world adoption and reproducibility, the quality of the code and the documentation, etc. And taking all of this and wrapping it into a vision of what we could do with these GenAI and knowledge representation stacks, I like to think of it as somewhat of an integrated research environment, similar to the integrated development environments that we can already use programming where you have all of the tools that you require in a singular place that you can easily access and to illustrate I want to go just briefly through this process of a scientific discovery so if you have a new question, if you begin a new project, you often have to catch up on the background. You need to look into the literature and maybe into databases. What is already known about the phenomenon that you're looking into, you need to get caught up. And for this, the retrieval mechanisms with vector databases and knowledge graphs and then having a conversation with an LLM about catching up on the state of the art is very useful. Then once you have caught up, you probably want to design an experiment. And here you again refer to the data that you have, but also it can be very useful that the LLMs are very flexible in the personas that they can inhabit. So if you are, say, a pharmacologist, you maybe want to talk to a statistician. Whereas if you're a computer scientist, maybe you rather want to talk to a medical doctor in making your experimental design watertight. Then it comes to the application of the experiment. If it's in silico, then the LLMs can probably help in parameterizing some functions or just generally coding. And then you usually get a ton of results with these new modern methods that we have, which also means that the results in themselves become a high dimensional problem and here again it's it helps to go back to the previous knowledge that we have and then interrogate the language model for example with regard to the consistency of your result with previous results does it contradict something that we thought was true before etc then it comes to structuring and critiquing your dissemination where again the personas can help maybe you'd like an editor to look at the order of your findings and then at the very end of the scientific process, we are also thinking about ways to improve the peer review process in itself and then maybe help reviewer number two to not trash your paper in uppercase insults but rather give some constructive criticism. I'd like to mention one project that we're starting now due to funding that we've received from Open Targets which is called Open Targets 3088 or Automating Knowledge Management and this will be an open source project that concerns the entire knowledge engineering cycle that we know from the extraction of information identification identification of entities, relationships, etc. from text with a package that is supposed to complete our ecosystem for now. That doesn't exist yet, but is planned to be a result of this project. And here we have the fortune of working together with Europe, EMC and Campbell at the EBI with one position each. Then we want to create a flexible knowledge graph platform for the community where they can use open targets and other data and dynamically create individual knowledge graphs that also include, for example, private data in each case, and then also have an integrated conversational AI using the biochatter package and going to downstream applications. And then there we have this stretch goal of attempting some semi-autonomous human-in-the-loop applications, model chaining. So we would connect to the knowledge, maybe the knowledge that is extracted from papers. We would use the LLM to call simple models, but also more complex models. For example, the ones that Mo is developing in a parallel project for predicting perturbation effects and have a connection to a conversational AI that the user can do this in natural language. And then the stretch goal, as mentioned, to maybe do in situ, in silico experiments, if the questions or answers that we get in the conversational interface lead towards this direction, then we can ask what would happen, like what would be the prediction if we target this particular gene in this disease, what would be the transcriptional outcome or something like that. To conclude, these are the short-term plans for the ecosystem. So we want to work on strengthening the community by crowdsourcing features, but also feature requests. So we need to find out what does the community need? What are the use cases? What are the actual requirements that people have? And make the project more independent of the individuals, the lead developers that currently have. more independent of the individuals, the lead developers that currently have, and develop a unified API for this projected ecosystem, obviously create the BioGather package and make it consistent with the other two, and then having one sort of language to interact with the knowledge across these packages and connect more closely with the machine learning methods circling back to the what i said in the beginning for example to connect more intimately with these deep learning methods like if you want to create a graph neural network from a specific data set that you have about the clinical implications of a cohort that you had, it should be easier to get to a structure, to a particular neural net that you can then run on your data. And also to increase the robustness of the data management and machine learning, which traditionally has been rather on the side and more emphasis on the algorithms, I think it should be maybe equal effort in terms of robustness. And with this, I already need to thank all my colleagues, contributors and also the sponsors to Neo4j for inviting me, to you for listening. for jay for inviting me to you for listening and since you can't really hit me up in the break feel free to get in touch in any way that you feel is appropriate um my dms are always open and you can find everything that i talked about under these links here thank you links here. Thank you. Thank you very much, Sebastian. We have a couple of questions online. I wanted to make sure that everybody in the room also takes a chance to ask some questions. So if you have any questions, raise your hand or use the slide and it's fair for everybody to... The window closes. It's fair for everybody to... So I have a look up here. Sebastian, the first question comes from Georgios. He asks, what has been the main challenge during building the text to Cypher component? And what do you expect will the main challenge the users may face? challenge the users may face? So I think there are multiple dimensions to this. One of which is obviously which language model you use. So if you for some reason need to use an open source model, then obviously you're constrained by the performance of those. And some of them really don't really understand what the user wants from them. That's just a matter of fact in the early stages of open source LLM development. So the main hindrance is that we can't expect an LLM to respond the same way that we would expect a human to respond, like a person that knows cipher and we say like hey do me i want to identify this pattern in the graph and we kind of assume that the model thinks about it in the same way as the human i think we can't do that and we see that the prompts play a huge role and there are more we are integrating at the moment more modern way of generating prompts, for example, DSPY, where you need a metric. But then if you have that metric, you can basically do machine learning to generate the ideal prompt. But yeah, then you need to define a metric, which you would have to do in each individual case. And then this also you need to do again for each LLM. Like if you change LLMs, then you also need to update the the prompt generation but yeah practically the biggest problem is that the llm actually understands what you want and then also understands the graph yeah i think that's that's a good point sometimes you you give it some information, and it's just, as your image showed very nicely, some garbage comes back. And it's like, yeah, OK. But I think humans actually have the same problem, to be honest. Humans also don't understand what you want sometimes. That's a good point. Yeah, you think something is very clear in your mind, and you tell it to somebody else. And then they tell it to you back and then you realize oh they didn't understand at all what you were saying so yeah can happen I have another question from Magdalena she says what role do you think biochatter can play in addressing the issue of irreproducibility of published research I'm thinking in the direction of validating new publications against current knowledge hypothesis mentioned in other papers etc yeah that's one of the reasons why i'm very excited to be directly involved with european cena because this is one of the main concerns and they have been doing so with traditional methods before but now obviously we're looking into characterizing the publication landscape more dynamically or more extensively because previously if a human had to look at it there was no other way now we can use an llm so essentially projects will be created in the next months that look at embeddings of the entire corpus of the EuroPMC, looking at particular flags, looking at similarities in this embedding space. Are there particular criteria where we can say this is fishy paper, right? This looks like a paper mill paper or something like that. And then we can, in a second step, go towards these problematic areas in the embedding space and take a closer look at these papers, maybe also with the help of LLMs just to allow Europe PMC for example in in the submission process already to flag some papers that may be problematic from from one perspective or the other so definitely I think there can be much progress if the right people are involved. Again, invitation to everybody who wants to be involved. The biochatter community or biocypher community is open. And just like you said, Sebastian, ping you or just join the website. Maybe we squeeze in one more question before we continue with the next session. Right, Alex? Do we have more time? Can we do one more question? Sure. Yep. There's one from Rafsan and they ask how do you aim to optimize runtime of llms for biomedical entity detection and relation extraction we are working in a similar space and runtime is our biggest bottleneck at the moment yeah so i think it's a a very pivotal question for the near future of implementing these things at scale. I'm kind of in a convenient position when I say we don't really worry about this problem. We are not LLM engineers. We are just aware of this problem. And of course, we keep tabs on the current proposals, for example, parameter efficient training, whether it's post-training or whether it's parameter-aware, quantization-aware training. I think these look great so far, particularly in the open source space. But these really are inventions of this year, so I expect a lot to happen in our pipelines, for example, in the Open Targets project and with the Europe PMC pipelines that already exist. I think a multi-step procedure is the most useful thing. So if you have a dictionary-based approach that can already take care of 90% of your entities and that has a small economic footprint, then you would do 90% of your entities and that has a small economic footprint then you would do 90 of your data set using this dictionary based approach and then hopefully your approach is calibrated so it can tell you i'm not very confident in this particular entity maybe someone else should have a look at it and then the next stage is escalated to maybe an lLM, maybe a smaller, like not LLM, but still deep learning model and so on. Like doing this, I think people call it deferral architectures, where you defer only the complicated, only the problems that the simple system can't solve, you defer those to the next stage. And I guess it depends a lot on the problem how exactly this deferral architecture looks. Cool, super. Thank you very much, Sebastian. I think, Alexander, you will introduce our next speaker. But just wanted to share with everybody, because the question came up a couple of times now, we will record this session. We will share the slides with you afterwards if we are allowed to. we will definitely share the links to the recording so um don't worry about it you you will you will get that that content uh afterwards and yeah i think with that we we continue with with the program uh our next speaker uh will be introduced by by alex it's gonna be gusf. Can you hear me? Am I on? Okay, cool. Yeah, the next speaker is from Jon Snow Labs. Probably you have heard about Jon Snow Labs before. They are experts in natural language processing. Obviously now also digging deeper into the Gen-AI or LLM part of that. And they will show an exercise how to use LLMs in order to automatically integrate the data from unstructured text, for example, doing named entity recognition and then, as I said, ingested into Neo4j. And I hope you're already online. Yeah, I'm right here. Perfect. Good to see you. Handing over to you. And you have my screen, I guess. Yes. Hi, Alexander and everyone. So I'm working for Johnson of Labs as a data scientist. And I will discuss our application. We use the opioid research. It was it started as an internal project, but it is it's an epidemic right here in the United States, probably similar in many parts of the world. So this is the motivation for the opioid research. And we trained models, multiple models actually. I will give you an idea about these. And then we use the knowledge graphs to establish the relationships between different nodes. So I just remember from the previous presenter, so I'm working for Johnson Hall Labs before I worked for different data and data science, data characterization projects, the history in military, mostly, again again for materials research and characterization. So right now I'm working in healthcare. So again, I assume that you see my screen. So this is what we do actually at johnson labs and you can see that the entity recognition we call it uh ner named entity recognition and it's uh you you're gonna hear this word a lot we use this phrase a lot so what you see here on the screen for entity recognition is we start i mean again we are working nlp and llm so we start with I mean, again, we are working NLP and LLM. So we start with unstructured text. So just the clinical reports, mostly medical reports, radiology reports, but the initial starting document is PDF or just we use the OCR to get the text. So there are a lot of words in a sentence. We use our ADER models, deep learning models. Most often we have more than 2,000 for healthcare applications, 40,000 in total. And again, from the application or from the example that you see here, it's like you see that the dosage information is extracted for 40 units, and it's labeled as dosage. And then insulin is extracted and labeled as drug, and then the frequency. So we start with the text, and we get the entities which are of importance for this application. In this case, it may be like pathology, drug-related entities, plus the symptom or the disease. In the other case that I'm going to discuss today, it may be just the opioids and effects,, in the hospitals make the, just generate these reports and then we get the NERs, then we get the entities and then link to some standard codes. As you can see, again, the entity linking, you can see that we use SNOMED, RxNorm, ICD-10 codes mostly. The insurance companies need these, so it must be quite accurate. And then assertion status. Assertion is very important because the, like, let's say the patient is an old man like me, he may have a long history of illnesses, but it may be the, not a chronic, but acute illness. Like 20 years ago, he had some problem. And so we have to understand, okay, the NER model is going to get you this sickness, symptom, disorder, anything, label it. But it's not the case anymore. I mean, the person just get the drugs, use the drugs, and he doesn't have this illness. So we use negation or assertion for this one to understand if the symptom or disease is still valid. So it's going to be either present or absent. And in some cases in the clinical report, the patient is going to say that, OK, my father had the disease before, so it's going to be someone in the family. So that's important to eliminate some of the entities in the report. And then on the right, you will see relation extraction. We use it a lot. And we used it in this knowledge graph case. So OK, we have to find the relationships between different entities. Or NER model gets the starting point is mostly the NER model. The NER model is going to get you the entities of interest and then you try to establish the relationship like the drug, name of the drug, and then the dosage or frequency. And the identification is very important here because we have a lot of reports. We are mostly talking about hundreds of thousands, millions of reports. So the HIPAA rules are very strict here, same in Europe, maybe even stricter. So you have to get rid of the identifiers first, like the patient's name or the doctor's name, the hospital's name, date, social security number, and all these. So, in the medical record number, so we use the identifiers for these and then Q&A and summarization. These are LLMs. We use LLMs in this particular project. We use the deep learning model to get the opioid-related entities, but using LLM is also possible. And I guess the technology is moving in this direction, as everybody knows. So we are working on fine-tuning models because we don't want the LLM to hallucinate. So we fine-tune the models on the medical data. And on the left-hand side, OK, accuracy is crucial in health care. You don't want to make mistakes. And scalability is important. As I said, I mean, any NLP model, you can train and use it for 100 documents, but it's not going to work for millions of documents. So that's why we still use the deep learning models efficiently. And I can give you some benchmarks for these. How long does it take to run the pipeline on like a hundred thousand of documents, and that's why or why you still use the deep learning models. So again, I'm going to share this PowerPoint with everyone. As you can see, we provide these services, and I will advise everyone in the research to use this. I guess our company does not charge any fee for using the library for research, so I will advise you to take a shot, and there will be trainings. It will be in mid-July, so maybe one month away. It's going to be two plus two days total for like four hours a day about using this library. So why? I mean, why we use all these models and what we try to do? The problem is, as you can see on the left-hand side of the slide, there are a lot of data sources. So the technology, the smartphones, and all of these reports. So there is a constant flow or flood of data. So they are coming and they are meeting at the middle, as you can see, EHR or electronic health reports. So there will be a lot of EHR. I mean, as you can see, there is a rain or flood of information coming from the lab, let's say, going to the pharmacy. And then the lab makes the analysis, then sends it back to EHR again, which is the meeting point, and then go to the radiology, to imaging, and then comes the result. And then it goes all the way to clinical visit to clinical trial and then it will be constant change. I mean, the data never stops and we like it. I mean, we want data. We just want to analyze it. The basic problem with healthcare data is, I mean, I put some examples on the right and again, some triage notes. We know that I mean the there are a lot of abbreviations, typos, the healthcare personal doctors or registered nurses or other healthcare personnel they are in a hurry. They have to write a lot of things. They have to make a lot of notes. So there will be a lot of abbreviations. So OK, they understand. But we want the natural language. I mean, we have to understand this. I mean, the model has to learn from it. And the other one is there will be typos like yesterday. OK, you can't use a dictionary here, or like a text matcher which uses a dictionary. So that's why we use the NER models. And then Tylenol, basic painkiller. You can, again, make a typo. But the deep learning model is going to understand that, OK, it means probably the intention is here, again, this drug. So the model is going to label it as a drug and a lot i mean uh it's fuzzy so we use a lot of different embeddings here our basic embeddings model is 200 dimension and we use bird models also some some nr models are trained on birds some are some are trained on our embeddings model it's the the model the the model that we use is trained on birds, some are trained on our embeddings model. The model that we use is trained on healthcare data, so it's healthcare-aware, I mean, specific to healthcare. And again, at the end of the day, we try to understand the aim is to understand the symptoms, the diseases, get the, let's say, the standard codes, and find the relationships between these. And again, the data never ends. So that slide is pretty good in telling what I mean. Again, you can see the example sentence here. This patient, female patient, 28 years old. She has a history of type 2 diabetes. Eight years ago, diagnosis was eight years ago, and she is on metformin with the dosage and frequency information. So I'm going to show you the pipeline, a basic pipeline or NER pipeline. So normally, the NER model, you have to get the sentences first. There will be the clinical report maybe 10 pages long, but first you get the sentences, split the sentence. We have multiple models for these. There's a deep learning model again for getting the sentences, or again, it may be rule-based. It may say, okay, if there's a full slot, a deep learning model again for getting the sentences or again it may be rule-based it may say okay if there's a full slot it means that that's the end of the sentence but deep learning model performs much better then we get the tokens by using the tokenizer and then we are going to use the embedding so in any our models you're going to use the word embeddings in in the icd-10 or snomet or the standard codes you need the sentence embeddings. So you're going to use a different embeddings for getting these standard codes. And assertion, I mentioned it. I'm going to show an example about it. So it's important. Maybe this female does not have the diabetes. Maybe the diabetes was in her family. So we have to make sure that the disease or the symptoms belong to the patient. So we have to make sure that it's if the disease or the symptoms belong to the patient. So there will be a lot of information at the end of this pipeline. You will see that, okay, we have the, all these, I mean, I marked them in different colors. So we get the, and the, the you can see that the labels are right here. So you get the important tokens, merge them, make them a chunk, and then label them, age, gender. And here comes the accuracy. And then if needed, you're going to get the standard code. So this is like the JSON file. You may also have the option to get the CSV file. So you can see that you will have all the information that you need. So the rest is post-processing depending on your application. You can see that the highlighted verbs or chunks are important. You may say, okay, left kidney was not marked because the model did not work on this. There are models which work on the anatomy, so it's going to get the body part. So it is not extracted right here, but it is possible to extract it quite easily. And OK, medical QA, this is where we use LLMs. As I said, we use multiple models here. We trained on medical data. The good news with this one is it's constantly chat GPT-4. Excuse me, GPT-4. We are every day making analysis to understand if it's performing better or if it's making mistakes. We can say that, okay, the good news is we get the references or, I mean, we try to give the user the references so that it's going to be like a huge information for this. Looks like we lost Gursa for a second. We'll try if he comes back in a moment. It looks to be on his side. Well, we can dance. No, better not. I'm hoping he will be back any second. Hopefully he realizes that he lost connection so that we can reconnect with him in a second in the meantime maybe I used this this time to to highlight a few if you think exactly this slide a few a few things that are available for you to to look at once you are finished with today or generally interested in more Neo4j stuff. So we have an active community, we have a community forum, we have a Discord server, so whatever you prefer you can chat with the team from Neo4j but also chat with the wider community on the graph side of things, exchange ideas, exchange questions, get answers, and talk generally about what you are currently working on and find other people that maybe are doing the same or have similar experiences or can help out with questions you are stuck with something. A great way of learning is the Graph Academy. So no matter if you are more on the beginner side, just want to start with an intro to Neo4j, a fundamentals cipher course or some data modeling lessons, you can go there. But you can also do more technically advanced sessions. So we have an LLM course now, obviously, how to build a chatbot with Neo4j. We have a vector search has been added last year, if you're not aware. And our colleagues from the Graph Academy have added a specific vector course how to work with these in in the Graph Academy. So there are lots and lots of courses, also specific ones to data scientists. So if you're more into data science around, and I think that's that's a cool a cool way of you know learning things and in a mix between hands-on and and videos and and content to read so I think that's it's really nice and then if you want to try out new for J yourself I mean we have lots of options for you to do that you can download desktop which is a like a little suite that comes with you know you can deploy your Neo4j instance locally on your machine and play there with something with bloom with visualization with some add-ons or you just use Aura which is our graph database as a service offering where you just you can get a free instance it's limited with you know small small number of nodes and and relationships but the small number actually is not so small so i think it's 200 000 nodes and 400 000 relationships so you can actually get somewhere relatively decent in the free edition and just play with data play with it use data importer which is a pretty neat tool to get your data from an existing csv file into a graph model pretty easily. It's a very nice drag-and-drop module. So you pick your CSV file, you drag and drop onto an empty canvas how your nodes, how your relationship should look like, and you map that on from your CSV file into a graph, and then you load it back into, or actually load it into Neo4j. And it looks like we lost Gurshev for good. He is not coming back, so maybe his internet is completely out. So we can answer some question from the audience here or from remote. Otherwise, maybe we skip ahead one session and do the session with Katja and Marius next and see if Grisev joins back in and then we can either patch him up again towards the end or we'd have a little bit of a longer break today then. So I don't know, Alex, what do you think should we do? The audience here, because you are here. Should we, because the next one of Open Study Builder will be a recording because Katja and Marius are at NOVO today. Is that OK for you and then do the break or should we do a break now? Continue. All right. So in that case, then, do you want to say a few words? Yes. So NOVO Nordisk came up with the idea of basically doing more clinical trials with the same kind of stuff in the next couple of years by adding terminologies and talking standards from the very beginning, from the design of your clinical trial talking standards until basically you get the approval by the FDA or you hand in for the approval and that's an open source project called Open Study Builder that is based on Neo4j. That's basically it and can handle very complex data in the clinical space and and as i just briefly mentioned unfortunately katya and mario's um katya is the the um the project manager there and marius is from our professional services they are at novo um today so they recorded it and i think we can show the recording now We can show the recording now.... which is typically created in Word. And then we're having different users...... like to have, so the Open Study Builder Vision is really to replace the core information, all main information which are needed by different systems in one metadata repository. So that this metadata repository is used at the heart and then we can derive the protocol from and deliver different systems. Of course, this metadata needs also to be maintained and created, so here still human input is required, but we also are supporting standards. So it's a lot of things are driven by standards. We do have industry standards like CDISC, but also there could be compound standards, study standards, and also what's another important point is that we're not only having exports to systems, but if system information should be used also in a different place, there's also the root back. So systems can automatically also deploy information into the Open Study Builder so that it can be grabbed by another system. Open Study Builder so that it can be grabbed by another system. The Open Study Builder itself contains three core elements, which is the clinical metadata repository additionally to the study definition repository. So we are not only storing metadata, but also study definitions. Which studies do we have? What activities do we collect for them? What are the objectives, the endpoints and much more. Then we do have the application, which is the web interface. Here people are logging in, then they can see and browse the standards. They can define standards and they can use standards. They can create studies, they can select which studies and activities they would like to have, they define objectives, endpoints, titles, all information can be entered in the application. The third component is the API layer. So this is the application programming interface, meaning whatever is available as data or as tasks which could be performed are also available through this layer. This means that other systems can just access the data or create data. It's all automated possibilities. We also have a high need for interactive interactivity and interoperability. So a core component is the import of standards. We can import the CDISC like CDASH, also biomedical concepts from the CDISC Cosmos initiatives. We can also import dictionaries like SNOMED, MEDRA, LOINC, UCAMP. But yet there can also be importers to import standard sponsor libraries. For example, if you already have a metadata repository, you can create a script to just import also all your metadata into the OpenStudyBuilder. And on this other side, we do have consumers. So the idea is that every system we are using, whether it's a protocol add-in, whether it's the clinicaltrials.gov, whether it's an EDC system or also tools to create STTMs or TLF builders, they all can grab information through this Open Study Builder API. Another important point to mention here is that we are also a Transcelerate digital data flow project, meaning when you have a system which is compatible with the Transcelerate DDF standard, it will also be compatible with the Open Study Builder. So the idea is really to have exchangeable systems. For example, if you want to switch for one study from one EDC system to another, that should ideally be working by using the same APIs that would be possible and maintaining the standards in the Open Study Builder. So, as mentioned, the Open Study Builder is an open source tool. It's developed and maintained by novonordisk it's highly linked to standards and initiatives like cdisk standards we are a member of the cosa so the cdisk open source alliance we are an in we have the open study builder is an implementation of the transcelerate digital data flow project and we are looking forward for collaborators we are looking forward for collaborators hey so now um i would like to cover this question you see on the screen and that we actually um get quite often and it's also probably the reason why um you are sitting here today uh on this on this growth day farmer talk so why is there a graph inside of the open study builder and also where actually as you saw in the in the previous slide from from Katia she used the word really linked with standards so that's already a hint for what's coming so let's look at the data that we have inside of the metadata repository and the study definition repository. Let's start from a study. So that looks like a single object, but if you zoom in on exactly what you have inside of the study in terms of components, you will have many components. Let's take a first example, this one, the objective. So the study objective, as you know, will be measuring something about something. So you need to zoom in a bit more. So for example, the objective will have a category which can be the safety or the efficacy of a given molecule over a certain disease, for example. And what I just said, molecule and disease, we actually treat them as parameters to the objective. So we use a lot of templates instead of the OpenStudy Builder, so that you can pick from a lot of templates instead of the open study builder so that you can pick from a library of templates instead of always having to come up with your own content and wording so parameter here as i said can be a compound can be a disease can be units etc so these parameters can be units, et cetera. So these parameters, they also have allowed values inside of the data repository. And these allowed values, they can come from standards like CIDISC or it can be sponsored data. So for example, compounds owned by your company or it can be specific activities that you lead in the context of your inside of your medical therapeutic area so for example measuring the level of glucose in the blood so you see already we are heading toward something that looks a bit like a like a graph and then we could zoom in more and more and more and some objects are actually um composed of literally dozens of elements so if i look at this that we call the soa so the schedule of activities which is really at the core of the of the open study builder if you look quickly here you can see that it's the composition of many elements so you will have activities that happen in the context of a visit in the context of an epoch you can enable them disable them and each of those are actually made up of several objects themselves so if you if you go on and on like that you end up with the following which is our data model for the whole open study builder open study builder so as you can see there are a lot of of relationships everywhere so if I rephrase that congrats it's a graph so that's that's really the starting point for why do we even need a graph in the in the first place inside of the open study builder and as you saw here in the schedule of activity um the fact that we have a graph as the underlying data structure inside of the tool it doesn't prevent you for from displaying data in this tabular format that we have been used to for many years as also the data storage structure. But here we can really need a graph because everything we do inside of the open study builder is actually versioned and in top of being versioned you also need to preserve what we call an audit trail so history of exactly who did what when and why so let's take this example we have a root node which is a term coming from C disk and then it has value it has versions that can evolve over time which which is what we call the value objects so in 2014 that disease was called simplex virus but then in 2016 they decided that herpes simplex virus should be the right name but then in 2023 they said okay no it should be just simplex virus but in two words so you see it evolved over time and the way we do the versioning is that this part on the left never changes. That's really something which is if you change this, it's just a different object. But then if you change any of this, it's just different properties that evolve over time for this given object. But it's still the same object. So let's play a little game now. Which of the three versions earlier should you choose in the context of your study objective, for example? So you're working on a drug to heal this simplex virus. So let's say you start the study today or for some reason you don't care about the dates and values, so you just pick the latest, which is today. But what if you actually created a clinical study in 2015, then you were working update the clinical study to the current data available and we are actually in 2018 for example then you need to switch from this value to this new value here so you're writing the study objective which one should you pick? Well, it all depends on your context and it's all enabled by the versioning system that we have in the OpenStudyBuilder. But then if you add that to what I was showing earlier, then we can say that it's actually graphs, plural that you have. It's several graphs, all intermingled. And depending on the specific date that you choose, you will choose a subpart of your graph, which is the one available at that given date. And as I said, if you decide to change from one value to the other one, you need to explain who did that, when, and for what reason for auditing purposes. So I hope this really shows the power of having a graph as a data structure in the context of the OpenStudy Builder. And now I will pass it back to Katja to learn more about how you can move forward with the OpenStudyBuilder as a tool. So if you're interested in utilizing the Neo4j graph in the real application, like in the OpenStudyBuilder, there are various resources where you can go to. We do have a very extensive project homepage with a lot of information, so it's OpenStudyBuilder.com. For example, under info and resources, you have a very brief overview about the most important resources. We have linked videos, presentations, a poster, a flyer. We also have already created two papers. And on the homepage itself, you also will find a lot of information, like also our next events, various guides on different topics if you would like to get additional information. So please just check out our Open Study Builder homepage. The demonstration video is a real good resource to just get a brief overview about the capabilities and what the Open Study Builder is about. We also have a newsletter in LinkedIn. We do have regular releases. It's typically in about two, two to three months that we will always do a new Open Study Builder release with new features and enhancements. You can also reach out to us via Slack and via email and there's some complete source code, including the instructions on how to install everything is available in GitLab. What you will get with the OpenStudyBuilder is you can install it locally, even on your PC, whether it's Windows or Unix, a Linux machine, it even works on Macs, but you also can install it on a server. There's also a public sandbox environment available. You only have to print a send an email to Neo4j and then you will get access to a public sandbox which is just the playground area so everybody can see browse and edit with the tool. And then there's also the option for a dedicated hosted environment. For here, for example, you can ask Neo4j. The software itself consists of different elements. So you have the application, which is the web interface. The complete content is available in the graph database. And very important are these API endpoints. There is a Swagger online documentation live. So if you would like to browse and see what API endpoints are available, just use the Swagger documentation. In general, there's a lot of documentation. We do have our project homepage. The tool itself is also coming with a documentation portal. And there's a lot of documentation in gitlab available then we do have additional scripts available for example to load the latest cd standards into the open study builder but there are also general tools like a database browser and also neo4j dashboards where you can browse different contents. Yes. And in case you're interested by the OpenStudy Builder, but it lacks a few features that you need, or you want to explore more, or you want to customize anything inside of it, including the data, that's actually something where Neo4j can help you, especially the services part of Neo4j. So for example, if you want to edit the library data that you have inside, let's say you have some custom data on top of the CD standard, then that's something you can customize or you have some other library data available like the activities that you lead or or you use a different set of units that that open city builder uses that's also customizable then we we have also developed mappers so for example katya mentioned the d DDF Transcelerate. So the OpenStudyBuilder can actually export a study in the USDM v3 format. But if you use a different study format, then it's also possible to develop custom methods. And then you might also want to develop complete extensions to the OpenStudyBuilder, because some functionality is missing. And then here I invite you to come talk to us, so Neo4j and also Katia. And that's something where NoVanAllDisk is also open to talk about this, because it is an open source project. So thank you all for your attention. And here are our personal emails where you can reach out to us. And you can also write to openstudybuilder at neo4j.com and we will receive this email with questions or any requests so thank you and have a nice rest of the day cool thank you very much to to katya and marios even though they were not being able to join us live today but i think that was a super cool overview of the open study but it's a nice project so i think as as they mentioned already already, you can participate as well in the community if you like to, and if you just want to try it out, then it's an open source project which is always cool to play with and to try out yourself and to get a little bit into the weeds of things. And I think it's very nice that it is so inclusive and tries to get as many people on board as well as many companies. So it's like a true community effort, I would say. And it's really nice. And like Marius just said in the end, if anything is missing, if you think, oh, this would be a cool feature, this would be a cool addition, if you are like eager to add and to contribute to the project, then I think this would be a cool feature this would be a cool addition if you are like eager to add and to contribute to the project and I think they would be really happy for for you to join in and to to give them a feedback or to join at least for one of the community calls they're hosting regularly and and tell them about what's what's going on all right so that's that I think we have good stuff back. So he had an unfortunate internet problem. So we had to restart his machine. I think he's back online now. I hope that we can continue with his interesting session from Jon Snow Labs. And yeah, here he is. So I think without further ado, let's continue. Maybe rewind a little bit, Gurorsev, where we lost you, so that we can kind of like recap on what we had before, so that we don't lose the context. And yeah, with that, back to you, Gorsev. I assume that you can see my screen. So I'm terribly sorry for that. I was just, I mean, you can see my screen so i'm terribly sorry for that uh i was just i mean uh shopping around the new pc yesterday for like switch to macbook pro so i think it's a decision for me right now the the computer just crashed so i was gonna say uh normally we use the deep learning models mostly to get these entities, just to remind you what I was talking about. So you can see that these colored sections are, we don't use, I mean, not all of the texts are useful. So like a stop word, not a stop word, but we don't need this diagnose term here. So we get the chunk here, a couple of tokens, I mean, label it as age. The model will get you all of these. But as I was trying to say before the connection was lost, it's not always, not all of these entities are of interest for you or of value. the patient, it may be the symptom or the disease or the drug may even be belonging to the patient's mother, father, someone in the family, or it was just an illness 20 years ago. So we try to use the assertion models and then finally we are going to get these standard codes. And for these we use different entities. The end one, it's going to be CSV file or any structure format, but JSON is the most popular, as you can see from the text. At the end of the day, you use the pipeline, run the pipeline and the model, and you're going to get the resulting output is going to be a JSON file like this one. So the assertion said that, okay, it's absent because it says no evidence here. The diagnosis was this hematoma, the code, I guess it's a SNOMED code. And then we get all of these and we are going to use it for the post-processing or the next steps. And it's going to also give you the confidence level. Not all of these, because at the end of today, NER models, using the NER models or named entity recognition is token classification. You get the token in the previous stage, then the NER model is going to say, okay, this may be probably with this confidence level or with this probability, it may be labeled as age, date, like gender. So that's how we use. And I guess I was saying we are using LLM models because all the applications are switching to the LLM models every day. I mean, literally every day we compare our results with GPT-4 and other users. AWS has some medical users right now. So we try to understand if the model is working fine. The models, as I said, we have more than 2,000 medical models, healthcare models. So we try to understand what we are doing wrong. And the medical question answering for our chatbot is trained, does not hallucinate. Because if you ask a question like in Spanish and get him to get the chatbot to answer it, if it's unrelated to medical or healthcare data, it's not gonna answer it. It just says, I don't know. And the other one is we provide the references here, mostly from the PubMed. And so we use the knowledge graph. It's a, I'm sorry, it's a too quick switch. So how do we use it? So what you see here is, okay, there is some CT scan and it was, the date was March 12th, 2010. And all of these are, these entities are, or some of these nodes are extracted by the NER model. So this is the CPT code, which is the standard code for procedures. So the CT scan says that, okay, in the body, in the abdomen area, again, coming from the NER model, on the left-hand side, this is called the laterality coming from the NER model. The finding is, okay, there is a mess here. And the assertion model says that it's confirmed. It doesn't say, the report does not say there is no mess. There is definitely a mess. The radiology report says it. And the modifier says that, okay, stage two is mentioned in the text. Not something some machine learning model saying that is at stage two level, considering all the inputs. It's mentioned in the text not something some machine learning model saying that is at stage two level considering all the inputs it's mentioned in the text and then in your model just extracts it and we the imaging technique was if i don't know if you always need it but it's a content a contrast and for hematoma and the medical text says that it's absent, good news for the patient. But all the ICD-10 codes, SNOMED codes were extracted for hematoma, although it's absent, so we are not going to use them. The date is extracted, the date of the CT scan is, excuse me, hematoma. That's probably something in the patient's clinical file in history. And she used or he used metformin with this dosage and this frequency. So, again, you can see, yeah, we have the report here on the right. So, the reason for this radiology exam was suspected hematoma. And the patient is on Comedin, which is a blood thinner, so be careful with the exam. And the area is the CT scan was performed on the abdomen. So you see that we can go all over these, but these are extracted. You see that these are nodes here, and they were in indeed there are more details about as I'll start stunning starting point for or the starting node is right here and it's the hematoma retro so the CT scan is here they connected the abdomen area pelvis area bladder and you going to see all the, so these are the relationships, or we use the relation extraction models to get these, and the relation models are provided. So, you're going to see that, okay, the details of the diagnosis over here. So, I mean, sudden, so I'm going to discuss what we used. The example or the project that we used was opioids. We know that opioids are prescription, mainly used for very heavy painkillers, but I mean, they have some misuse recently, like last 10 years maybe. And they provide pain relief, sedation, and euphoria. There are natural semi-synthetic and synthetic opioids. Okay, we know that. The problem is the bold words are very crucial here. A high potential for misuse and addictive. I mean, huge addiction and overdose. So what I mean by overdoses you probably heard about this this is uh coming from nih uh so it's a government agency uh so you see that in 2022 the overdose due to opioid was more than 81 000000. I don't get the data yet for the last year, 2023, but it's around 100,000. And the total overdose, I mean, opioid plus all the other illicit drugs is around 100,000. So you can easily call the opioid addiction or opioid use as an epidemic. So it's a huge problem. That was a starting point for our project. So we trained this model, NER opioids. So what you see actually is a basic NER model pipeline. We get the stages. You see that there are six stages here. The first one is document assembler. We start with a text to just make a conversion to make the model understand it. So this is the sentence detection deep learning model. We get the sentences in the second stage. The output is right here. So the tokenizer is going to get the tokens from the third stage. And then we use the embeddings clinical this is the model we use for uh healthcare related text uh specific for healthcare applications 200 dimensional model and once you get the embeddings uh you're gonna then in the fifth stage we are gonna use the ner opioid model so what does model, this model give us? I'm sorry, I switched to another computer, so I'm not going to be able to show you the homepage of this model. But if you go to nlp.johnsonovlabs.com and then you switch to models in this page, you will be able to just search for NER opioid and see it. Every model has a homepage like this one. I just got the predicted entity. So if you run this model and if you do not filter, you're going to get all these labels. The model is going to try to use these entities, label all the tokens in the text with these entities. If it's not, then it's not going to label anything. But you can whitelist. I mean, if you are interested only in, let's say, alcohol use or like legal issue or violence, you can easily get only these entities. And at the end of the day, once you run this model, and the NER converter is going to convert these to chunks. So this is the chunk. You're going to get a, like, remember, we get the JSON file, or this is the Spark data frame. You can convert it to Pandas data frame, so that you can save it as a csv also so you see that uh unmarried was extracted from the text lay and begin and ended indices here and labeled as marital status and there is also a possibility to use a confidence level so that you can define a threshold if you want you may say that okay anything below 70 just ignore them a threshold if you want you may say that okay anything below 70 just ignore them and then all of these are right here there is uh okay this one is good so it is also possible to visualize it uh you don't use the visualizer this is uh ner visualizer that you use you don't use the visualizer in production notebooks but in the initial stages it, these visualizers are very useful. You're going to see that, okay, the model mislabeled any token or chunk, or the model did not extract anything. You will be able to see it. When I was preparing these slides, I ran the notebook and I noticed that, okay, opioids and heroin phenyl oxycodone, they are labeled as opioid drug. And right here, you can see that, okay, opioid drug, again, opioid drug. This one is labeled as test. So we have medical doctors in our team and I talked to one of these, okay, this model is not working fine. Oh, he said that, okay okay it's learning from the context in the first sentence the opioid was used in the opioid drug because it's illicit consumption of various opioids so it's really labeled correctly it's an opioid drug when in this one in this sentence you see that these are all tests the test was applied on the person. So, okay, opioids, same word, identical word, this time it's labeled test test. He said that, I mean, the medical doctor said that, okay, this model, it's a proof positive that the model is working very efficiently. And you can see that the test result is right here. So, as I said, you don't use the visualizer in a production notebook, but you will see that if the model is working fine, missing anything, mislabeling anything, and it will be very useful. The other one is we have three different assertion models for these. Remember, the assertion is very important here. And it's going to give you the default is present. You will try to understand if anything missing here. So present, present, okay. The patient has a history of substance abuse history, okay. It's hypothetical here with clinical signs. So that's not definite here. So you can use the assertion model to get if it's really present or absent or someone in the family. I don't see an example here for family. The other one is you also have the option to use the assertion models, not for negation, but other information. And I forgot to say, so there are two uh two different labels here again opioid it's labeled by the ner model so the ner model is working here but the assertion model is working on top of the ner model so the order is in your model gets the extracts the information from the text labels it and the and then the assertion model checks if it's really present or absent or whatever the predicted entities are for the assertion model. And in this one, we were interested in other issues. I mean, again, you see that this model is working with an NER model again, but the NER model is only trying to extract the general symptoms here. So we whitelisted for the same, you use the same model, same NER opioid model, but we whitelisted for the symptoms and we try to understand if these are withdrawal symptoms, overdose symptoms, or underlying pain. So again, I'm repeating myself, but it's important. So assertion model can be used for negation or other, let's say, labeling purposes. So, RME means relation extraction. We have this simple text here. So, you see that this patient started using lipidur. It's a statin drug for taking care of the LDL, some lipid. It's a lipoprotein. It's a sign of the over-cholesterol in your blood. And this one is the, check the second part of the sentence. So say that the patient didn't like the lipid oracles. There were a lot of adverse effects. So the doctor decided to prescribe Zocor instead of this. It's another statin drug, but coming from a different family. So what we use is, again, the NER model is going to get this one. This one is not opioid model. This NER model just extracts two entities. One of them is drug. The other one is adverse drug event effect. So you see that we try to understand or establish relation between drug and ADE. And we use a relation extraction or RE model for this. So it's better to get this visual here. So again, the same text, but this time we use the relation extraction visualizer. What you see is quite crucial here. So you see that normally you may expect some problems here, but the model is working fine. What I mean is, okay, there are many ADEs here, five different ADEs, and there are two drug names. And the model was very successful in getting that only Lipitor calls these ADEs. You see that it's a binary. The RE model is binary. It's one or zero. And the Zocor, the relationship between Zocor and all these ADEs was zero. So there is no problem with Zocor. So go ahead with Zocor, stop using Lipitor. That's what you get from this. And that's what we used. I mean, it's a long start, but that's what we used in the opioid project. So it's a very nice view. So I'm going to start with what we used. We have the conditions. They are mostly adverse effects of the opioid use differently search for different opioids and pmid means public uh excuse me pubmed identification number of this article so you see that we have uh a little short of 10 a thousand notes here and we have these relationships so the relationship is going to give you article of post by or mentioned in so these are these two are related to the pubmed article and this one is it's a global view it's going to show you okay it's going to get you understand some uh clustering maybe so which condition is around which and the yellow, excuse me, green parts, green nodes are very important. It's going to show you what the research is around. I mean most research is around this cluster. So you're gonna, you may take a look at the green nodes with the light blue or dark blue nodes to see which opioids are of more interest. If you take a look at this one, you see that this is a node about some opioid again. There are five PubMed articles about it and, five different conditions or adverse effects. So, this one is about the methadone. You see that I just focused on the same one. You can work with these, again, cipher codes. So, dolofine and methadone are here. You can see that methadone, you use these all of these relationships here for this opioid methadone uh there are articles a lot of articles by the way and also you can see that these methadone drug drug is uh oh excuse me the these uh ades a uh adverse effects are related to, or you may say caused by this drug. So you try to understand these relationships or they are very useful in understanding. And again, this one is the one that I mentioned before. So you can see that, okay, it's more focused. Alpha-entanil, maybe not too much research about it. You can see that these PubMed articles are right here. They are all articles of this opioid drug. And also, you can see that this one, recurrent respiratory problem probably, it's also mentioned in this article and it's also caused by this probably side effect or adverse effect of this opioid drug. So, they are quite useful in understanding these. And the other one is this one again heroin. We get these articles about heroin. Again, same Neo4j and you may define different property keys here. I just want to show you, okay, what I did. I mean, I just checked one of the PMID numbers and then I was supposed to put this license slide one before maybe. So you can see that, this uh pubmed article id and it's related to methadone so it's working fine so it's a very important reference for for for the users and this one uh this one again is a respiratory problem uh some condition or effect problem, some condition or effect. Again, Cypher, we just wrote the query for just limit by 25 to get more. So zoom on these or focus on these ones. So you can see that many of these side effects are related to these opioids. And the other one is, again, it I like to repeat, but this side effect or condition is mentioned in these PubMed articles. So I was going to say the, don't worry about the title, it's going to say, how we used it in our demo. So we collected data from PubMed. It's a, everyone knows it's just a definition free database. We scrapped them and then it's open to public. I mean, not all of the, not all of these articles are, you don't have access to all of these, but you may, you have access to all of these, but you may, you have access to all the abstracts. So we use this NER opioid model. This is in our library. We get the NERs, filter them by using assertions, and then establish the relations. And then we use the NERs to create this knowledge graph. So we linked by using RE models, getting the NERs first, and whitelisted for certain entities as you noticed. And then in the demo, you're going to see just right now that, okay, we try to get more details like this one. This is the demo that we prepared. You just write the question here. We have the text to SQL annotator in our library. So what it means is you just write the text here, you generate the query here, and once you execute the query, it's going to give you all these. So what I try to do here is, we know that opioid heroin has some side effects, and we try to understand which other opioids have similar side effects. So these are, you see that methadone has anxiety symptoms, dolophine also, phenanyl has other problems in the brain. It's also possible to write the number of these, I mean, how many symptoms you can add more symptoms. And the other one is, this one is more complex. So again, this time, which opioids have most similar conditions? Nothing to do with heroin this time. This time, we try to understand, we try to figure out which two opioids have these conditions. And don't just give me the number, just list all these conditions. So again, we generate the query here by text to SQL. And then once we execute, we see that, okay, morphine has more similar conditions with MS-contin, and these are the conditions. So, it was quite useful. So, I'm going to say the next. So, what in future? You notice that we use the deep learning model in your opioid. I'm going to call back. Yeah, right here. You're maybe you have the question in your mind saying that, okay, why don't you use LLMs? Yes, we did. LLMs, it's also possible to extract all of these LLMs, but the model here is quite specific. This is trained on data coming from opioids and annotated by, again, medical doctors, and then trained on some specific subjects. LLMs have huge potential, but this is very specific. Right now, I'm working on alcohol withdrawal, again, in an ER model, or smoking, menopause, hereditary diseases. These are all very specific subjects. So, LLMs are doing perfect, I mean almost perfect. I mean, saying that you don't work on 100,000 million documents. But if you have a very nice NER model, you will get better results. And as I said, we compare every day. And again, we are fine-tuning. We are using LAMA and Mistral open-source LLM models, 3, 7, and 70 billion. And we fine-tune them every day on healthcare data. So you get access to johnsonovlabs.com and for technical just ignore nlp.johnsonovlabs.com to get all the technical details and my info is right here. Thanks for listening. Again I'm sorry for this technical problem but it's gonna be good news for me. Thank you very much Gurze. Thank you very much, Gurzev. We have a couple of questions for you, so I'll read them out, and then we can see that we go through them. So if you have any questions, use the slide or link for Gurzev, and we can answer them. We have a bit of time left for them. So one question comes from Corey. They ask, do you use Neo4j Enterprise with Bloom Bloom or do you build the GUI on your own? And what, I guess I'll add on, NERs or entities, try to establish the relationships. And in this case, again, we get the side effects by using another NER model. I mean, we use more than one NER model. And then we get the knowledge graph. I guess I couldn't catch the question. That wasn't the question, I guess. Not really. The question was, what are you using to build the graphical user interface for your application you just showed? Are you using Neo4j Enterprise with Bloom or did you build it on your own? Okay, yeah. We use Model Park, Model Park enterprise with bloom or do you did you build it on your own um okay yeah we use model park model park app and it's quite safe and secure so we get the uh again the same stages we get the ners and then knowledge grab and then we put everything in this model park app model park app all right cool and then next question here for you it comes from georgios in your experience what steps could someone follow to predict ade using a knowledge graph so i'm assuming that we get the clinical report of a patient and then we try to predict the ADEs. Okay, ADE is tough, really. I mean, you just saw the example, Lipitor and Zocor. The other one is there will be a lot of symptoms. there will be a lot of symptoms. So the symptom may be totally unrelated to the specific drug use. I'm talking about prescription drugs. So ADE is difficult. We attack by multiple approaches. We have ADE in ER models, so we use more than one in the pipeline. And then we have ADE classification, text classification models. It will be in the previous stages of the pipeline. So what I mean is we get the text, start with the text. We use this text classification model to attack if the sentence includes any ADE. If it is okay, if it is in the sentence, then the next stage will be using the NER model to identify the ADE. And then, I mean, I'm being quite honest here, but ADE is very difficult. That's why we use multiple stages. We use more than one NER model to understand these. And if it's domain-specific, it will be much better. What I mean is, it may be difficult ADEs in oncology, different ones in, let's say diabetes so it's much better to use multiple uh nr models specific to this uh this domain let's say okay cool it's a good question i mean ad is like a problem sounds sounds like it um the other questions were for for alex's initial presentation so i think we we might skip that a little bit or or take it on um or actually maybe we take one now and uh anybody else in the room has a question for gussef before we we let you go looks like no yeah all right gussef thank you very much for for joining today thank you for for the nice presentations and for the um the cool um you know the graphic interface you showed for showcasing what you can do. And actually, the invitation goes out to everybody. Check out the website from Jon Snow Labs and play around with it a little bit yourself and see what there is. So it's a nice story. Thank you very much, Gurshev. Thanks for having me. Thank you. Thank you. nice story. Thank you very much, Gurcev. Thanks for having me. Thank you. And yeah, with that maybe we do this one, Alex, if you like. I think that one was left over from the beginning when you talked about the intro segment and Victor says, what are the main challenges that you have encountered when translating natural language questions coming from the user into Cypher? Am I? Okay. So, there are two aspects, if I understand the question correctly. One is if the LLM translates it correctly into Cypher. This is what we can augment if we input the data model as well. Makes it more precise. I think the challenge is to compete with, let's say, real developers who write Cypher code each and every day to make it more, you know, as an optimized query. I think it's straightforward for every programming language to spit out and output some query. But if it's the most performant one or if it's the most accurate one, that's kind of the challenge. And then to debug that and in the end you want to trust your LLM that it does the job that it should do. That is the main challenge that I see. But also one aspect is that not all the questions that are super sophisticated can be translated into Cypher or in an optimized way. So there's always like simple questions that can be translated into Cypher and performed on a database while others are not so much relevant for that. Yeah, I think that makes sense. I think it's more like a little bit like a sidekick or like a co-programming, co-pilot as they call it. You can ask some questions, you can ask something back and you still have to validate what the return is. Can you use that one-to-one? Sometimes yes, but like you said, if it's a very complicated story, a more complicated query, you might want to validate that with somebody who is more proficient in Cypher. And I think that is, Cypher obviously is a bit of a special case because probably not as much has been written about it. So that's also might be a problem problem so if you do the same with Java you might have probably an easier time you get probably more more more fat more more back that is that is probably you know had been the similar question might have been answered already before that that that answer had been fed into an NLM that LLM knows that and finds that answer for you and then gives you gives you something back if you ask something complicated around cipher that answer might not have been asked before or the the answer has been not hasn't been fed into an LLM so that the LLM doesn't know and then hallucination comes into play or they just say oh I'm sorry I have no idea So that also the LLM is not updated each and every day, every second. So there are new functionalities, new, yeah, new, new methods inside for APOC functionalities that the LLM doesn't know. So also one challenge. Exactly. Keep that in mind when when, as we always say, when working with LLMs. Cool. Cool. Cool. At this time, I think we are at the break slot for you. In Munich here, we have a little coffee and cakes outside. So you can get there, get refreshed, grab a coffee, grab a cake, and we'll take the 15 to 20 minutes. I think we're back at 4.45, so that means like in almost 20 minutes from from now on yeah then we have Dimitri and then we have Dimitri from Kyagin Kaizen I'm looking very much forward to that session on site here so that should be fun if you're watching remote I will play a couple of polls for you to join and we can before we break or or no, sorry, before we continue with second half of the day, we look into the results of these polls. And we can discuss that a little bit. But I'll play them for you. And until then, see you around. Type in chat if you have any other question, anything that keeps on your mind still. Use Slido. We will have some minutes left, as you see, between sessions or at the end to do a little bit more of a Q&A, Alex and I. So we can go through them. Exactly. See you later. See you and enjoy the coffee. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. An update from Australia. Yeah, I can totally recommend Nodes as a conference for technical people or community people, because it's all virtual, it will be recorded. But they are hands-on experiences and yeah people share their expertise and and their codes are super interesting to watch exactly so it'll be fun it'll be like you said online free so if you have any topic you are interested in and you're working with currently around graphs and around life senses or any anything else then please submit a session would be great to have you anybody else is obviously invited to join the event. It's going to take place on the 7th of November. So, yeah, with that, we had a little Slido run in the intermission, basically, and we asked what comes to mind when you think of life sciences and graph technology, and a couple of people submitted something. So here we see the little word cloud here, chatbot. I see explainability, nodes, obviously connections, discovery, clustering, knowledge graph. I think that all is something that comes to mind easily. And it's cool to see that you think of this as well. And we are continuing with this topic almost perfectly well that Dimitri from QIgen is bringing up next but I let you introduce Dimitri. It sounds like you submitted all the answers because I think you will cover basically 90% of that explainability and high quality data as well as the ability to connect it to a chatbot but I don't want to spoil too much. So Dimitri is coming from Kaigen's Digital Insights. We worked together for basically the last one and a half years with a very interesting data set from Kaigen that is now also available on Neo4j, where you can then do cool stuff. And you will also elaborate a little bit on your next part which is the AI part of BKB right so yeah great to have you here and over to you yeah thank you for the introduction Alex yeah so my name is Dmitry Kamayev and I'm a senior product owner in Kaiingen digital insights for data science knowledge base products. And just to give a broader context what QIAgen is, it's a big company that also has a mission sample to insights. So it produces everything from assay technologies, chemicals and extraction kits from wet lab to robots for processing and your samples. But also we are doing bioinformatics and we are doing software. And a big part of it is developed in Aarhus in Denmark. And it is under the part of the QIAGEN business called QIAGEN Digital Insights. And I have to show this legal disclaimer slide. Don't use our discovery products for treating people. It's only for discovery. So going back to QIAGEN Digital Insights, we are a leader in bioinformatics space and it's all here in numbers. So across all our software products, we have like 90,000 active users. Our curated knowledge base consists of 20 million curated findings and we integrate more than 40 scientific and critical databases in in our data products so just zooming in into database products we have several database and knowledge base products that we support and it's biomedical knowledge base HD and it is the same data that some of you may be familiar from using ingenuity pathway analysis it's a graphical user interface tool for exploring relationships biomedical kb hd is the same data that fuels ipa but with interfaces that allow easy access for bioinformaticians and data scientists. We also have biomedical KB AI and I will talk a bit about it later. It is an NLP curated, NLP processed extracted data. And other products is omics soft harmonized database with multi omics information and also we have variant databases such as Cosmic, HSMD and HGMD that we are also looking into integrating into a bigger knowledge graph in the future. So what are the main applications of all those data knowledge base products that we support and solutions? Mainly we are supporting pharma companies in their target identification and drug discovery and biomarker discovery. And we see that some pharma companies are also building their own knowledge bases and then they either use our knowledge graph as a foundation for integrating their knowledge and their internal data on top of our schema and structure, or they can already have an established structure and then the integration goes the other way around. And of course we can see with applications that some companies are building their own dashboards applications to support biologists in the company, and then they are using programmatic interfaces to interface with the data and build their own web applications and dashboards to access the data. So just diving deeper into biomedical knowledge bases. So we have two knowledge bases biomedical KBHD and that is human derived, so all the findings are manually created by scientists by reading through papers and following established curation process and filling in the database and we have over 24 million biomedical relationships that will be accumulated over the course of 25 years of aggregating this data and we are constantly updating it with the new curated applications but of course you cannot curate every publications coming out every day there are hundreds of them so for those for those purposes So for those purposes we use advanced AI processes for extracting the relationships automatically. And there we have over 600 million biomedical relationships. So this is a huge graph of data. And on the left it just shows kind of their visualization of how what is interacting and just basically they are we try to go for physical concepts biomedical concepts such as drugs, genes, chemicals and of course we also use pathways and diseases so maybe a bit more abstract but we try to keep it very concrete and also simple so we integrate a lot of data and we provide this information in many different ways to our users because our main users are data scientists and bioinformaticians in the pharma companies so the preferred way of accessing the data for data scientists is either Python or R so we we provide, of course, those interfaces that allow submission of SQL queries to query the data in a relational format. But also we have REST APIs, and on top of that, in Python and R, we also support extra functions, analytical functions, such as causal analysis to extract additional analytics from the data we expose. But of course this relationship data is more naturally presented as a graph, I will show it later. So we also have presented this information as a Neo4j import dump that users are able to import into their own knowledge graph and work with it. And it is all downloadable to our customers and we are also trying to support a smooth and to remove barriers in this data value chain. So it's both in accessing the data, we want to simplify access to the data, we want data to be also easy to understand and easy to analyze. So we try to productize this data and so we try to work with every step of this data value chain to make the journey as simple as possible. So going into tabular representation versus graph representation. We offer both because we know different data scientists different data scientists have very diverse preferences and some people maybe would prefer to use R and tabular data access. So we offer that and I will show you how it looks. So here is, for example, we have relationship tables, which I truncated. It has actually a lot of more context that's captured. But just for demonstration, you can have one column representing node 1 interacting, and node 2 interacting as a third column. And then the middle column is a type of relationship between gene DARP60 and the function quantity of nervous tissue, for example. But then you want to know what DARP60 really is. Well, for that we have extra entity metadata tables, which you can go from DARP 60 to metadata of what DARP 60 is oh well and you can see that it is a gene and it is a in this case evidence comes from a rat and actually the evidence comes from is the form level is a protein. So extra context is available there. But here's how it looks in labeled property graph format, much more condensed and easier to work with, in my opinion. So here's again, DARP 60 and all causal relationships with its functions and all causal relationships with its functions. And all those attributes that were previously had to join using SQL are now just sit as an attribute on the DARP 60 node directly. And it shows when you try to query the data and work with the data. The queries look much simpler. Just to show you what our Python interface looks like, we have our own Python package called BKB, where you import it, connect to a cloud version of our knowledge graph, or you can also connect to your local SQLite or Postgres, and you can just submit a SQL query here. For example, you select all molecule-molecule relationships for a gene NO1. And you get a data frame as a response. So this is how it looks. But interesting, actually, this is an example from our documentation that I hope we've corrected. Actually, this query is not exactly correct. It's a bit tricky to write the query if you want to extract all NO1 relationships. Well, here, for example, I specified that only node 1 equals NO1. Well, what about... Can NO1 be also as a node 2? Actually, yes, it can. So actually, the correct query would be where node 1 equals NO1 or node2 equals no1. So that would be a complete query. So you have to keep in mind those things when you work with SQL, but it's possible. So we created a new 4j version of our knowledge graph and this is a basic representation of the schema of how we try to organize and what the relationships are present in our graph. I show this slide because the real schema, we need to to prepare you looks like this uh so just uh entities are the same but we have a lot of different relationships between nodes that example like causation correlation this web of relationships. And this is an example of the neighborhood of Eugene. It's a really densely connected network. And when we were designing the Neo4j schema, we were trying to go for a compromise between completeness and simplicity. Because, of course, you can put all of the data, all the context in the graph. But then will it be easier to use for bioinformatician? Does it help them answer the questions? That's a tricky question. So yeah, if you go for completeness, definitely your graph will support all diverse user needs and you have like complete scientific thoroughness. But then you compromise on performance, certain queries would be slower and the adoption it just drops. Because if it takes two weeks to understand the schema and to understand the data, or even if you think you understand the data, you make mistakes, then there is no value in it. So that's why I want to talk right now about design choices that we made when designing knowledge graphs that might be also valuable for you and maybe you can learn from it. But I want to say it is a learning process all the time. I don't want to say there is one way of designing a knowledge graph. I think you have to offer different configuration options and design it to a specific use case sometimes. But let's start, for example, with gene model here. When people talk about gene, it can mean many things. What do you mean by a gene? Is a gene a species specific or not? Do you mean maybe actually a protein? Do you mean maybe a specific isoform of a gene? It becomes very complicated and especially when you're extracting information from a paper. So it's important to have an ontology of genes and at the top we have ortholog level and it shows for example here EGFR it's a across species concept of a gene and then on the next level of gene representation for EGFR you can have species specific versions of EGFR And then now we have a gene as a concept, of course, but then how this gene presents itself physically. Well, it can be either DNA, RNA, or protein, and that's what the next ontological level captures. It is form level and even going even deeper you can have isoforms so this is how we try to structure our relationships because in the paper when we have relationships between genes then it could be on either of those levels so it is important to represent this whole tree. But we realize that sometimes not everybody is interested in this specific isoform information. So that's why sometimes you would actually want to collapse this tree to the higher ortholog level. You just maybe in your analysis you don't need this resolution. And so then you would transform the graph and move the links to the higher level of the ontology. Another design choice that comes up when designing a knowledge graph is directionality. So Neo4j graph it doesn't have multiple directions or undirectional relationships they must have directions and it's tricky for example this protein-protein interaction interaction. There is no clear direction in those type of links. And we went with signifying those two choices, right? You can just say, like, well, let's just link them randomly. Or you can have relationships going in both ways. And we chose to have just relationship one way and signifying that relationship is non-directional as an attribute. And the reasoning for that was mainly that it saves efforts and duplicating relationships when you maybe want to extract data and then when you have actually evidence, one piece of evidence having two links that distorts statistics, or you would need to duplicate and it's much easier to have it represented in this form. Another design choice that we had to take was representation of clinical trial data from clinicaltrials.gov. And it's a very complex data set. And here is just a slide not to be understood. So this is an illustration from a publication called Constructing a Fine and Great Representation of Clinical Trial Results from ClinicalTrials.gov. And you can see it's very complex and again, for bioinformaticians who are trying to answer maybe some simple questions like what what drugs are in clinical trial for this target or for this indication. All this level of detail just becomes an extra hurdle to getting to the answer. So we had to come up with an easier representation for this type of data and we created this hub node called evidence that aggregates piece aggregates three interacting coccolates three interacting entities and that's a drug target and disease and whenever there is an evidence for relationships of drug targeting disease it will be aggregated on an evidence node that will also have as an attributes information of what type of evidence it is yeah and here's an example like here it says it's a phase 3 terminated drug trial. Yeah, so I talked about aggregating relationships, and here's an example of how it works. So, for example, you can have NO1 gene interacting with EGFR on form level and isoform level and using different types of relationships and you maybe have two pieces of evidence about of phosphorylation and P signifies phosphorylation. So, you want sometimes a more condensed view of this to simplify your analysis, and we offer aggregated version of the graph where all the different macromolecule levels just collapse to an ortholog level and all the relationships of the same type again collapse into the one relationship type H. So that's how the transformation can happen. And the same goes for disease ontology, because you can have a hierarchy of diseases and you would want in aggregation process we would then roll up the evidence for a gene participating in a certain disease to a higher level in ontology for disease. And yeah, so with all this, I just wanted to show how difficult and complex it is to design a knowledge graph and um yeah i i we believe that um you um often um it's valuable to try to build your own graph and for that uh you want your own certain customizations and for that we also have uh helper functions to to customize uh your Neo4j graph and create an export according to your specifications. So it's not very flexible right now, so the things that we support now in terms of flexibility on creating your own Neo4j graph export is custom names of nodes and relationships, so it's like labels and relationship types. Custom customization of what attributes on the nodes and relationships are included, again depending on how rich of a context for relationship you want, and whether you want edges aggregated or not, that's another dimension of what you can choose. And sometimes you may be not interested in the whole graph, so you would maybe want just subgraph centered around a certain node, and you can do that. And sometimes you want maybe negative conditions. You want to exclude irrelevant portions of the content. Maybe you're not interested in clinical trials. Again, you can do that. Yeah, so again, schema design is a is a tricky um subject but uh yeah i think very important for uh preparing data to be uh uh easily understood so i liked a a quote that sebastian in the previous talk cited about it's not enough to understand that tomato is a fruit. Sometimes you want also, it's more valuable to understand that you can put tomato in a fruit salad. So, yeah, actually, now with the data, like let's put it in a fruit salad. Let's make a salad. So what can you do now with data in your knowledge graph? Well, here's just one toy example. What about finding positive feedback loops in gene expression? Do those exist? Well, interesting question. And actually, yes, they do. So here is an example of positive feedback expression loop that this query identifies. It would be much more complex to do in a SQL. Imagine writing a recursive query to do that. Also what is simple is querying ontology. Again, you can do simple recursions and traversals. Here, for example, we just retrieve all chemicals that are of type corticosteroid. And just for demonstration, here is on the right side equivalent SQL to do that. I'd rather do something on the top for those purposes. So you can also visualize the networks in 3D. So here is HOPE1 and HOPE2 expression networks of gene N01. And yeah, you can see when we go to hop2, it just really explodes. It's a really dense network. So, but yeah, what value is it really seeing your data in 3D? Well, actually you can think certain clusters have become more apparent. So here's, for example, we can see all the pathways visualized in 3D, and it is just very easy to spot this clustering of core pathways. And if you're curious about how it was done, there is this open source library 3D force directed graph, and it just doesn't take much to do this. Here is two lines of cipher, and the rest is just very simple JavaScript, and you can have this visualization. Another very cool data exploration tool and a way for scientists and informaticians to explore the data is using Neo4j plugin called SemSpec. with biomedical data because it tends to blow up. There are a lot of relationships. And it allows to define strict constraints on relationships and attribute types. And it also collapses those multiple nodes of the same type into just one blob. So you avoid hairballs in your exploration. And as I said, you can also... So you can constrain... So it's similar to Bloom, right? Where you are able to have constraints on relationships. Here you also can have constraints on attributes of a node. So here, for example, we have 1,289 diseases and we can see them in the table and we can apply additional filters on that. And what is cool, we can also propagate those constraints backwards in this visualization. But we can do even more advanced things. So like you can already make some discoveries just by doing the simple exploration in the graph. But you can do more complex things. So example, let's look at asthma. You can try to retrieve all the genes that we know to be causally related in the correlation way to asthma. And well, we get 355 nodes, 8000 relationships, a bit hard to tackle this. So we would want an extra step to kind of try to make sense of it. You would bring pathways and functions into the picture. You connect those genes to pathways and functions and bring that into the picture. And with doing that, now you can apply graph data science methods like Louvain neighborhood detection to color to color this cloud of nodes in a functional way. And then you can also reduce the number of nodes by filtering by centrality. And now it becomes much more manageable. As an extra step, you can add also known drugs to the picture and then yeah here you already have then some hypothesis for maybe drug repurposing that you can start exploring maybe there is oh here is a for example immunosuppressant approved for atopic dermatitis and it appears here maybe connected through this chain to the asthma so this is already a hypothesis this is kind of an example of what users of our knowledge graph the type of analysis they might be doing another way of getting information from a knowledge graph that is not there because like this is like what we how do you get extra insights is using link prediction and here we I want to show you an example where we defined train and test data sets using Neo4j you have to really carefully dissect as a graph and split it so there is no leakage of information between those data sets. And we did a gene disease link prediction and we used complex embeddings, as I called complex, because they capture directionality in the data and also different relationship type. And we used a library called DGLKE for training and for benchmarking it, we compared it to to predictions based on node degree to eliminate the biases related to the fact that certain genes are just overly... there are a lot of connections just because there is a bias in certain types of genes to be over-researched. There are trends in science that certain types of links are multiple publications. So this is a way of eliminating this bias in our benchmarks and so the advantage of using complex embeddings and capturing topology of the relationships can be seen in this gap between the dashed line, which shows recall of using node degree and the solid line. So for certain types of diseases, the gap is really significant. the gap is really significant. For example, for colorectal cancer, there is a huge evidence for the model to be really powerful. And so now jumping to biomedical. So this is what is possible using biomedical knowledge base that is human curated. So there are extra things you can do with AI extracted data. And if you integrate the sources such as Google patents, bioarchive, meta-archive, you have a wider scope of information in your graph. And so it results in around 600 million relationships that we capture. It enables new use cases. So, of course, the old ones like discovery of targets, clinical developments, and business development strategy. So here's an example. I have to speed up. I can see. So here's an example of the structure of KBEI. And it's a capture of semantic relationships, which are basically when there is a concurrence of two terms in a meaningful way, there are causal relationships, have also adverse events, clinical trials, GVAS studies. And the new use case that I will just finish with, it enables is competitive intelligence in the preclinical stage so you can see how for certain targets what are the patent mentions across different competitors in different pharma and across the time you can also see the same slice of but across clinical trials again who is looking into the clinical trials for GLP-1 and yeah indications for GLP-1 by top pharma companies what are the indications that are being investigated this is all can be extracted from this from this vast amount of data that is available in KB-AI. And of course, rare disease use case is also because of the scope of information is enabled. And there is a hackathon ongoing right now exploring rare diseases with this data set. Yeah, I would like to thank Neo4j for invitation alex and alexander and of course all the kaijin folks who worked on creating those products and vince from samspec for the license thank you thank you dimitri for your nice presentations I think we have time for one quick question, then we have to hand over to AstraZeneca to Antonio. Is there a quick question here in the audience? Yes. You mentioned the important fact of being a scientist, biologist being biased towards certain pathways. You mentioned a very important point of biologists being biased towards investigating certain pathways because there's already something known so they can build hypotheses and then you know it's just like vicious circle so is your ai based database somehow democratizing it more or actually just propagating it because it finds even more of connections of the same type i i would say uh it would be subject to what it can it there is in the literature so whatever biases exist in the literature, they would be also seen in the KB AI data. Thank you. All right. Then we head over to Antonio. Is Antonio here? Thank you again, Dimitri. Antonio is here. Thanks for joining us from the UK. We'll see your slides in any second here. Thanks for joining us from the UK. We'll see your slides in any second here. Yeah, we look forward here. So we have kind of 10-ish, 15 people left in the live audience. Here we go, Antonio. And we're looking forward for your presentation about generative ai and yeah how you um place it as a platform at neo4j within astrazeneca so over to you thank you i can hear my own voice let me try again. Okay. Can you hear me now? Hello? Okay. Well, I will... Yeah, we can hear you. Fantastic. Well, thank you, Alex and Alexander, for inviting me and present the work that we are doing in AstraZeneca with Knowledge Graphs and Neo4j. As you said, I will be talking a bit about how we use graphs in AstraZeneca. We build a service, an internal service, and we are also applying generative AI in our different projects so let's start what what I will what I have here prepared is a bit on what graph knowledge graph and graph databases are but I guess that this won't be needed I will definitely stop for a little bit on how we go from data to for data and wisdom and then i will continue um with the rest of the of the presentation uh let me this one um one one thing that i i will just say is like i like this very much alex i i think i stole this one i think i know i stole this one from from your slide deck because it's like to explain what is a graph, let me tell you what is not a graph. Because sometimes I talk about graph to people and they imagine images like diagrams and this sort of things. This is not a graph. So graphs for us are this. But let's don't stop on the graph. I guess you have discussed today about the different approaches like the RDF or the LPG approach. the different approaches like the RDF or the LPG approach. In my team here in AstraZeneca, we are focused on the label property graphs. That doesn't mean that in AstraZeneca, we only use label property graph. We also have people who work with RDF. But in my case, what I will be showing you is all about Neo4j, so label property graphs. Let me jump this. Yeah, and I will then talk about this one. So I thought of titling this one, Getting Ready for the Future. Titling this one, getting ready for the future. And the reason why I put this is, um, nowadays the growth of the data that, the, you know, we are getting data from several devices, uh, producing data in our, um, supply chain or in many, many processes. So the growth of this data is huge. It is, it is, uh, is so much that we need a very performing way in which we can deal with all this data. And for that, we see that using knowledge graphs, it gives us a benefit. If we compare that with the traditional data management systems, also it depends on the use cases. But for most of them, when the relationships are really important, or as important as the data points themselves, or probably more, we just get into better performance, we get into better benchmarking. And also, if you just define or if you just store your data in the shape of a graph, because this is very good for machine learning, because it's contextualized, so you can now use your data in machine learning or artificial intelligence, because this sort of approaches, they take advantage of this way of representing the data. So now, why knowledge graph at labs and life sciences in AZ? Well, in AZ or any other place. I talk about AZ, but this is in general. So first, using graph databases helps you, as in other type of database, integrate data from several sources. But now there is a way in which we can keep the provenance and we can have all this data like contextualized based also on where the data came from and everything. Now biological data or data in biology or biopharm is very complex. It doesn't mean that in other areas it's not complex. It means that in our case, the level of interconnectivity with the different domains, not only within the domain, but across domains, is huge. domains not only within the domain but across domains is huge if we use graphs that is giving us an advantage and it also facilitates the advanced analytics generating hypotheses supporting the decision making for our professionals and many more so because having all our data contextualized and in the shape of a graph, that adds meaning to it. So it has semantics. These semantics, we can use them to perform advanced search. And with that, because it's also not only based on the data points, but how they are related one to each other, we take this to the next level. are related one to each other we take this to the next to the next level now with knowledge graphs and graph databases we have data-driven insights and support the decision making we can do that either you know programmatically speaking or we can make use of generative ai to to support us in this process so that's the why now if it allows me the challenges that we find ourselves is that we we need to decouple and specialize meaning that we need to have a very clear idea on the stages of how what how do we treat the data where do we do what, avoid duplication, and not only duplication of processes but duplication of data. So that means that there's the need for a modular design and always having the security in mind. As you know, security, if we are talking about trial data or patient' data, security is one of the most important things. We will be processing data that belongs to our patients and that has to be paid attention with very carefully. And for us, even though I talk about GAR graphs, we think of our data as data products. So we have to manage to put this data as a product to later on use it in a graph or use it in, you know, in a different flavor if that is required because of the asks. So now another challenge is integrating and standardizing i don't know if integrating is a is a challenge like you know but definitely standardizing it is so applying to the fair principles and making sure that you share vocabularies across the company is is a big thing because companies like astrazeneca are very big. So having every other, you know, department group or people working in different projects, they're using the same control vocabularies. That is a challenge, not impossible, but that's why this slide is called challenges. And finally abstract, abstract and automate is because the idea is, well, let's see what is common across the different domains and different projects and let's try to make that as a service. And let's leave what is different just to implement in the different cases. So these were the challenges. This is the plan. I won't go into a lot of detail into it. I just wanted to show you here because this is like, take this as a bird's eye view. So the idea is that you have the data sources, the ingestion and integration. Then in the data platform, as you can see, there are different ways in which we can process our data. Snowflake, Snowpark. We have the different tools that you can find in the cloud. We also have the Data Lake store or the Data Lake computer. So depending on the project, depending on the decisions made in the past as well, because you don't want to actually change absolutely everything that you do everywhere. So that's why we have this variety. And now the next step here would be the Knowledge Graph Service. And in the Knowledge Graph Service, what we have is the data store and the graph compute. We use the GDS, Data Science Library, and we use the libraries like APOC for cipher. This is all this part is in AZ. What we have done is that we have a service, an internal service that for our different projects, we can spin up Neo4j instances where it will, the data will be compartmentalized and we will apply the security, a role-based security in which we will define who can see what, to what level and later on it will also help us interconnecting the graph databases via the federated queries that I will be talking later on in this presentation. And finally, on top of the graph database, we have the software ecosystem on top of the database. And in there, we can have tools for exploration or graph analytics or in-house build custom tool depending on our BTGs or our project needs. That was regarding the plan. Now, this bit, I like it a lot because it's like how you go from raw data to having fair data and wisdom. it from the data sources i i we can see the data as dots on a plane where they might have they can have meaning or maybe they don't have any meaning by themselves but that it depends on the data and the source and the domain now if you apply some grouping in this data. And you can see these as different colors for the dots. What we get now is information. So at this point, we have a clearer view on how our data is classified. So if these data points, color data points are connected one to each other, at this point what we are doing is contextualizing. When we contextualize, we move from information to knowledge. And here you can already see where I'm going to. When you have your data contextualized which is knowledge then finding your insights becomes easier that doesn't mean easy but easier and once you have your insights getting into the wisdom on what you want to find out the relation in between this or how close or far or you know where in the in the landscape this is that is what what we call the wisdom and let me read the last sentence here wisdom is the ability to use insight to facilitate informed decision making and that is the key point so now fair data i guess everybody here knows what FAIR data is. It stands for Findable, Accessible, Interoperable and Reusable. I always say that in the past scientists they find, you know, they run their research, they demonstrated their hypothesis and then write a paper, send the paper away, it gets approved and someone else reads it and you don't need more than the paper, that's fine, you learn and you carry on the research on top of that. I mean, you don't reinvent the wheel. Now, nowadays it's not only reading a paper. The paper has to come with probably a lot of data for you to being able to replicate that. So if you don't have access to this data and you cannot reuse this data, it will be very difficult to continue the science the way it was that has always been done. And since we have entered in this digital age, the amount of data that is produced reaches astronomical sizes. So having a mechanism in which your data can be reused is very important. And that is what the FAIR principles are about. So now let's see how we combine these FAIR data principles and the pyramid from data to wisdom that I explained a moment ago. So how we do it in our team in AZ, what I'm going to tell you here is not that this is the only way or it is the way this is how we do it okay so i'm happy to have feedback and it's okay if people do it differently because there are many ways of doing this so how we did it how we do it is when we retrieve the data from source oh actually what you get at this point is data it's nothing more than what you retrieve now you have to go through an enrichment process in which you apply control vocabularies taxonomies ontologies you make this data interoperable by adhering to the same control vocabularies and then you jump from data to information now if you define a model for your domain for your use case for your project and then you contextualize into this data model then you move from having information to having knowledge and knowledge at this point is reusable because it it has the context it it has a model then out of this we instantiate this as data production data produce can be stored in many different formats JSON CSV XML packet you choose that is is depending on on your company requirements okay now if you put this data if data produce a catalog, you make it findable and then you also make it accessible. So at this point we have fair data or fair knowledge in this. And what we do in the projects we run is that we take these data products and we put them in the shape of a graph. And that is the graph database instantiation and with the graph traversal, the cipher queries, the graph data science is how we find the insights and get out the wisdom to answer the questions that we have at hand. Okay, so that would be the end to end. Now, a glimpse into the future of graph here in AZ. We have Johannes Zimmermann in the audience and he presented earlier today Ontoverse. Johannes, Ontoverse is not yet here because I'm using the slide deck that I had approved in the past. It will come in here but I'm not worried because today you guys have all heard about this nice Ontoverse project that johannes is leading so thank you now um potential applications well you can imagine supply chain planning activities quality control that is all in the make and supply then we also apply to drug discovery compound patient study this is all in the research and development, but you can also apply this to the sell and market for the CRM or competitive insights, real world evidence. I mean, everywhere where the relationship between the data points is as important as the data points themselves or more, is a good place to apply graph databases. And these are the potential applications. So now I will start explaining or talking a bit about the biological insights knowledge graph in which you know our scientists they we are providing with this graph in which which they encode drugs, diseases, indications, genes, interactions, everything. So this can be used in our, let's say, by our scientists to take advantage of this data. And of course they can use graph machine learning and that will help to have better drug discovery or you know to the to help them make better decisions on what is going to be studied or how things are going to be done on the other hand we have the competitive intelligence knowledge graph which integrates data from many different data sources where they gather information when the pharma companies will go into the trials public all this data needs to be shared and once this data is shared you can then retrieve it and we use this data in a graph because we what we want is to you know to have a view of of the world in in the pharma and and the different areas the different trials that are going on by other companies in different um um indications and so on and so forth just to help our leadership teams to to make better decisions or decisions based on data on what is next or where it's worth putting more effort. We also have supply chain knowledge graph. As you can imagine, this is all the digital twin in which you have all the information of what happens in your pipelines on how the materials from the different suppliers arrive to AstraZeneca, how you use them to to produce the the the products the tablets so all of this is put in here and then if we are information about the co2 or the water waste all this you know we can then leverage that in the supply in this in the sustainability initiative in which we can assess the the company to, hey, look, if we improve this process, we will make a huge change in the impact that the company has in the CO2 emissions and everything. And this one is a very exciting project, to be honest. We are very happy to be working for this as well. Now, in the area of compounds, we have three graphs, which are the compound synthesis and management, the high throughput screening, and the compounds and fragments. The one in the middle, this guy is huge. I mean, this has 1.3 terabytes of data as per today, and it has cost AstraZeneca millions of pounds to get into have all this data. of pounds to get into have all this data. And by having this in the shape of a graph, it is increasing the quality and efficiency of the future HTS screens that we will be performing. So this one is huge. On the compound synthesis and management, this is similar to the supply chain, but in how AstraZeneca produces the compounds or compounds that we buy from third parties and how these are distributed and it helps like managing the shipment and how we schedule the different, let's say experiments in the labs based on the availability of compounds. And then the compounds and fragments, this has all our database of compounds broken into fragments all the relationships and we use the neighborhood as one measure of similarity and we add many more algorithms in which we can then see other interesting things about the compounds. These three are in three different databases but we are using query federation to just run queries against the three of them at the same time. This is very exciting, it's impressive. Now in the farm side, we have formulation, the knowledge graph. I have here with me Sandra. She's working in the formulation knowledge graph and the Boston formulation knowledge graph. These are in the preclinical area. And these graphs are helping or leading us, leading our scientists to quicker and more effective scientific development. And of course, they enhance the collaboration by breaking down all these silos in which the data was before. So now we can connect all these disparate data sources into one place. It's a fair explanation, I would say. Then we also have the lipid nanoparticles. This one is run by Susi. And this is the same as in vivo. And it has also activity from in vitro. And well, they have put all this data in the shape of a graph. And later on, the idea is to apply the machine learning models and get more information or more wisdom out of this data. It's growing by the day, so it's very nice. Now let's talk about federated queries without showing you federated queries. And the reason why is because we say, or I like to say, that graphs are stronger together. So when we think of graph databases for different domains, we could imagine like islands in an archipelago in which one of these islands is a different graph database in different domains or different graph databases in the same domain. So now going from one island to the other is challenging. I mean, for sure there are many ways of going. You can use a boat or maybe if you don't mind getting wet, you just swim, you can go from one to the other. But the point being is, what if we build the right infrastructure? So in this case, you know, this road that nicely goes from one island to the next and so on. So if you think of query federation, this would be like these bridges that connect one island to the next, okay, and that will make it easier for us to just go through the different islands to reach our goal. By doing this, we achieve seamless integration of the data from the various sources, that's for sure, not only in one domain but across several domains. We overcome data silos, which was the idea, and by doing all of this, what we are doing is increasing the value of our company data assets, either internal or external, because you can now query them all as if it as as if they were in one place and that is very powerful if you think of it so um i i just put here a a an example but i i won't go through it because uh i don't know how am i doing in time, but I'm probably very close to the end of my of my slot. Just letting you know, we just run this POC using BIKG and CIKG as the internal databases, graph databases. These are company restricted. And just to prove that this can connect, we also use ReactOn, which is the biological pathways knowledge graph. This is publicly available. We just deployed it internally in Azure, and then we used query federation across this stream. And the idea was to have better data integration and analytics, more accurate information, gain novel insights, and then achieve better informed decision making. Of course, now we could have done this by just creating yet another ETL pipeline, taking data from all these three databases and create, you know, putting into a new one. But we didn't do it like that. We just ran the graph ran the federated queries. Plus, we can run on existing infrastructure. We don't need to create new. And then there's no need for upfront pipeline development. And yes, as the cons, you can say, Antonio, but the queries when are federated are a bit more verbose. Yes, but it's not that much and it's worth, you know, the benefits are worth. And of course, when you connect several elements in a system, the overall speed will be always tied to the speed of the lowest, of the slowest, sorry. Oh, you know, but let's assume that you configure all of this based on the requirements and the load that you are expecting to have. Okay. I will jump through the example. Maybe I will explain one of them only. Let's pick this one. Yeah, for example, here, let me go one earlier. So here the question was, oh, what is going on? Apologies. I have genes in BIKG and these genes, they have a side effect. Okay, they are um we know that the the there is some there are some genes you know in the bibliography that they are associated into a um side effect they call it side effect but you can say uh indication yeah um but also in that's in BIKG. But in CIKG, we know what are the drugs that are tested in trials. And then we know these drugs, which genes are targeting. So now if I just say, okay, I want to know genes that are related with pulmonary hypoplasia and these genes I want to know the trials that the drug is targeting these genes. So as you can see one bit of the information is in one place and the other is in another. So by running this federated query that you're seeing here the result is this and here we see the pulmonary hypoplasia is this red dot in the middle then if we go one level these are the the genes in bikg that we know that have some sort of relation with this now the purple ones are the counterparts in cikg and from this you know what are the drugs targeting this gene and from these drugs you know in which trials they are being tested so the answer to the question is the orange notes okay and of course if you now add react on data you can also add not only the the trials the genes but in the counterpart of the genes now you can know the areas of biology where these genes play a role, you know, the reactions, the pathways, so this would be the areas of biology, the reactions, and these are the identifiers of the trials. This was for PITEN, but in any case and to finish i will explain a bit on this talking to your knowledge graph sorry the the name is not final i just put the ragazza ragazza thing because i found it fun but this is me naming things this all of you for sure know when you use generative AI, you are at risk with this AI hallucination. The hallucination is when the generative AI just gives you an answer back that makes a lot of sense when you read it, but doesn't necessarily need to be true. If you just use generative AI to make decisions, that could lead you to a wrong decision, misinformation, or even dangerous situation. Imagine that you are just using that for a self-driving car. That could be a problem. So we were thinking, can we avoid hallucinations? That was our question. And then there are two things. One is like, well, you can keep training your models with exhaustive and updated information and then check very regularly for inconsistencies. But this is an ever-ending problem, at least if you do it with this naive approach, as I explained. But we could also, for example, use AI to write queries against one database and then run these queries against the database and see what the result is. So now hallucinations will still be there, but if the query is wrongly written, it won't probably give you any result. And if the query is written, it's right, then it will run and it will give you the results. So if you base your answer in those and the data in this result, you know, that is what we are looking for. So here, the point being is that we need to have a source of truth, a graph database in this case that we have built ourselves that is reliable or gold standard. We also need to make sure that this database is updated and accurately represents the domain for which it has been built and then by doing that, what we will do is generate these queries, run the queries and interpret the results. It's important to say this approach is not applicable to creative tasks. Okay. So what we end up having with the ragazza is we allow experts to ask questions using natural language. We have AI driven instant responses and avoiding hallucination or more than avoiding when hallucinations happen. What we have is that we don't give any answer. Then this is this approach will allow so we don't need to send this data or to train any model with our data. So this data is not shared with third parties. It provides a personalized experience per data domain because we can tailor this to the different domains. And by doing this, we have like an engineer that can help you writing your own queries 24-7, which is interesting. How this thing works, you probably have seen this during the day probably more than once. But there we go yet another time when our experts our professionals our scientists they type a question in the chat and this is sent what we do is that together with the model generic and domain specific rules some training set and of course the previous interactions because you want to keep having like a conversation with the chat. That is all sent to the OpenAI, Microsoft Azure OpenAI. What that gives us back is a cipher query that we run against the domain graph database that will produce a result which is in JSON. This JSON is now sent to the generative AI, to OpenAI, with some rules to say, just provide me an answer solely using the data that I'm providing you. So generate an answer to this question, make it look like a chatbot answer, but don't invent, don't add't don't put more but don't miss anything in this set of data that i'm giving you and this is what is now sent back to the to the user as the response yeah i hope i'm in time i think i'm a bit late apologies for that this is not a one one man band this is a group of people working on this actually a large group of people doing graph and knowledge graph in astrazeneca but not only employees in astrazeneca also people in neo4j that they have been helping us and you have them on the slide on the on the right hand side thank you very much and if there is time alex i'm happy to take any any questions thank you very much antonio all right is there one question from the audience here no question from the audience yeah i have one i have one from from the internet uh corey asks graphs get huge after adding all the info and you contextualize it how do you make it accessible graphs get huge after adding all the info and you contextualize it how do you make it accessible to to your clients to outside people do you use bloom or use custom web apis to get insights in a friendly use or do you train your clients to code cypher queries actually it's a very good question and the answer is it depends on the user. So there are more technical users that might be other developers that they use the graphs. In this case, we introduced them to the fantastic world of graphs and graph traversal using cipher queries, graph data science, and then they are in this interface of maybe the browser neo4j browser or if then what you have is that the user is not like computer scientists let's play like that but they still have a good technical knowledge and you can offer tools such as bloom in which you can explore your data, you know, and then take advantage of having it in the shape of a graph. We offer also this possibility. Let's not lose sight of other people that they don't have technical background and they need to have dashboards or they need to have custom UIs that are built just to answer their needs with some visualizations, some search, you know. So at the end of the day, we offer a variety of options. But we are always happy to teach Cypher to people and just to make this community bigger in AstraZeneca. So it's a bit of everything. It depends on the user. Unfortunately, sometimes it's often the answer we have to give. It depends. Even though we don't want to give that answer, but it is. It is. It is true. Cool. Thank you very much, Antonio, for the great presentation, for the great overview. And yeah, we continue with the program almost at the finish line. Second to last presentation is going to be from Matthew. You're going to say a few words, Alex, to introduce him and then we continue. Yeah, exactly. Thank you, Antonio, for your nice presentation and looking forward to see you live soon. So next presentation is about glycoproteins and sugar monomers. Super interesting presentation by a startup company from Australia. So that's why it's in the middle of the night. I think it's two o'clock at night. So that's why we have a recording from Matthew. Yeah, they look for glycopatterns and doing mass spectrometry and using cipher to query the incomplete dataset and then connecting it to literature data. So really looking forward for the update. Last presentation we had at Nodes 2023, which was already really interesting. So let's see what they have in their update. Hi and good afternoon. Thank you for that kind introduction and the opportunity to present some of our work at Interven to unlock glycoscience with knowledge graphs and the development of regenerative AI solutions. A majority of the work that I will present this afternoon is a continuation of the story that Karina presented at NOSE 2023. Last year, we introduced the Intervenz activity to build a glycocentric knowledge base that complemented other omics studies. Karina has been pivotal to the success of the work so far and has been instrumental in designing the models and schemas and the ingested data, both from internal and third-party resources. So protein oxidation plays a fundamental role in health and disease. It orchestrates steps of protein folding in NER. It fine-tunes protein interactions as well as protein-glycan interactions. Glycans shield proteins from antibody binding as well as pathogen interactions and also contributes to malignant transformation events. Glycans are highly dynamic in state. So if we look at the glycosate and protein model on the left hand side where glycans are shown in blue, over the time course of this simulation you can see that most of the protein surface is protected or covered by glycans at some stage. The aim of glycoproteomics is to understand how glycosylation shapes biological processes by understanding peptide sequences, glycan structures, and sites of modification in a system-wide context. In a simple example, we have three possible in-glycosylation sites. With 10 different possible glycans at each site, this would lead to 1000 glycoforms per protein. We also must be mindful that even though we have a possible in glycosylation site present on the peptide sequence, that site may or may not be occupied at any given time. It is therefore this rich diversity that contributes to protein function. It is therefore this rich diversity that contributes to protein function. At Intervene we have developed an integrated analytical and informatics solution for glycoscience. Blood samples are collected from patients. Liquid chromatography and mass spectrometry platform is then used to separate glycopeptides from these samples. The abundance of glycopeptides is then determined by an in-house platform called PIP. PIP has the capability of performing automatic peak selection from LC-MS data. In turn, ML models are built from the abundance data determined by PIP to diagnose disease or predict its pronostic state. In addition, the proteins and glycans associated with these glycopeptides, identified by the models, are further investigated in previous publications and over diseases. This is using data obtained from internal or public resources and contributes to the knowledge Graph that we are building. The aim of our GenAI and Knowledge Graph projects is to provide the community and industry with access to an AI glycoscience discovery platform. This will bring together clinical and translational data collections, complemented by analytical data collected through our MRN platform, enabling us to build a preparatory clinical, structure, and functional glycoproteomics graph knowledge base. Together, this will allow us to model changes in protein coagulation and disease. Ultimately, this will allow us to discover novel disease biomarkers, free multi-omic analysis, and identify functional pathways. We are also extending the look at generative biologics, building models of human disease, and exploring biology at scale, including the de-nerve optimization of glycosylated biologics. The GenAI and KnowledgeGraph platform that we are developing complements an existing product pipeline called GlycoVision. GlycoVision is generating insights from biomarker discovery to clinical actual testing. As previously mentioned, it relies upon PIP. PIP is a piece of software that translates complex LC-MS instrument output into easy interpretable digital data powered by AI. By using recurrent neural networks and trained on Manley-Anset and Gram-Douglas peaks, PIP automatically determines or extracts peaks from LC-MS data, therefore allowing high-throughput glycoproteomics analysis. This data, combined with information on the clinical samples, is then used in part to build components of our knowledge graph. A glycoproteomic data combined with public datasets to provide unique insights on disease and related pathways. In addition, we provide information from the glycovision package, which includes expression profiles, analysis, and reports of clinical samples run. Similar to other initiatives that we heard about earlier today, the Interven Knowledge Base provides information on pathways, including protein-protein interactions. It provides detailed information on disease and disease mechanisms. It also includes information on the binding and mode of action of chemicals, drugs and other chemical entities, and also protein function and gene regulation pi processes. Importantly, what sets the pi is the inclusion of proprietary biomarker data, and also the connection to glycomics and glycoprotiomics. We have widely provided information on site-specific annotations, and therefore the glycoprcol forms and the structures. But we also relate this back to the biosynthetic pathways and subsequently other protein-protein interactions and also the impact and role of how drugs or other chemical entities affect the regulation of the building blocks of glycans and subsequently glycoproteins. Here we provide a brief snapshot or an example of the main glycocentric subgraphs of the knowledge graph. On the left-hand side, we see the samples, i.e. the clinical data used to build the graph. We have the associated glycopeptides. We then link this to protein-protein interactions. These are linked to curated resources, i.e. the references. Here we have the glycan structures associated with the glycopeptides, and then how these are linked to protein-protein interactions or pathways. For each glycan, we then link to a glycan or structure topology subgraph. This subgraph shows the connections between the monosaccharide residues, therefore allowing both motif and epitope discovery of glycans, associated glycopeptides, and subsequently relationship back to clinical samples and therefore an understanding of how glycans change in disease context. Consequently, the Interven Knowledge Graph is really connecting the islands between multi-eumic data collections. Our Knowledge Graph offers access to comprehensive data collections from in-house clinical data to third-party resources. Ultimately, we are bridging the gap between other omics and gloca science. Therefore, this is enabling more stronger, broader biological discoveries to be made and ultimately more of biologically meaningful questions to be asked of the graph. The diagram on the right, similar to other diagrams we have seen throughout the day, really indicates the relationships that we are building in the graph, whereby for example a gene encodes RNA and subsequently a glycoprotein. This glycoprotein interacts with another glycoprotein. This glycoprotein interacts with another glycoprotein, which can be described by a publication, or is mentioned in some sort of clinical relevant patient sample data. We have information on drugs or other entities that can interact with a glycoprotein, and subsequently the protein-protein interaction, we have the information on the disease state and we can link this back to any clinical trial information that's available. So if we bring all this information together, our knowledge graph features exploration of complex molecular network interactions and pathways, drug binding actions, protein function and disease specific contexts. All of this information is also related back to glycoproteomic and glycomic associated knowledge. We also enable language search through custom phases, full text and pattern mining, building visualization and rendering engines for molecular pathways, providing complete information on data provenance and metadata, and building new and exciting tools in the GenAI space that will allow users to interact, interrogate this curated knowledge base with new AI tools. For the remainder of the presentation, we want to show a few use cases where we combine information from third-party resources or experimentally derived data and how it links with the Internet Knowledge Graph. As many of you would be aware, PD-1 engages with the PD-L1 and suppresses T cell activation. Many monoclonal antibodies target in PD-1 to block latex interaction with the L1 protein, enhance T cell activity. More interestingly is that several of these antibodies have an overlapping binding epitope that includes the N-calcisation site at position 58. And this is clearly shown on the models with the N-calcisation site circled on the top right corner. Work by ourselves and others has clearly shown that the core fucose, shown here in red, of PD-1 at position M58 is a key determinant in antibody binding. PD-1 fucosation determines PD-1L1's blocking efficiency of some given antibodies. It has been shown that fucosate is soluble PD-1 is higher in serum of late-stage lung cancer patients, and therefore variable glycosylation modulates the activity of monoclonal antibodies used in clinical practice. For the purposes of this exercise, we wanted to show if there was a connection between the PD-1 protein and CDL4, both immune checkpoint targets, and if it connects with a panel of glycopeptide biomarkers on our glycovision product. The clinically derived glycopeptide biomarkers of glycovision are stored in the subgraph of the intervent knowledge graph. As you can see here, by using three degrees of separation, we clearly show that both PD1 and CDL4 have direct connections to the regulation of biomarkers on the glycovision subgraph. As such this clearly demonstrates a role in regulation of these two proteins on the top and its connectivity with glycoprotein expression and regulation. For the purposes of this presentation, we omit the 10 proteins shown in the bottom panel, however we are quite happy to discuss these further offline. In addition to providing clinically relevant glycosylation data on PD-1, the IntervenGlycoKnowledge graph also includes data on other cancer types. This includes melanoma. As part of our GlycoVision products, we have identified biomarkers associated with immune checkpoint inhibition therapy and melanoma. The figure on the right-hand side clearly shows the network model, describing relationships of the clinical design studies shown in the center, and the glycopeptide markers identified has been off-regulated or down-regulated. For each of these N-glycopeptides, we showed a composition attached, in this case, by 412. In addition to displaying the glycopeptide, we then have a direct link to the protein node itself. And from this protein node, we can go away and extract data from Uniprot, Nexprot and other resources that describe the functionality of that protein. A summary of those 12 N-glycopeptide markers shown to be involved in melanoma immune checkpoint inhibitors displayed here on the left hand side. So we have the protein, the site of glycosylation, and the glycan composition. We also do have a non-glycosylated marker here as well. These two examples of clinical data collections graph really show the power of our design to incorporate metadata on clinically relevant samples, analytical data collected, and how we can connect that information to other functional pathways and interaction studies available in the public domain. This really allows the end user to ask very complex, biologically meaningful questions of the data available in the graph. So over the last few months, we have expanded the diversity of information in the graph. So over the last few months we have expanded the diversity of information in our graph by curating internal data collections and ingestion data from other facets of life science. As you can imagine this has increased the overall complexity but also the quality of the products. To help improve data discovery and end user access we are developing a suite of Genio tools. As an example we are developing a suite of GenIO tools. As an example, we are building upon the text-to-cypher shared by Obers, that we may hear about later today, to build agents that use the provider graph schema and LLMs to query the underlying database. So far this has proven relatively successful for more simple queries, and with the next releases of the products we'll be providing novel tools that unlock Glycoscience for the life science community and industry. During this presentation we have hopefully given a snapshot of how we can improve the connectivity of Glycoscience knowledge with other multi-economic data collections. The tools and databases we are building are truly novel, and we believe we can improve our understanding of the role of glycosylation in disease. Provision of an integrated analytical, informatic and gen-IA toolset will help simplify data discovery, but also accelerate the fine-tuning of models with glycocentric applications. So I'd like to take this opportunity to thank the organizers for this chance to present some of the work that we're doing at Interven. If you have any questions or would like to follow up, please feel free to contact me at these below details. Thank you. Super cool. Thank you very much, Matthew, in absence, but is excused at 2am in the morning. Exactly. It's clearly not a time you want to do a presentation. I hope you liked the presentation as well. I found that super interesting because most of the people talk about proteins, but always forget that these goddamn beasts are glycosylated and makes it even more complex to work with so interesting stuff so that brings us to the last presentation the last session of today uh i hope i hope too much can we you know we are a little bit early so um he's probably just joining about the session. The last session we are doing today is a bit of more hands-on session. Exactly. We're going to show how a graph rag actually works. If you want to implement it, if you want to do it really. We heard about it a lot. I mean, Antonio said that you probably heard it a couple of times when you showed the classical diagram of how it works. Actually, he was right. We heard it a couple of times, but it's good to hear it a couple of times more, I think. But today to finish the session, we are going to hear about it, but also see how it works and how you can experience it and take that on basically as a little inspiration if you want to continue doing the same at home. We have, or you have, prepared a data set for Tomas. Well, it's a publicly available clinical knowledge graph by Matthias Mann, so pretty much known in the field. That is kind of the basis for creating a chatbot, and I think it's one of the best person to have Thomas here, who is the champion of working with LLMs, optimizing them, trying things out. How do you say? Get the feedback, communicate the feedback, what works, what doesn't work. And I think that is really valuable for all of us, because in many ways, right, GenAI promises everything. But in the end, we have to be realistic. And I think that is really valuable for all of us, because in many ways, right, GenAI promises everything. But in the end, we have to be realistic. And that's why I think we should do things, break things, as I always say, and try to learn from that. So we shouldn't be shy of breaking databases. Yeah. Just don't do this with your production database. Yeah, just don't do this with your production data. Yeah, exactly. But yeah, Tomas is great. I mean, I don't know if you if you follow maybe the live streams of Neo4j, maybe you subscribe to the weekly newsletter we're sending out, or maybe you just follow us on a medium where we where we regularly post on the developer blog. But Tomas is like, like you said, a lighthouse instance in Gen AI blog since he took on the topic, I think, early last year already and before that already, but with the advent of Gen AI as a whole, he was like the leading voice, I would say, at Neo4j. He wrote so many blog articles, so many hands-on interesting pieces, how to work with Langchain, how to work with Lama Index. Just recently, he wrote an interesting integration there. We have a GenAI Labs page, so maybe that's an interesting shout-out before we dive in with Tomasz. If you go to neo4j.com slash lab slash gen ai ecosystems you find everything you can you can it's available in terms of content around neo4j and gen ai so some hands-on stuff as well should be interesting but i see Tomasz is ready so without further ado let's dive in to the final session of the life sciences workshop day of 2024 and yeah good good to have you too much uh thank you for doing this and uh yeah as a last point of the of the intro to thomas he is a real person as you see uh if you see the amount of posts and articles he does you could think like thomas is an llm itself but or maybe that's a new gen ai with with some animated image here. So, Thomas, over to you. Looking forward for your last presentation. So, okay, just let me know that you can hear me. We can hear you. We can hear you, yeah. And you can see my slides. Okay. So, I'm going to assume you can see my slides, but there's just some echo, unfortunately. Okay, it's fine now. Okay, so my name is Tomas Velatenic. I'm gonna talk about graph-electro-life science today. I'm not an MD, like a medical doctor, so don't expect in-depth medical explanations, although I worked on a medical project with a biomedical institution here in Slovenia, so I have some idea about biomedical knowledge graphs in practice. And that's enough about me. So let's dive right into it. Right. So today we're going to talk about LLM based chatbots using the retrieval augmented generation. So as we all know, LLMs are really great and they know how to answer a lot of questions. But one of the problems they have is the knowledge cutoff date. All the information that was part of the training data, you can kind of assume That was part of the training data. You can kind of assume that the LLM will have some idea about that information, although we don't exactly really know what happens under the hood. We just kind of take it for granted that LLMs can learn from training data. And like in medical field, like that's kind of a problem, right? Because I think GPT-4 is trained up to like October 2023. And then it's missing the last half year of research papers. And like in the biomedical space, you always want to know what's the latest and the greatest today and another problem is hallucinations so specifically with biomedical data for example if you're gonna ask uh uh llms to cite some research papers i am guessing you'll get quite bad results. So you don't want to do that. Like here we got examples where lawyers asked LLMs to cite legal cases, but like citations are notoriously bad with LLMs. And then another problem is the lack of information. So you don't really know the information the LLM was trained on. Because LLMs are trained on web data, And there's a lot of conspiracies on the web. So if your LLMs would start spewing conspiracies, that would also be a problem, right? And kind of the lack of domain specific information is also a problem. Although I don't really know how much of the biomedical information is fed to the general LLMs. So, the accepted solution to overcoming these problems is the so-called retrieval augmented generation, where you pass context at query time. So the flow is the following. A user asks a question, we have some sort of smart search tool. The tool has some ways of identifying or retrieving relevant information from your knowledge base. And when the element documents information is received, we feed that together with the question to the LLM. And then the LLM can generate hopefully an up-to-date and accurate answer. So in this case, we are kind of ignoring internal knowledge of an LLM. And we're just using LLMs as kind of like interpreter instead probably know, like chat with your PDFs is kind of the hello world of LLMs. So you take a bunch of PDFs, you chunk them into text, you embed them using a text embedding model embed them using a text embedding model into a vector database. It can also be just vector index because Neo4j has a vector index as well, and it's not a vector database. So that is kind of the ingestion part. Obviously there are nuances, how do you chunk the text, which embedding model do you use, all of that. But that's the gist of it. Then at the deliver time, you again use embedding models, and you embed the question that the user asks, and try to find the most similar documents that the user asked. And again, as the part of the retrieval augmented generation, you feed the documents to an RLM and then the RLM generates the final answer. So that's kind of the hello world of Rlan's and what you see in most examples. But since documents are basically unstructured text and then text embeddings is a way to find similar or relevant documents which are in unstructured text. That's all really nice. But there's a whole field and this is really relevant for life sciences or biomedicine. If there's a ton of structured data available. For example, a typical biomedical knowledge graph has a bunch of structured information available, right? So you can have genes, compounds, diseases. I just copied this from, I don't know, from a Neo4j page, but this is mostly what we kind of see as a typical biomedical knowledge graph. So a lot of structured information and coming from various sources. So and one thing how I see basically biomedical knowledge graphs are being constructed. So you have a bunch of existing biomedical ontologies that you can use to incorporate kind of existing expert curated data into your knowledge graphs. But then on the other hand, like biomedicine, before LLMs, I think, from my observations, is that biomedicine was kind of the driver behind NLP and constructing knowledge graphs from text even before LLMs. Because if you look at the history, historically, there are a bunch of named entity recognition models and the relationship extractions, and it's like academically backed projects. And then you have a bunch of here, I added to, I don't know if you're familiar, but it's basically a very sophisticated entity linking model, which can basically take the text and link entities in that text to existing ontologies. And then there's the Indra. I think it used to be from Harvard, but now they moved to a different college. And they do similar, but they also do relationship extraction. And so the field of data mining and extracting information from text is really strong in biomedicine. So all of that together allows you to create really sophisticated knowledge graphs which contain mostly structured information. And now that you have this structured information in the knowledge graph, the idea is to use it in a reg pipeline. So when I tuned in, I saw the presentation, they talked about text to cipher. So basically generating cipher statements using RLM, where you provide RLM with existing knowledge graph schema, and ask it to generate a corresponding cipher statement that returns the information that can answer the question. So here I have like one example that Alex actually gave me. So it's for finding clinically relevant variants of a gene. And as you can see, it's like not really advanced, but it's not the simplest retrieval query either. Right. So, and fortunately, my standing at the moment is that generating cipher statements with with llms is just not good enough as also the previous presentation stated it's good for like simple retrieval queries but when you get into more advanced ones or like even there's this problem of mapping like user there's this problem of mapping user questions to existing schema. Even if the question is simple, but the user is not aware of the information in the graph, the mapping from the question to the schema can be not straightforward. So there are a lot of problems in generating cipher statements with LLMs. But we still want to have REC using biomedical knowledge graphs to answer questions. So what's the solution? So the solution is, in my opinion, is to use basically LLM-Suite tools. Some of them call LLM-Suite tools agents, and I also like to use the word agents. So instead of providing an LLM with graph schema and optionally few short examples and asking it to generate the cipher statements, we provide LLM with a couple of tools it can use to interact with the knowledge graph. And then we also give it instructions, when to use those tools and how to use those tools. So we kind of, as you will see, we lower the complexity of the task given to an LLM and we make it more robust and deterministic. And the whole idea is also that instead of generating cipher statements with RLMs, we use predefined templates. And I will show you that as well. So here is an example agent I developed in collaboration with Alex so we have a user asking a question and then the LLM has three tools it can decide to use so it can if the user is asking about associations of food to diseases, it can use the associated food tool and that has a predefined cipher template, right? Or if there are protein expressions in tissues. And by the way, I might be saying something completely stupid from like a medical perspective because I'm not an MD but that's just my understanding of what Alex provided me and then the last one is the last tool is finding clinically relevant gene variants right relevant gene variants. And basically we provide those tools to an LLM as functions. I don't know if you're familiar with LLMs, but most of commercial LLMs nowadays provide native function calling and some open source models now also followed perspective where they support function calling natively. And so how do you define a tool? You can give it a pretty cipher template. So for example, this um a cipher template to find these like proteins that are associated with diseases and then those protein proteins uh basically which tissues are they associated with so alex and alex told me that that's basically proteins expressed in tissues. So basically, yeah, let's just put it that I don't want to explain too much because I don't really know. But basically the idea is we can provide a template with parameters, so that could be one or more. we just have one parameter so you can see that this is so basically when an s when a user will ask a question about proteins expressed in tissues for like cancer the lna will populate the disease parameter with cancer and then it will return the results. Now obviously this in practice this cipher would be much more advanced because there's like a whole lot of ranking and sorting and prioritizing of results that you should do because this simple query will return a lot of results so here I just put a limit 10 to not over the LLM. But in practice, the Cypher template could be much more involved. So this is basically defining the Cypher template, and then we have to inform the LLM when and how to use this tool. So, for example, here we have disease tissue input, and we say, okay, disease is basically just the disease mentioned in the question. And then we inform the LLM, you can use this tool when you need to find tissues where proteins with a specific disease are expressed. Now again, this is my interpretations of what Alex provided me, so it might be completely wrong, but that's not the most important thing here. And let's show you, oops, a demo. So hopefully if I can... Copies I created a simple GitHub repository that you can see basically here that you can see is basically just on my GitHub at clinical agent and all the information. But basically if I ask it a question, now hopefully it will work, But if I ask it what are associated foods with cancer, it says, OK, I am going to use the associated foods tool. And I'm going to populate the disease parameter with cancer. And then here we don't really see what we get out of the database, but we do see the interpretation by the LLM of the results, right? So for example, chocolate spread is associated with colorectal and lung cancer due to metabolite betaine. Because in our code, I can show you basically just a second. But for the associated food, let me zoom in a little bit. we return the food, but also the metabolite that's associated with the disease. So that's why we can see that the LLM knows the associations between diseases and foods, but also to do with metabolites. Then if I ask it a question that requires a different tool, you can see that now, okay, because we're asking for gene variants, I'm going to use a tool gene variant and I'm going to populate the parameters correctly. And as I mentioned, there could be one or multiple parameters that Dial-LM can populate so the Cypher templates can be much more involved and more advanced. But this is basically just a simple presentation or simple implementation I did in one day. And as you can see again again, the LLM then, I'm not even going to try to explain the results. But there is like the relevance that we get from the database, which disease, right? And then the variant ID. And with this approach, right, because we are not generating cipher statements, I am much more comfortable giving demos or using that in production, because LLM is not tasked with generating a query like this, which is not really complex, but still, like TextoCypher, I don't think it would get it all the time. Basically, we reduce the task complexity of an LLM, from generating such a Cypher statement statement to basically just deciding which tool to use and populating parameters. So the complexity goes way down and that in turn means that we get much more robust process, or the workflow, and more deterministic as well, right? Because when you're generating text-to-cypher, it's not a given that for the same question, you will always get the same text-to-cypher. But here, like, you will always, like, for the same parameter, you will always get the same results, right? So that's definitely an improvement. And then let's just see the protein tissue example just to show it all now. So there's the last one. And then here's just a bunch of tissues. Let's see. Yeah, it's expressed in various tissues. So this is how the LLM interpreted the results. So again, you can give more context to the LLM, how it should interpret results. And you can also like deterministically define what types of results that you want to give back to the LLM, right? So it makes the process much more robust and deterministic. And one thing I haven't tested, but the nice thing about this approach is also that it's scoped down so now if i ask it we do you have any drag think um i say candidates for aspirin right let. Let's see what happens. I haven't tested it out, but hopefully we say, okay, I don't know how to do this. Right. Ah, okay. But you could fine tune it. But the idea is that it says I don't have the capacity. Right. So it basically says, I don't know how to do this and this is really nice because we want the lm to have like some sort of scope uh so that it doesn't answer just any question how it feels like. And you also then know what information came from the knowledge graph. And here basically it's also really evident that this information came from like internal knowledge of an LLM. And in this case, we could also prompt engineer it a little bit to just say, I don't know, and leave it at that but you know because uh lms are trained for um to be i say what's the correct word to always kind of provide helpful answer it's still uh provided the answer so i see somebody writing something. Ah, he just left. Okay, fine. So that's basically it for my presentation. It was really short and condensed. But the main takeaway, if anything, the main takeaway is that instead of generating Cypher statements, what you want to do is to have here a call it semantic layer, but some people don't like that name. But basically, you just give an LLM a set of tools, like with predefined cipher templates so you narrow or lower the complexity like by a couple of levels and in turn you get much more robust and deterministic results which is basically what you want to have because if you ask the same question every day of the week and the data doesn't change in the knowledge if you want the same results so yeah and now we can do a couple of questions i see we have some time and yeah cool thank you very much too much great presentation as always especially the demo part is always interesting um like to open up to the room any any questions uh from from the audience here anything uh maybe you you had in mind i mean you you said a couple times you you don't know what you're talking about so i think you don't know what you're talking about so i i mean the biomedical domain right yeah yeah and i am they could say okay proteins are not a great issue or something with that there's one question here uh hi uh thank you for the presentation i was curious so how do you think this uh approach with semantic layer and defining tools scales uh because you showed like three very specific kind of tools uh and i imagine like if you would want to expand the vocabulary of the tools like to hundreds of course it will also increase the context size and I don't know maybe LM might become confused what do you think about it yes that's a good question because like you don't want to have hundreds of tools, right? You kind of want to have, I don't really know, but my gut feeling is like 10 to 20 tools at most for the LLM. But the thing with returning information from knowledge graphs is that it's not like basically with vector retrieval, let's say the most use cases return like 2000 to 3000 tokens of context and then ask the LLM to interpret that those results right so i think for the most part lms are quite good at handling contacts up to let's say five to ten thousand tokens and let's just go with the lower side which is five thousand tokens so imagine all the information you can uh information you can uh congest into 5 000 tokens if you were to build like these really general uh tools right because my idea or recommendations would be like you could build like I don't know what's the correct word but like model descriptions so like each gene you could like for each gene you could have like 2000 tokens of information and then you would have a tool that just identifies information for a specific gene so I don't really think there is a problem with over-development information because you have like a couple thousand tokens available to congest the information that you want. available to congest the information that you want to learn so hopefully i mean i i can't it's a convoluted answer but hopefully it answers your question cool thank you thank you too much one question from the internet can you dig in from the back by the way can you dig in a bit more on the semantic layer um no more context people the better i don't know if that helps um can you say glare maybe yeah i mean like it's all about instead of having one text-to-cypher tool that does everything, like here is a good example of an overwhelming LLM, right? You have this very complex biomedical knowledge graph and you just have like one entry point and it can be very overwhelming for the LLM because like even schema description could be a couple thousand tokens and then if you add a few short examples and additional instructions it could be like a lot of tokens and it can be quite overwhelming for an llm to generate quality uh ciphers so instead of that you how you say scope down and instead of having one entity point to your knowledge graph you can have like 10 20 uh entry points right like if you can uh i mean it's not like linearly but imagine if like instead of having one tool that does hundred percent of the world you have 20 tools that each does five percent of the work and then it's much easier to optimize each tool for those five percent right because it's a much lower scope of task and complexity cool thank you and then we have one more question so maybe maybe we take that on before we we close the the q a section here from david uh and he wants to know do you have the in of an llm plus the agent the graph data science are developed in we have in mind the integration of an agent with the graph data science with a graphic science algo i guess i don't know exactly what you're asking but if you're asking if an agent could be executing graph algorithms uh i think that's that's a question i think that's a question okay so that's generally advisable because graph algorithms like usually are graph global operations right so it can take it's not like on the scale of milliseconds and seconds that the execution takes. It's more like minutes or more, right? And you don't want the, for like this graph global operations, you don't want the agents to kind of decide this 30-minute operation and see what we get, right? So that's not a wise. Obviously, you could do that, but I would advise against it. It's more about running those algorithms like in a batch job nightly or something, and then just having the agent interpret the results. You run the graph algorithm overnight or something. So, the results go back to the database and then you have those for an agent to interpret. Now, obviously, if you have some algorithms that take a very Obviously, if you have some algorithms that take a very small amount of time, you could integrate them into the agent as a tool, but I don't have any examples. Again, you just provide a Cypher template, which is a graph algorithm. You have a couple of parameters that you want. You just have source nodes. You feed those source nodes as parameters to the Cypher template and there is your preset for the mouse part. That's not something you want to be doing so may i take a final question thomas do you see in in your experience with customers and and you know in all the work that you do ratio of notes to relationships where I don't know the graph is too tightly packed too much connected is that kind of limit with the LLM is that it is that okay yeah I mean, let's just see what happens. But like I said, the problem is biomedical knowledge gas, the problem is too much data. How do you return as much as little data, but still enough to get correct answers? And I mentioned you should be doing something like thing, ranking, prioritizing of results. So forever, probably, because there's like the graphs are very good. So it's more about graphs, like good medical domain. You see the numbers. It's too big to even for me. It's too big for me. So yeah, this is a problem. You don't want to overrun the error. You have to have some nice sorting priority systems in place so that you only let's say right and right. Some that's not mentioned, but you could also provide like a text answer, but also graph visualizations alongside as well for better understanding of the data. Thomas, thank you very much. Thank you very much, Thomas. Yeah, thank you for having me. With that, we're at the end of this year's Life Science Workshop. Thank you very much for tuning in today. Thank you for coming over on site here in Munich for the event. As I mentioned before before uh notes call for papers is is coming up so if you wanna submit something do that if you wanna know more check out the community pages check out neo4j.com come to the um come to the forum on discord obviously we'd like to know your feedback what you thought of the today event what you liked but you didn't like improvements comments you can send us emails you can write us you know messages in the forum you can comment on the youtube video obviously as well let us know what you what you thought of the day give us give us your thoughts give us your feedback we will be Give us your thoughts, give us your feedback. We will be doing another session surely again sometime. Absolutely. Yeah. Thanks. Continuing with the life science community. So that's crucial. We regularly host live streams on YouTube and on LinkedIn. So if you want to watch more content on that um that's always something to subscribe and to look out for but yeah in terms of life sciences we will do something and else again um and yeah i was i was glad and super delightful super delighted that you were all coming and that we had a nice event with super interesting presentations super content from from our friends and from the community and from customers and questions for your precious time for all the ones who watchers on the channels and thanks to all the presenters. I really appreciate the success of us. If you are successful, we are So, thanks a lot and yeah, hope to have you at the. Bye bye. | Neo4j LifeScience Workshop 2024 | 18,032 | Neo4j | 20240612 | Welcome to the Neo4j Health Care & Life Sciences Workshop 2024!
Join us for an immersive exploration into the intersection of cutting-edge technologies and the life sciences domain. This year, we're thrilled to introduce GenAI and Knowledge Graphs as exciting additions to our event, promising innovative solutions to the challenges faced in data management and analysis.
Our workshop serves as a dynamic platform for professionals, researchers, and industry leaders to come together, share insights, and collaborate on pioneering approaches in leveraging graph databases and advanced technologies. From deciphering complex datasets to unlocking actionable insights, we'll delve into the latest trends, best practices, and practical solutions shaping the future of the life sciences industry.
Sessions:
Opening: Graphs and GenAI in Life Sciences - Dr. Alexander Jarasch; LifeScience Expert - Neo4j
BioCypher/BioChatter: An Ecosystem for Connecting Knowledge Graphs and LLMs - Dr. Sebastian Lobentanzer; Heidelberg University Hospital
Harnessing the Power of NLP and Knowledge Graphs for Opioid Research - Gursev Pirge, PhD; Senior Data Scientist - JohnSnowLabs
Leveraging the Graph for Clinical Trials and Standards - Katja Glaß; OpenStudyBuilder Community Manager - Katja Glaß Consulting & Marius Conjeaud; Principal Consultant - Neo4j
Biomedical Knowledge Graphs for Data Scientists and Bioinformaticians - Dmitrii Kamaev, PhD; Senior Product Owner - QIAGEN
How Knowledge Graphs and Generative AI Revolutionise Biopharma and Life Sciences - Antonio Fabregat; Knowledge Graph Lead - AstraZeneca
Unlocking Glycoscience with Knowledge Graphs - Matthew Campbell; Manager, Bioinformatics - InterVenn Biosciences
GraphRAG for LifeSciences Hands-On with the Clinical Knowledge Graph - Tomaz Bratanic; Graph ML and GenAI Expert - Neo4j
#neo4j #graphdatabase #lifescience #healthcare #pharma #graphrag #rag #genai #bioinformatics #knowledgegraph | 2024-08-14T09:41:50.890020 |
https://www.youtube.com/watch?v=rp_MxJXvK60 | Hello, good morning, good evening, good afternoon, and thank you for joining this Neo4j Live. Hi, Ben. Welcome to you as well. Thank you for joining. Hi. So, yeah, this Neo4j Live is about CROSR, Knowledge Graph Based Drug Discovery. It's what it's called. So I'm very happy to have ben ben vossa here with me today who is the co-founder of of crosser which is a and i guess you will give us a few uh a few in a few moments you give us some overview of what it is so i'll summarize briefly in my words and then you use the better words i guess so it's uh it's uh yeah i think you you found it across a few years ago. So it's a smallish startup out of the UK on drug discovery by using Knowledge Graph to make the process better, to make it quicker for companies to develop new drugs and new medicine, I guess, for all kinds of diseases to help the long-winded process, which usually takes lots and lots of time to make it just a little bit quicker to be fast on the market, which obviously helps companies sell more in the short term, but also obviously helps everybody, you and me, in the short term, but also obviously helps everybody, you and me, if we get, you know, well working or the right medicine for the right treatments, the right treatments for the right applications. And the illnesses faster is obviously a plus for everybody. So I'm looking forward to this, especially because it's a little of a pet peeve project, or not project for me, but interest, I would say. Since COVID, I'm involved in a few things in the life sciences and healthcare industry around with graphs. So particularly, this is a nice project as well. So enough rambling from me. Ben, again, thank you for joining. Yeah, how's it going? Yeah, good. So yeah, thank you for having me on the E4J Live. I've seen a few of these before and I've really enjoyed them. Dr. AI, there's one recently on a chemistry library that was also really good. Yeah so I'll start by maybe talking a little bit through what CROSSA is and kind of the wider landscape of biomedical knowledge graphs. Well, Ben gets the screen sharing ready. This is obviously, again, as usual, is an open invitation. So we try to make this interactive and as lively as possible. So if you are watching this live, then please use the chat, chime in, tell us what you think, tell us your, maybe your experiences, maybe you're working in healthcare life sciences and are in the process of research. Maybe you are in the pharma industry and are in the process of actually, you know, working on a new medicine for some treatment and you're interested in this or you just are an interested in potty, so like myself I guess let us know, let us hear what your thoughts are and keep it interactive. Yeah, please do. I'll start with a quote that so across at the moment two co-founders, me and my co-founder, Jamie. My background is mostly computational. So I have a background in software engineering, data science. I worked at some consumer tech companies like Deliveroo. So if you're in a country that's served by deliver and you know you've ordered takeaway there then you know i've helped you out there um and uh and um yes and my co-founder jamie um the the ceo of crosser um has a background in uh biology and uh kind of the pharmaceutical industry and he recently sent me this quote that I think well sums up how knowledge graphs are incredibly relevant in life sciences and computational biology in general the quote is from Florian Markovets and it's computational biology is providing a reference map for biology an atlas of life that holds together individual insights and for me and Jamie at Crosser this was yeah this this quote and also this paper so I think he did a paper the paper is all biology is computational biology. And that's kind of a thesis that across that we believe to be true. And the idea is that all these little pieces of biology kind of all come together to build up this atlas of life or an atlas of the cell that really describes all of life together. And having that wider context of the entire cell and all these individual insights together can kind of help us move forward with medicines and getting therapeutics faster to the patient. So for us, that was a very powerful quote. And you can imagine that if you're looking at computational approach for building an Atlas of Life, the best way to model that is through a knowledge graph. And yeah, the leading solution for knowledge graphs and kind of graph-based infrastructure is Neo4j. So I'll move on to the next slide. the leading solution for knowledge graphs and kind of graph-based infrastructure is Neo4j. So let me go to the next slide. And when I'm talking about knowledge graphs in computational biology, I'm talking about a biomedical knowledge graph. And the primary kind of node types of these knowledge graphs are gene, disease, and so you know you have a drug that affects a gene um so that will be the drug will up regulate the gene or it will down regulate the gene um and that gene will have an association to the disease uh so you know there'll be some kind of some kind of genetic variation in that gene will have like a relationship to the disease. So by taking a drug and up or down regulating a gene, you're then having hopefully a positive outcome for patients with that disease. And a kind of wider data model for that is on the right. So this is a biomedical knowledge graph by this open source. It's called HeteoNet. It was released in 2017, and it was, in my opinion, like way ahead of their time. So they released a knowledge graph called Helionet that you can, yeah, I recommend anyone that's interested in biomarker knowledge graphs, this is kind of the one to definitely explore because it kind of starts with these core nodes of gene disease and compound, includes some important functional annotations. so on the gene you have which pathways of this gene in molecular function other go processes uh some other node types around compound uh side effect pharmacological class how the compound uh you know treats or palliates the disease and symptoms anatomy and this this by itself represents a pretty good base layer of schema for a knowledge graph in the biomedical domain so yeah i'd absolutely recommend anyone that's interested in biomedical knowledge graphs this is a great place to start because it's at the right level of information versus simplicity to to do some to do some analytics on knowledge graphs and play around with that. And kind of what this database is built out of is you can see here that HeteoNet combines information from 29 public databases. So that's genes that are from Entrez gene, there may be some information here from Ensembl as well. Compounds from drug banks, disease from disease nodes coming from disease ontology, and I'm sure the pathways is another important kind of biological process from wiki pathways and reactome and the kind of pathway interaction database that kind of feeds into this knowledge graph. And this was released in 2017 and is available for, they actually have a public Neo4j instance that if anyone wants to try out Neo4j in a public domain, If anyone wants to try out Neo4j in a public domain, then they can, here I'm just showing an example, but, so if anyone wants to try out a Neo4j graph database in the biomedical domain, then HetioNet has a publicly available database. You can query cipher the domain language for Neo4j. And here I'm gonna just run a really quick query for let's say, show me all the diseases that have a relationship to the gene APOE. And you can see here that there are various diseases that are related to APOE that may be interesting to you. So yeah, if you're interested in just playing around with biomedical knowledge graph in Neo4j, then HetioNet has a public instance that's definitely a good place to start. Yeah, that's a good point. That's interesting to know for people. I think there's very, I don't know, because you said 2017, do you think this is, are people still working on this or is this more like a project at the time? Yeah, I think this was a project at the time. There are a lot of initiatives that are carrying on at the moment that are more recently updated. which is a gene disease, so if I search for APOE here, for example, gene disease, a kind of relationship database that allows you to kind of look at the profile of genes. Are they related to diseases? Are they, what pathways they exist in all that kind of stuff um but not in a knowledge graph so i think one thing we have found at crosser is that uh there are that there are a lot of academic initiatives such as helionet um that uh build out knowledge graphs um but without the right, but, but, you know, the right funding is required to continually update those graphs, which is why you see a certain, you see like pharmaceutical companies or biotechs that, you know, have financial incentives. So they're able to kind of continually update those knowledge graphs and keep them up to date. But yeah, as of 2017, I'd say that HeteoNet was a very good example. I think another good example of one that's a bit more up to date. But as of 2017, I'd say that HeteoNet was a very good example. I think another good example of one that's a bit more up to date is the clinical knowledge graph. This is open source as well. This was, I think, about a year ago. This is up to date as of about a year ago. A bit of a larger knowledge graph. So compared to Heteo net um kind of more entities so where hetio net had a schema of kind of gene compound disease in the surrounding edges uh the clinical knowledge graph had um that their scheme is a bit more bit wider so they had uh stuff like uh they've split up gene and protein and they included tissue and possibly cell type and some other entities that made it a bit more of like a wider knowledge graph yeah okay so yeah so all these entities build up a biomedical knowledge graph that can help in early stage drug discovery. But at Crosser, we found that there was an opportunity for an enterprise solution for an operating system that assisted with knowledge graph based drug discovery. that assisted with knowledge graph-based drug discovery. By that I mean a platform that has workflows for target identification, target validation, drug repurposing, with specific tasks such as functional gene annotation, identifying which module genes exist in, and analyzing the results of experiments such as RNA-seq and CRISPR screen hits, which we can go through an example in the platform of how that would work and how Neo4j helps us complete that task. That's good. Yes, that's good. So this is the Crosser platform. Yeah. I don't know. Do you need maybe zoom in a little bit? I'm on the big screen. Yeah, yeah, yeah. Yeah, that's better. That's cool. Thank you. Yeah, so this is the Crosser platform. And what we will look at now is a workflow for, just an example workflow for target identification following a recent paper or a high-impact paper that came out in 2018 that provided the system several framework for drug discovery with a focus on epilepsy so this paper came out in 2018 it was it was by the division of brain Sciences at Imperial and some other colleagues and helped to identify a novel target for epilepsy. And as an example, we'll go through how you could use Crosser to speed up this process and perform the same workflow that was done inside the paper. perform the same workflow that was done inside the paper, using our cross-platform. So leveraging software, leveraging Neo4j to kind of do the same workflow, but faster. So, what I mentioned is, we'll start by importing the dataset that was used in CRAFT. CRAFT being the analytical framework for target discovery from this paper. And one of the first things that they did note is that they had an interest in focusing on the cell type microglia. There's a couple of reasons why you might want to initially focus on a cell type. One would be commercial reasons. So if you have a pharmaceutical partner, then they may be interested in that cell type. Also, it may be easier to target genes that exist in that cell type with a drug. So we're going to focus on a module of genes from the craft paper that exist in the cell type microglia. So I will start by importing that module into the CROSA platform. So this is the CROSA canvas. And once I import the modules, this is a module from the CRAF dataset, which is a set of differentially expressed genes in the disease state of epilepsy that also enriched for the cell type microglia. And we can add those to the canvas. And in the paper, this was a this was a module of genes that they decided to focus on for the for identifying drug targets. You can see here, so we've imported differential expressed genes into the canvas. You can see here, here we have a network of genes that has built up an interaction network here. Probably one of the first things I'll do is genes that, as we're trying to look at kind of knowledge graphs and a network of genes, I'll get rid of the genes that don't have a network associated to them and then start here. So the different colors of genes you're looking at now represent how differentially expressed those genes were in the disease state. So, for example, CD22 here is a very deep red, which means that it was highly upregulated in the disease state, was highly upregulated in the disease state, which means it had a much higher expression in samples with epilepsy versus a healthy sample. And ZAP70, for example, here was downregulated in the disease state, so had a much, so had a lower expression level in the in the disease sample versus healthy state in mice which I think this experiment data set was taken out of mice okay yeah yeah usually it's mice I think yeah and and yeah there's a whole uh thing there um yeah one one quick uh sidebar and that is um there's been a very so so on the animal model versus uh human uh model and a discussion around uh the benefits of knowledge graphs and computational biology is that there was a recent um there was a recent bill passed in the u.s that required that meant the fda no longer require animal testing uh for uh drug for drugs to be approved by the fda ah okay yes so this is very, very interesting for knowledge graphs because they've added a new section that means that in place of animal models, you're allowed to have non-clinical tests, which includes cell-based assays, microphysical systems such as organ-on-chips, which is where you take some part of an organ and then you kind of put it on silicon and you're able to create an environment there. Or a computer model, where that computer model could be a knowledge graph that represents the cell in a disease state. So even on the regulatory front, there is form of progress towards computational models of disease. Yeah, I think that's very interesting because then you can, with the help of these models of a knowledge graph, you can find maybe an application that hasn't been used before for already approved, let's say, medicine. I could imagine. And then you can say, hey, you use this for, let's say, I don't know, fever, but you can also use this for, I don't know, headache and you don't have to do the testing anymore. Is that what it is? Yeah, I think it's a couple of things. I think first of all, ethically, it's not having to do animal testing ethically is a great benefit because it doesn't require mice or any other animal to be used in that process. Also an efficiency thing, it's a lot easier to build, it's a lot faster to build a computational model using a product like Neo4j of the cell and in different disease states, if you have the data rather than looking at looking at animal animal models so between those two things of kind of speed and ethical considerations yeah this I think this is a great a great step forward in kind of accelerating a better version of discovery absolutely yeah yeah the less animals are needed, the better it is. That's really good. Yeah. Yeah. A couple of comments here on this. Mark writes, can drive real efforts in drug repurposing. Yeah, I think that's true. Especially there when everything is already approved. I mean, maybe completely new discoveries is probably maybe a little, I don't know. I'm not really a fan of animal testing, but I see their point in some regards. But if something is already approved and you just need to step out of the originally approved path sideways a little bit, then I think it's totally fine. That saves a lot of time a lot of um a lot of um you know any animal pain as well so i think that's that's only wins here john says extends the possibilities of a digital twin work uh i think yes that's that's probably true what do you think about digital twin is there something yeah so across we we think a lot about the concept of a digital twin um you know we have a forward vision of our platform becoming uh you know a digital twin of the cell um and and this this act is is making digital twins more valuable um in meaning that digital twins where you can say where you can interactively for example you know model model a cell and the disease state and say you know if i up or down regulate this gene one of the downstream effects is it moving towards reversing the disease state? And that evidence is now kind of more widely acceptable because of the Modernization Act in helping pharma or biotech companies move forward with, move their drugs forward towards human clinical trials. Yep. forward towards human clinical trials. Yep. Yeah, so moving back across the platform. So yeah, so here you're seeing a network of differential expressed genes in the epilepsy disease state. And differential expressed genes are interesting because if they've changed the disease state there's a hope that if you can revert their expression to normal levels then you're you know reversing the the disease state. There's a lot of there's a lot of conversation around this about, you know, are the most differential expressed genes important? Or, you know, do you just want to find the causal driver of the disease state that's causing this differential expression and then affect that? But investigating the differential expressed genes is a good start. genes is a good start. And what we're going to do is focus on trying to identify small clusters inside this gene set that have been different expressed in epilepsy to potentially identify a target. So the first thing we will do is using the Neo4j graph data science library, which provides some is using the Neo4j Graph Data Science Library, which provides some great analytical features, one being Louvain clustering with community detection and some others around embeddings that potentially we can talk about later if we have time. Yep. We can identify the, we can talk about later if we have time. We can identify the, we can very quickly identify the modules of genes inside this network. So I'm gonna use the cross platform to run clustering. And here using Neo4j's GDS library, Using Neo4j's GDS library, we've identified five clusters of genes, five functional modules of genes in our network. the connections between these genes are interactions, which means that the proteins are functionally interacting and having an effect on each other in a functional environment. So the next thing to do is trying to identify which module you'd find interesting to interrogate. So the paper that we're following, CRAFT, takes the approach of identifying which ones are important based on the enrichment of certain gene ontology processes and pathways in those modules. And one example function that's important is immune response. So having this before, we can now try and identify which biological processes are prevalent in the modules. So what I've just done there is I've just kind of ran enrichment analysis for all these different types of function annotation for the modules and I can look at which ones I think are interesting to investigate and that will vary depending on your hypothesis and yourself as a scientist. So I've just expanded it here just so I can see it a little bit bigger. And this was done. We benefit a lot from Neo4j here. Having these functional entities such as disease, molecular pathway, biological process, cellular component in our knowledge graph and being able to use Neo4j to easily query those relationships. So that'll just do this in a fast way that might require a few steps in a Python or R script without this kind of tool. Yeah. And the module that I want to focus on is this pink one, just cluster two. So there I've just ran enrichment analysis for A, all the nodes separately, but also the individual clusters to try and identify which cluster has some interesting findings. Okay, yep. And it is inside cluster two, which is this pink highlighted cluster here, that we find a couple interesting processes that, as a scientist, I may be interested in investigating further so this would be so for me looking this I look at inflammatory response and inside yeah so inside the original paper one of the things they focus on is immune response processes per module. So if I identify the immune response, you see that this cluster has genes that are associated with inflammatory response and the regulation of immune response in red and yellow, respectively. Yeah. So as a scientist, that might lead me to believe that that module is interesting for one to investigate as a potential therapeutic intervention for epilepsy based on these biological processes and the fact that these genes are already found to be highly enriched for microglia as a cell type. Okay, yeah. So if I wanna focus down, so I've kind of selected this cluster so I can get rid of the initial results and I can focus here on this. So yeah, so it's going to focus down on this set of genes here. And I can quickly see if there's any links between any of the targets I've selected and epilepsy, you can see there's a few. There's a few already known links between epilepsy and the genes that I've selected. And because we're looking at epilepsy, another potentially interesting phenotype is seizures. So we can add that to the graph as well. I can see there's one gene inside this module that's also interesting for seizures, which partly validates the selection of this module to look at further. The relationships, I mentioned the relationships between these disease and phenotypes and the targets inside this knowledge graph is based off evidence that comes out of open targets. And that varies in what the evidence is. So just quickly, here's an example of the open targets evidence that's used to associate a gene to a disease. This is for a widely known association between APP and Alzheimer's disease. And you can see here that there's known genetic variations in APP is associated with patients that have Alzheimer's disease, and that can kind of increase the confidence in knowing that there's a relation between a specific disease and a gene. But also, there's a lot of other factors they consider, for example, text mining. So if there's recent papers that associate epilepsy with CCLF3 or seizures with ITGACs, then that'd be considered in the graph to create these edges. So yeah, so that's, we've just done that quickly to validate that there's some relationship between this cluster and epilepsy or seizures. So we can remove those genes now. And now we have a module of genes that are interesting to epilepsy. We can continue following the CRAF framework to identify a potential drug target for epilepsy. And the way that they move forward in doing that is after they've identified a module of interest, it is to try and identify upstream regulators that have an impact on the module as a whole as potential drug targets. So you're looking for genes that regulate the genes that are in this module and try and reverse them from the disease state back to the healthy state. And in CROSA we would do that using the master regulator analysis type. So here we're looking at this module. So if I run master regular analysis, I've now identified a little bit, now identified potential targets that are upstream of the genes that are inside that module. One of which that's been identified is CSF1R, which I can visualize here, which is upstream of these targets, SLC11A1 and ITUGB2, as a potential therapeutic target to as a potential therapeutic target to intervene with that module. This target was the one that was identified by the original paper as a potentially interesting target as an anti-epidemic drug target. Okay. For a couple of reasons. For a couple of reasons. One, because it regulates multiple targets inside the module. And also because it's a receptor, that R at the end means here, which means it sits on the outside of the cell and it makes an easy drug target to manipulate with therapeutic. Yeah, interesting. So, just so I understand it correctly, so we had this module, we had identified in this module that this is the genes in this, yeah, exactly, in this group of genes here is most likely, at least partly responsible for epilepsy and seizures. And now we have looked for potential genes or ways of how can we influence this group of genes in a way to make people healthy again if they are affected. And this one we identified is particularly good because it doesn't target only one, but it targets a couple in this group. So, yeah, so this module of genes are genes that were, So this module of genes are genes that were, you see they're all kind of red or shading towards red, which means they were all partly upregulated in the disease state, which means they were more expressed in the disease state than the healthy state. And they have a close interaction network between them, which means they represent a module of genes that are potentially functionally related because they interact with each other. So having that information, then we're looking to try and understand how we can reverse the differential expression of these genes to revert back to a healthy state uh so yeah we look for uh what genes um are upstream of of of that module um to identify which ones are potential drug targets um to to provide like a therapeutic intervention. And CSF1R is a good candidate because in our database, it's been identified to have an impact on two of the genes. Potentially that's more in other databases. It really depends on what data you have. And this is an open source dataset we're using for the regular network. But if you use kind of commercial data sets, they may have more information on which targets they affect. Yeah. And it flows with the paper, which does identify CSF1R out of the selection of upstream regulators, mostly because it is a receptor, which means it sits on the kind of outside the cell. And it's an easier drug target than other genes. Yeah, OK. That makes sense. You want to find something that is a good vehicle for the drug and is easy to get to, but has an impact on as much as possible. The question from Heather, the JAK2 has a blue relationship on itself. Can you explain what that means? Yes. means? Yes, so that would mean that JAK2 is known to downregulate itself. It's a phenomenon in biology that I'm not, as a scientist can explain well, but certain genes have a certain effect on their own expression. So some genes when expressed have an effect on the expression of when that transcribes to a protein and it can reduce or increase the expression of that protein. So it's a self loop of a negative self loop. I see. Okay. Thank you. There were other questions, but we can... Let's continue and two questions at the end. Yeah. So yeah, that is the demo that looks to replicate or at least like explore the findings of that paper. And as you saw, there were other targets in that list that might actually be interesting, separate to CSF1R. But if you were interested in CSF1R, then you could benefit from a knowledge graph in further exploration. So, it's going to be the wrong gene there. So this was the target that the paper identified and through upstream regulatory analysis we also identified as a potential target for that module and we're looking at epilepsy. So you can also benefit from knowledge graphs in exploring the kind of functional annotation of that target and the neighborhood. So here I can easily say if I was interested in this target, CSF1R, this target CSF1R, what pathways does it exist in? Transcriptual regulation by Bentex and other interleukin signaling, which means that might help build your hypothesis of which pathways you're interested in looking at for the kind of disease you're looking at. And you can go further. You can look at which drugs are currently known to impact that gene. And you can see there's already a few, there's already quite a few drugs that are known to have an effect on that gene. And you can see there's already quite a few drugs that are known to have an effect on that gene. I think the one that they tested in a preclinical model here was, I think it was, so this one is PLX7486. I think they had, I think they had, so this paper did move forward with a preclinical model of epilepsy to validate CSF, one eye inhibition and they used PLX3397 as a command to do that. I think the way they select that compound was it was a compound that they had available to them that they knew had an effect on that target. But you could also use a knowledge graph to explore other compounds that have an effect on that target and look at um if you you know had a compound of interest you could you could also look at um what other uh what other genes that compound uh has an effect on um so you know if if uh what potential side effects there are of of uh using that compound for that target, because drugs are known to be messy and often have an effect on various targets and not just the one that you're interested in. You have to consider that in your development cycle. Yeah, so that completes the example, like target identification of our platform. So that CSF1R was moved, was validated in a preclinical model, but would obviously need human trials before it was identified as an actual target of interest. Yeah, of course. Yeah. identified as an actual target of interest. Yeah, of course. Yeah. There was an addition to the Jack2 question. Maybe we can quickly slide it in. Sebastian gave some insights and he said, it's indirect effect mainly through stat and SOX. So these are downstream from Jack, he says. So I think that's just like the way it comes to be. So thank you for that, Sebastian. Yeah, yeah. I also, yeah, I mean, that's interesting. Yeah, I also saw that stat one and stat three are potential master regulators for these modules as well. So yeah, interesting that they've found that insight as well. So that's kind of the demonstration of a platform. And the other things that, I i mean if there are more questions i'm happy to answer that yeah yeah yeah yeah well there is uh for sebastian actually had a question or at least a comment uh the he says i think the fda likes to see at least dynamical models they'll both back back but i didn't want to interrupt the back from for the fda um models. They'll both back, but I didn't want to interrupt the back from, from the FDA part we talked about. Ideally, so sorry, I start again. So the FDA likes to see at least dynamical models, ideally parts of ones that have previously been approved. So I guess that makes sense for a new, your new drug. You don't want to go all in with only your let's let's say your ai advisor told you hey this is great let's use it and then it's it's approved i guess this does not work does the knowledge graph connect to relevant models uh for example bio models or something else so to to you know add in some outside information as well. Yeah, I think it really depends on what data you have available to you because I think knowledge graphs are incredibly powerful and I recommend anyone that is interesting to, you really realize the power of a knowledge graph when you're using them regularly because you realize how much data is very quickly available to you. Is Alex still there? Yep. Sorry. Yeah, no worries. Yeah, so you very quickly realize how much data you have available to you, depending on what you want to do. So what I've experienced is i mean you can see there in in just looking at this platform is that um you know there may well be a knowledge graph that's just a uh kind of interaction between uh you may say csf1r and you would just just want to look at the the neighborhood of those targets and that's the knowledge graph and you can see here that you know these gray lines are an interaction with a score between them and the red and blue lines are represent the targets that CF1R up or down regulate but if you have you know wider information adding that to knowledge graph is definitely beneficial. I think there's a lot of benefit in having different knowledge graphs for different disease models. So having a knowledge graph that's focused on Alzheimer's disease, having a knowledge graph that's focused on cardiovascular diseases, cardiometabolic diseases, and having the graphs traded there so that you can kind of have these wider considerations that are very quickly accessible due to the fact that they're sitting in this graph infrastructure such as Neo4j and they're kind of one query away from getting that data. I guess that like ties in with some of the challenges you have, right? I mean, you have lots of data available, but on the other hand, you need to incorporate that data and to build the builder knowledge graph, you need to see, okay, there's lots of data available, but you can incorporate lots of data, but then you need to be on okay, there's lots of data available, but you can incorporate lots of data, but then you need to be on the same level semantically, at least to sort of make sense to connect, you know, reasonably. Yeah, and I think that's a bigger problem in biology than other domains because, you know, there's a lot of arguments on if, for example, I mean, these two targets, you know, there's a lot of arguments on if, for example, I mean, these two targets, you know, they interact with that, they have an interaction in a knowledge graph. So that means the two proteins functionally interact, and they have a confidence score of, you know, 0.944. And this is coming from StringDB, which is another kind of great resource for, which I'm sure anyone in the biomedical know your names, but a protein-protein interaction open source database. But there's a lot of decisions in deciding like which databases you want to consider when building up your knowledge graph, you know, like our knowledge graph here Our knowledge graph here is built from eight, nine source databases that build up a wider knowledge graph across all these different biological entities. And even at the protein interaction level or the gene regulatory network, which is a directed relationship between genes. There's lots of different databases you can choose from. As a scientist, you decide which ones you think are valuable, which confidence thresholds you think you're happy to work with. So if you're happy with, you know, a 90% confidence threshold when looking at analytics, then use that. But if you're very strict, you want like a 98% threshold, then you might increase that before you start analyzing that as a knowledge graph. All right, and then there is one more question from Creek Compendium wants to know if you know tools like druglike made by Martin Shkreli, Shkreli, Shkreli, sorry, which does apparently computational chemistry. I don't know that maybe you know it and leak, legal talking. Druglike? Druglike, that's what he writes, yeah. If that is something you are aware of or is there something if not then uh not not not not hugely aware of it um uh but uh yeah yeah not aware of it but um there are definitely a lot of open source initiatives. It looks like one of those that kind of moves forwards with open software initiatives that are trying to enrich the amount of data that we have available to us as scientists trying to identify novel targets, trying to optimize the best leads for those targets and move forward. So yeah, great to see any open source approaches to that. Yeah, cool. And then maybe one question for me is on the future, on the outlook on other use cases, on the future on the outlook on other use cases other than target identification you see here for knowledge graphs in this particular field of drug discovery? Yeah I think that you know we are really at like I mean that quote that I come back to on you know on computational biology providing reference map for biology as an atlas of life the whole scale of individual insights um by florian markovets he's the cancer researcher at cambridge um i think it does i think we're definitely at a turning point in using kind of these computational approaches to really drive forward drug discovery. Partly from the way we're thinking, I've seen, you know, almost all the big ones uh you know they all you know adopt neo4j um uh as as you know as a knowledge graph i think i saw a a recent presentation by neo4j um at a conference in basel i was at that that you described that well yeah um and uh so so technically we're kind of, you know, the technical infrastructure is moving that direction. Last 20 years, we've seen lots of data generated through kind of initiatives, such as the Human Genome Project, that's really kind of generating that data that's then being able to form into a knowledge graph. We're seeing lots of companies uh being kind of created around around this idea of of computational biology and knowledge graph based drug discovery um some to mention would be benevolent ai um i think uh isomorphic labs is an interesting one to check out it's it's kind of google's or uh the deep mind if you know those know those ai guys um they've kind of started their android discovery company focused on that kind of computational approach and i'm sure that the data infrastructure layer is going to be powered by by knowledge graphs um so, and with, you know, on the regulatory side, seeing stuff like the FDA Moderation Act is definitely helping all of this coming together to, I think really speed up the time from kind of therapeutics and getting that to the patient. Yeah, yeah, absolutely. Yeah, another comment from Mark. He says, I agreed, maybe use disease-specific knowledge graphs for target ID and larger generic multi-disease knowledge graphs for drug repurposing. So, yeah, I think definitely disease-specific knowledge graphs are really valuable when you're focused down on that. If you can get the... Building up these knowledge graphs requires a task of, you know, if you're looking at disease-specific knowledge graph, you need to have the curation to get it down to that kind of disease model layer. But if you can get down there, then I think there's a lot of benefit of being there. And then yeah, for drug repurposing, or even target repurposing. So you know, which which targets might be relevant across diseases, which diseases have similarity to other diseases. So if I'm a company looking at disease area X, can I potentially use all the work I've done so far to focus on another disease, whether that's through the same drug or the same target, then that's where more general knowledge graphs are useful. Yeah. I mean, this is now probably thinking a little bit into the future, but I could imagine if you think about a knowledge, sorry, a disease specific knowledge graph, you're almost generating a digital twin of the paths, of the attack paths of a virus or whatever else or cancer or something as well as of the of the of the of the medicine the drugs that that help against it and then you have you know the direct way and you can you can directly see okay where is where is the where are some weak points where is where is it strong where where does it help where how does it you know how does it work along the whole the full line and then you have you know the the best scenario possible but it's it's you know it's it's becoming very complex obviously yeah yeah yeah exactly um you want to you know you want to the perfect situation for each disease is to have an individual model for that disease a digital twin of you know someone in that disease state you can uh then try and you know in a silico model see like if i intervene with that target uh what's the total impact of that disease state um and as long as you're considering like the whole context then you know i think it's important there to think about like which what cell type are we looking at, which tissue type are we looking at. If you have all those wider contexts, then you can kind of bring them all together to find and take what is like a lot of individual insights and then build novel insights from the wider picture. Yeah, I think that's what's complicated like what sebastian says the problem of digital twins to get to the mirroring of biological variability to get the mirroring of biological variability right yeah where you again need dynamical models or at least very well tuned statistics so yeah i think that's that's probably super super complicated i mean the body is it. Yeah, it's messy. Yeah, it's messy. And it does things differently sometimes. So it's not as, unfortunately, it's not as ones and zeros in this regard, unfortunately. It's not only ones and zeros like with the digital world. Yeah, exactly. All right. Cool. I think that was very interesting. Ben, exactly. All right. Cool. I think that was very interesting. Ben, thank you very much again. Thank you for your questions and comments. If people want to continue to keep the conversation going, how would best people reach you or Crosser? How would they get in touch? Yes. So I'll just share this slide quickly um so this is how you can get in contact um either my email or the kind of social media that i'm available at is linkedin um there's also a cross linkedin page um if you want to visit our website we have a form you can fill out to kind of speak further if you want to visit our website, we have a form you can fill out to kind of speak further. If you want to get a more in-depth demo of the platform or have a conversation about use cases, that kind of stuff, then feel free to reach out over all three of these options. And yeah, also, yeah, check out our website. We have some we're kind of in the process of posting various blogs. we have some we're kind of in the process of posting various blogs one i did recently on uh recommendation systems of drug discovery that we haven't really spoke about here but i spoke about a nose 2022 which is uh which was the neo4j conference that was uh yeah really good um so so i have i did a separate talk there on on uh recommendation systems and that i think it's also interesting if someone should check that out. And there's a blog on our website that goes through that. And we'll probably soon publish a blog on kind of this example of epilepsy as a target identification in our platform. So yeah, definitely follow LinkedIn and the website and get in contact via any of those free options. Cool. Thank you. There's another question from Dev Tether. Maybe we squeeze that in. Wants to know if you have thought to expand CROSSA after the drug has been discovered to use in trials, to use knowledge graphs in CROSSA during the trials or, you know. Yeah. Yeah. It's interesting. So I think knowledge graphs in clinical trials is definitely a great use case as well. I think for us in clinical trials, I see knowledge graphs most beneficial in stuff like patient stratification. So, you know, trying to identify, you know, which patients are based on their demographic and kind of various other factors would be most relevant for that clinical trial. Maybe there's some stuff with geography there. I think, you know, knowledge graphs go even further down the pipeline to, you know, end patient care. line to you know end patient care you know a great a great talk uh recently on u4j live was dr ai that looked at a knowledge graph that used uh that kind of had like you know symptoms and i mean potential uh i think it was a chat bot that uh was kind of leveraged a knowledge graph to help to help the end patients um so i recommend checking that out. So yeah, definitely clinical trials down even further is as a lot of benefits to kind of knowledge graphs there that yeah, we are thinking about. Yeah, cool. Yeah, I posted a link to the Dr. AI session in the chat. I'll add it to the YouTube description also. We had lots of links, so I added them all to the YouTube video description. So if you are watching this, you should find all the links there. So I think that's great. Thank you, Ben, again for your presentation for the demo in particular. I think that was very interesting, for your presentation, for the demo in particular. I think that was very interesting, very good to follow and very good to see. It's a cool project, so yeah, check it out. Go to crossout.co.uk or, like Ben said, write him an email. So with that, yeah, I think we're at the end of this episode. Again again thank you very much for watching thank you very much Ben I'll be back with the final episode of the GraphQL Book Club tomorrow Wednesday at a little earlier than this the slot an hour earlier 4pm Central European Time tomorrow and on Thursday another Neo4j live stream. This is a very busy week. On a book release, Graph Data Processing with Cypher with my colleague Ravi, who wrote a book on the topic. And we'll do an overview of this. So that's happening on Thursday. Yeah, with that, I wish you all a great rest of your day. Have a good evening or enjoy your working day. And have a good rest of your day. Have a good evening or enjoy your working day and have a good rest of your week and see you soon at another of these sessions. Like and subscribe as always and yeah, see you soon. Thank you, Ben. Thanks, guys. | Neo4j Live: Crossr - Knowledge Graph based Drug Discovery | 3,722 | Neo4j | 20230131 | Knowledge graphs are key to extracting insight from volumes of available biomedical data. In this session, we'll cover a practical example of how Neo4j + Graph Data Science can be used to create graph-based features to help identify potential gene targets for a given disease.
Crossr: https://www.crossr.co.uk/
Blog: https://www.crossr.co.uk/post/recommendation-systems-in-drug-discovery
Ben Vozza: https://www.linkedin.com/in/ben-vozza-a652b695/
Lightning Talk @ NODES 2022: https://youtu.be/q5061JL5LaU
All biology is computational biology: https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.2002050
Hetionet: https://het.io/
Open Targets: https://platform.opentargets.org/
Clinical Knowledge Graph: https://ckg.readthedocs.io/en/latest/INTRO.html
A systems-level framework for drug discovery identifies Csf1R as an anti-epileptic drug target https://pubmed.ncbi.nlm.nih.gov/30177815/
StringDB: StringDB: https://string-db.org/
Doctor.AI Live Stream: https://youtube.com/live/lZidMw6vVWY
0:00 Intro
3:40 Crossr Overview
7:05 Open Data in Life Sciences
13:00 Biomedical Knowledge Graph - Crossr Demo
19:05 FDA Modernization Act 2.0
24:10 Crossr Demo (contd)
44:10 Q&A & Contact
1:01:00 WrapUp
#neo4j #lifescience #healthcare #biomedical #knowledgegraph | 2024-08-15T16:43:59.963237 |
https://www.youtube.com/watch?v=pYXy-A4siMw | Hi. This video is a recording of a talk that I gave a while back. I already published a version of it on my second channel, but did you even know I had a second channel? Most people don't. I thought more people should see it, so I remastered it. I cleaned it up, improved the graphics, and yeah, this is that. Enjoy. Right. Hello everyone. My name is Robert Miles. I am usually doing this on YouTube. I'm not really used to public speaking. I'm not used to not being able to edit out my mistakes. There may be mistakes. Also, I may go too quickly. Sorry, not sorry. So when it comes to AI safety, you can kind of divide it up into four areas along two axes. You've got your short term and your long term, and you've got accident risks and misuse risks. And that's kind of a useful way to divide things up. AI safety covers everything. The area that interests me most is the long-term accident risks. I think once you have very powerful AI systems, it almost doesn't matter if they're being used by the right people or the wrong people or what you're trying to do with them the difficulty is in keeping them in under control at all so that's what i'm going to be talking about what is ai safety why is it important so i want to start by asking the question which i think everybody needs to be asking themselves what is the most important problem in your field take a second to think of it and why are you not working on that? For me I think the most important problem in the field of AI is AI safety. This is the problem specifically that I'm worried about. That we will sooner or later build an artificial agent with general intelligence. So I'm going to go into a bunch of these terms. The first thing is what do I mean when I say sooner or later? This is a little bit washed out but this is a graph of a survey, a large survey of AI experts. These are people who published in major AI conferences and they were asked when they thought we would achieve high-level machine intelligence, which is defined as an agent which is able to carry out any task humans can as well as or better than humans. And they say that a 50% chance of having achieved that, we hit that about 45 years from 2016. but then of course we hit like 10% chance nine years from now so it's not immediate but it's happening this is definitely worth taking with a pinch of salt because if you ask the question slightly differently you get an estimate of 120 years rather than 45 there's a lot of uncertainty in this area. But the point is, it's going to happen, as I said, sooner or later. Because at the end of the day, general intelligence is possible. The brain implements it. And the brain is not magic. Sooner or later, we'll figure it out. So what do I mean when I say an artificial agent? Well, so an agent is a term from economics mostly. But basically agents have goals, they choose actions to further their goals. That is the simplest expression of what an agent is. So the simplest thing that you might call an agent would be something like a thermostat. It has a goal which is to have the room be at a particular temperature. It has actions it can take, it can turn on the heating. It can turn on the air conditioning. It chooses its actions to achieve its goal of maintaining the room at a steady temperature. Extremely simple agent. A more complex agent might be something like a chess AI, which has a goal of, like if it's playing white, it has a goal of the black king being in checkmate. And it takes actions in the form of moving pieces on the board in order to achieve its goal. So you see how this idea of an agent is a very useful way of thinking about lots of different intelligent systems. And of course, humans can be modeled as agents as well. This is how it's usually done in economics. Individuals or companies could be considered to have a goal of maximizing their income or maximizing their profits and making decisions in order to achieve that. So when I'm talking about intelligence, intelligence has a lot of, as a term, is a heavily loaded term, has a lot of different people put their own definitions on it. In this context, what I mean when I say intelligence is just the thing that lets an agent choose effective actions. It's whatever it is that's in our brains or that's in the programming of these systems that means that the actions they choose tend to get them closer to their goals. And so then you could say that an agent is more intelligent if it's more effective at achieving its goals, whatever those goals are. If you have two agents in an environment with incompatible goals, like let's say the environment is the chessboard, and one agent wants white to win, and one agent wants black to win, then generally the more intelligent agent will be the one that gets what it wants. The better AI will win the chess game. And finally, general intelligence. This is where it becomes interesting in my opinion. So generality is the ability to behave intelligently in a wide range of domains. If you take something like a chess AI, it's extremely narrow. It only knows how to play chess. And even though you might say that it's more intelligent than a thermostat because it's more sophisticated, it's more complicated, it couldn't do the thermostat's job. There's no position on the chessboard that corresponds to the room being a good temperature. There's no move that corresponds to turning on an air conditioner. The chess AI can only think in terms of chess. It's extremely narrow. Generality is a continuous spectrum. So if you write a program that can play an Atari game, that's very narrow. DeepMind, one of their early triumphs was that they made a program that could play dozens of different Atari games. A single program that could learn all of these different games. And so it's more general. Because it's able to act across a wider variety of domains. The most general intelligence that we're aware of right now is human beings. Human beings are very general. We're able to operate across a very wide range of domains, including, and this is important, we're able to learn domains which evolution did not and could not prepare us for. We can, for example, drive a car, and evolution did not prepare us for that We can, for example, drive a car. And evolution did not prepare us for that. We invented cars. They're very recent. We can, you know, invent rockets and go to the moon. And then we can operate on the moon, which is a completely different environment. And this is kind of the power of general intelligence. Really, the power of general intelligence is we can build a car, we can build a rocket, we can put the car on the rocket, take the car to the moon, drive the car on the moon. And there's nothing else that can do that yet, but sooner or later, right? So this is what I'm talking about. I'm talking about what you might call true AI, real AI, the sci-fi stuff. An agent which has goals in the real world and is able to intelligently choose actions in the real world to achieve those goals. Now that sounds... I've said, what's the biggest problem? This doesn't sound like a problem, right? On the surface of it, this sounds like a solution. You just tell the thing, you know, cure cancer or maximize the profits of my company or whatever. And it takes whatever actions are necessary in the real world to achieve that goal. But it is a problem. So the big problem is, this should be auto-playing and it isn't. The big problem is it's difficult to choose good goals. So this is an AI made by OpenAI. It's playing a game called Coast Runners, which is actually a racing game. They trained it on the score, which you probably can't see down here. It's currently a thousand. What the system learned is that if it goes around in a circle here and crashes into everything and catches fire, these little turbo pickups, they respawn at just the right rate. But if it just flings itself around in a circle, it can pick up the turbo. And that gives you a few points every time you do that. And it turns out that this is a much better way of getting points than actually racing around the track. And the important point here is that this is not unusual. This is not OpenAI doing anything unusually stupid. This is kind of the default. Picking objectives is surprisingly hard. And you will find that the strategy or the behavior that maximizes your objective is probably not the thing you thought it was. It's probably not what you were aiming for. There's loads of examples. Actually, Victoria has a great list on her blog, Deep Safety, there's like 30 of them, different things going wrong. There was one they had, they were trying to teach, they were trying to evolve systems that would run quickly, so they trained them on the, I'm gonna pause this because it's distracting as hell. Where's my mouse? They're mesmerizing. Yeah. Pause, pause, please. They were training agents that were supposed to run. So they simulated them for a particular period of time and measured how far their center of mass moved, which seems perfectly sensible. What they found was that they developed a bunch of these creatures, which were extremely tall and thin with a big mass on the end, that then fell over. Because they weren't simulating them for long enough that you could go the fastest just by falling over rather than actually running. That moved your center of mass the furthest. There's a lot of these. There was a Tetris bot which would play reasonably well, and then just when it was about to lose, would pause the game and sit there indefinitely. Because it lost points for losing, but didn't lose any points for just sitting on the pause screen indefinitely. This is like the default of how these systems behave. I have no memory what my next slide is. Oh yeah, right. So we have problems specifying even simple goals in simple environments like Atari games or basic evolutionary algorithms, things like that. When it comes to the real world, things get way more complicated. This is a quote from Stuart Russell, who sort of wrote the book on AI. When a system is optimizing a function of n variables where the objective depends on a subset of size k, which is less than n, it will often set the remaining unconstrained variables to extreme values. If one of those unconstrained variables is something that we care about, the solution found may be highly undesirable. In the real world, we have a very large number of variables, and so we're talking about very large values for n here. So let's say you've got your robot, and you've given it a goal which you think is very simple. You want it to get you a cup of tea. So you've managed to specify what a cup of tea is and that you want one to be on the desk in front of you. So far, so good. But suppose there's a price-listening vase on a narrow stand in front of where the kitchen is. So the robot immediately plows into the vase and destroys it on its way to make you a cup of tea, because you only gave it one variable to keep track of in the goal, which is the T. It doesn't care about the vars. You never told it to care about the vars. It destroys the vars. This is a problem. So, okay, now you can shut it down, modify it, and say, okay, get me a cup of tea, but also don't knock over the vars. But then there will be a third thing. There is always another thing. Because when you're making decisions in the real world, you're always making trade-offs. You're always taking various things that you value and deciding how much of one you're willing to trade for how much of another. You know, I could do this quicker, but it increases the risk of me making a mistake. Or I could do this cheaper, but it won't be as reliable. I could do this quicker, but it increases the risk of me making a mistake. Or I could do this cheaper, but it won't be as reliable. I could do this faster, but it'll be more expensive. You're always trading these things off against one another. And so an agent like this, which only cares about a limited subset of the variables in the system, will be willing to trade off arbitrarily large amounts of any of the variables that aren't part of its goal for arbitrarily tiny increases in any of the things which are in its goal. So it will happily... Let's say now, for example, now it values the vase, and those are the only things that it values. It might reason something like, OK, there's a human in the environment. The human moves around. The human may accidentally knock over the vase, and I care about the vase, so I have to kill the human. And this is totally ridiculous, but if you didn't tell it that you value being alive, it doesn't care. And anything that it doesn't value is going to be lost. If you have a sufficiently powerful agent, and you manage to come up with a really good objective function which covers the top 20 things that humans value. The 21st thing that humans value is probably gone forever, because the smarter, the more powerful the agent is, the better it will be at figuring out ways to make these trade-offs, to gain a millionth of a percent better at one thing while sacrificing everything of some other variable. So this is a problem. But actually, that scenario I gave was unrealistic in many ways. But one important way that it was unrealistic is that I had the system go wrong, and then you just turn it off and fix it. But in fact, if the thing has a goal of getting you a cup of tea, this is not like a chess AI, where you can just turn it off because it has no concept of itself or of being turned off. Its world model contains you, it contains itself, it contains the possibility of being turned off, and it's fully aware that if you turn it off, because it knocked over the vase, it won't be able to get you any tea, which is the only thing it cares about. So it's not going to just let you turn it off. It will fight you. Or if it's slightly smarter, it will deceive you so that you believe it's working correctly, so that you don't want to change it until it's in a position where you can't turn it off. And then it will go after its actual objective. So this is a problem. And the thing is, this is a convergent instrumental goal, which means it sort of doesn't matter what the goal is. It doesn't matter what your goal is as an agent. If you're destroyed, you can't achieve that goal. So it sort of almost doesn't matter what goal we give it. There is only a very tiny fraction of possible goals that will involve it actually allowing itself to be turned off and modified. And that's quite complicated. There are some other convergent instrumental goals. So we had self-preservation, goal preservation. Resource acquisition is the kind of thing we can expect these kinds of systems to do. Most plans, you can do them better if you have more resources, whether that's money, computational resources, just free energy, matter, whatever. The other one is self-improvement. Whatever you're trying to do, you can probably do it better if you're smarter. And AI systems potentially have the capacity to improve themselves, either just by acquiring more hardware to run on or changing you know improving their their software to run faster or better or so on so there's a whole bunch of behaviors which intelligent systems intelligent agents generally intelligent agents we would expect them to do by default. And that's really my core point. Artificial general intelligence is dangerous by default. It's much, much easier to build these kinds of agents which try to do ridiculous things and trick you and try to deceive you or will fight you when you try to turn them off or modify them on the way to doing some ridiculous thing which you don't want, much easier to build that kind of agent than it is to build something which actually reliably does what you want it to do. And that's why we have a problem, because we have 45 to 120 years to figure out how to do it safely, which is a much harder problem. And we may only get one shot. It's entirely possible that the first true artificial general intelligence will manage to successfully achieve whatever its stupid goal is. And that could be truly a disaster on a global scale. So we have to beat this challenge on hard mode before anyone beats it on easy mode. So are we screwed? No. We're only probably screwed. there are things we can do safe general artificial intelligence is totally possible it's just a very difficult technical challenge and there are people working very hard on it right now trying to solve a whole range of difficult technical challenges so that we can figure out how to do this safely thanks of difficult technical challenges so that we can figure out how to do this safely. Thanks. You may have noticed in the intro and this outro that the image quality has improved since the last video. That's largely thanks to my excellent patrons. Thank you to all of these people here for helping me to get this new camera. In this video, I'm especially thanking James Petz, who's been hanging out with us on the Discord server, helping answer questions from the YouTube comments and so on. And actually, that last video about Mesa optimizers has had a lot of really good questions. So the next video, we'll be answering some of those. That's coming out soon. So thanks again to James and to all my patrons, to everyone who asked questions, so the next video will be answering some of those. That's coming up soon. So thanks again to James and to all my patrons, to everyone who asked questions, and to you for watching. I'll see you next time. | Intro to AI Safety, Remastered | 1,084 | Robert Miles AI Safety | 20210624 | An introduction to AI Safety, remastered from a talk I gave at "AI and Politics" in London
The second channel: https://www.youtube.com/channel/UC4qH2AHly_RSRze1bUqSSNw
Experts' Predictions about the Future of AI: http://youtu.be/HOJ1NVtlnyQ
9 Examples of Specification Gaming: http://youtu.be/nKJlF-olKmg
https://www.patreon.com/robertskmiles
With thanks to my wonderful Patreon supporters:
Gladamas
Timothy Lillicrap
Kieryn
AxisAngles
James
Nestor Politics
Scott Worley
James Kirkland
James E. Petts
Chad Jones
Shevis Johnson
JJ Hepboin
Pedro A Ortega
Said Polat
Chris Canal
Jake Ehrlich
Kellen lask
Francisco Tolmasky
Michael Andregg
David Reid
Peter Rolf
Teague Lasser
Andrew Blackledge
Frank Marsman
Brad Brookshire
Cam MacFarlane
Craig Mederios
Jon Wright
CaptObvious
Brian Lonergan
Jason Hise
Phil Moyer
Erik de Bruijn
Alec Johnson
Clemens Arbesser
Ludwig Schubert
Eric James
Matheson Bayley
Qeith Wreid
jugettje dutchking
Owen Campbell-Moore
Atzin Espino-Murnane
Johnny Vaughan
Carsten Milkau
Jacob Van Buren
Jonatan R
Ingvi Gautsson
Michael Greve
Tom O'Connor
Laura Olds
Jon Halliday
Paul Hobbs
Jeroen De Dauw
Cooper Lawton
Tim Neilson
Eric Scammell
Igor Keller
Ben Glanton
Tor Barstad
Duncan Orr
Will Glynn
Tyler Herrmann
Ian Munro
Joshua Davis
Jérôme Beaulieu
Nathan Fish
Peter Hozák
Taras Bobrovytsky
Jeremy
Vaskó Richárd
Benjamin Watkin
Andrew Harcourt
Luc Ritchie
Nicholas Guyett
James Hinchcliffe
12tone
Oliver Habryka
Chris Beacham
Zachary Gidwitz
Nikita Kiriy
Andrew Schreiber
Steve Trambert
Braden Tisdale
Abigail Novick
Serge Var
Mink
Chris Rimmer
Edmund Fokschaner
J
Nate Gardner
John Aslanides
Mara
ErikBln
DragonSheep
Richard Newcombe
Joshua Michel
Alex Altair
P
David Morgan
Fionn
Dmitri Afanasjev
Marcel Ward
Andrew Weir
Kabs
Ammar Mousali
Miłosz Wierzbicki
Tendayi Mawushe
Jake Fish
Wr4thon
Martin Ottosen
Robert Hildebrandt
Andy Kobre
Kees
Darko Sperac
Robert Valdimarsson
loopuleasa
Marco Tiraboschi
Michael Kuhinica
Fraser Cain
Klemen Slavic
Patrick Henderson
Oct todo22
Melisa Kostrzewski
Hendrik
Daniel Munter
Alex Knauth
Kasper
Ian Reyes
James Fowkes
Tom Sayer
Len
Alan Bandurka
Ben H
Simon Pilkington
Daniel Kokotajlo
Yuchong Li
Diagon
Andreas Blomqvist
Bertalan Bodor
Qwijibo (James)
Zubin Madon
Zannheim
Daniel Eickhardt
lyon549
14zRobot
Ivan
Jason Cherry
Igor (Kerogi) Kostenko
ib_
Thomas Dingemanse
Stuart Alldritt
Alexander Brown
Devon Bernard
Ted Stokes
Jesper Andersson
DeepFriedJif
Chris Dinant
Raphaël Lévy
Johannes Walter
Matt Stanton
Garrett Maring
Anthony Chiu
Ghaith Tarawneh
Julian Schulz
Stellated Hexahedron
Caleb
Scott Viteri
Clay Upton
Conor Comiconor
Michael Roeschter
Georg Grass
Isak Renström
Matthias Hölzl
Jim Renney
Edison Franklin
Piers Calderwood
Mikhail Tikhomirov
Matt Brauer
Jaeson Booker
Mateusz Krzaczek
Artem Honcharov
Michael Walters
Tomasz Gliniecki
Mihaly Barasz
Mark Woodward
Ranzear
Neil Palmere
Rajeen Nabid
Christian Epple
Clark Schaefer
Olivier Coutu
Iestyn bleasdale-shepherd
MojoExMachina
Marek Belski
Luke Peterson
Eric Eldard
Eric Rogstad
Eric Carlson
Caleb Larson
Max Chiswick
Aron
Sam Freedo
slindenau
A21
Johannes Lindmark
Nicholas Turner
Intensifier
Valerio Galieni
FJannis
Grant Parks
Ryan W Ammons
This person's name is too hard to pronounce
kp
contalloomlegs
Everardo González Ávalos
Knut Løklingholm
Andrew McKnight
Andrei Trifonov
Aleks D
Mutual Information
Tim
A Socialist Hobgoblin
Bren Ehnebuske
Martin Frassek
Sven Drebitz
https://www.patreon.com/robertskmiles | 2024-08-15T20:38:16.957797 |
https://www.youtube.com/watch?v=RmfFyuYki0g | Hello everybody, good morning, good evening, good afternoon and thank you for joining this episode of Neo4j Live. Today we talk about the book Graph Algorithms for Data Science and I'm super happy to have Tomas Bratanic with me today. Hi Tomas, how's it going? Hey, you kinda almost named my name which is quite commendable so yeah I'm doing very good how about you I'm good as well I'm looking forward to this episode now that how is the proper pronunciation it was good right it's Tomas and then Bratani it's like nobody has the chur. Okay, Bratani Chur. So now I know. It's good to know because I have been using your name in the past for almost a long time and I'm planning on using your name for a long time as well because you've been and you are super active in in the Neo4j community as well as in you know posting blogs and answering questions in the community but you know as I mentioned also posting blogs so you have a very active medium blog where you post I mean it feels sometimes like every week um is is or maybe more often than once a week you post something pretty amazing there but you you're super regular so i think i just opened it up latest latest post from you was on march 30th of march so a couple of days ago and then in february and so on yeah I mean I used to be more active, now I do a bit more coding, I mean integration but there's like, I think there's like 80 or 90 million articles. Yeah, that's a lot and it kind of kicked off, well, you know, for a long time. But I think you went really on fire towards the course of last year, when you wrote lots of articles, lots of posts about, you know, artificial intelligence, Gen-AI intelligence gen ai llms integrating knowledge graphs integrating graphs with with gen ai with rack so so yeah but the funny thing is the funny thing is that like this like nlp like natural language processing topics were always interesting because i wrote like let's say one or two a year interesting because I wrote like, let's say one or two a year, they got like five times as many views as the other blog posts. So like MLP, natural language processing and like things around that, that was always interesting. It's just that now with LLNs and GenAI, that's more at the forefront, right? Because LLNs make life much easier. Yeah, and I mean it's the hot topic these days. Everybody is talking about it. Everybody wants to say something about it. Everybody wants to use it, wants to play with it. So it's only natural that the interest is is high and and we do a lot of it as well here at neo4j so um there is there's a good i think as a good connection and a good a good reason for it to use uh these these tools and to but to also pair them with uh graph technology um and i think we we talk about those about those in relatively often on these streams. A couple of episodes in the past, we had interesting guests to showcase what they're doing. We have together with Jesus, I'm doing a going meta live stream series, which talks about these things as well in the past. So it's a very interesting topic with lots of content. It's a very interesting topic with lots of content. And, you know, very, very interesting to see what's happening, how the technology evolves, new discoveries, new ways of working with data, new ways of, you know, playing with that in that field and try things out and also to make things work and to implement things in the end and to benefit everybody. So that's a cool cool thing yeah and you you've been also a little bit of a mentor i would say in in this space for for many people leadership exactly yeah yeah totally um you you have you inspire lots of people that have been out in the field and read your articles and then tried out themselves, play with it and then post themselves something about it and lead on to how they then themselves become influencers or, you know, thought leaders, I guess, or, you know, at least experts in the field and to share the voice here and to share the stories. It's quite awesome to see that, like, you plant an idea and then you see, like, like glowing people giving their own touches and like inputs and then making you something great yeah i think that's awesome yeah absolutely no that's very cool very very very good to see um and i guess that's a little bit how how you how you decided to write a book um is is that is that one where you want to kind of collect your knowledge and then summarize it in one space? Yeah, I mean, it was kind of funny because as you know, I've been writing a Medium blog post. Even before Medium, I had a WordPress blog posts, like even before medium, I had like a WordPress blog post, which probably nobody knows about, but I had a WordPress blog post for like two or three years and then went to medium. But then it was like I remember exactly there was like one blog post that I did, It was kind of long and it showed basically how to build value element topic, how to build knowledge graphs from text. And that was like maybe four years ago or something. And I think we use some like Google API for like topic detection Google API for like topic detection and then like named entity extraction from Amazon or something like that. So it was, it wasn't like today, right? Where this is all much simpler. It was like, had to go to one provider for this, to another provider for that. And then kind of combine everything and it's already static, not flexible. Yeah, but anyway, I showed how to build an orgy lab after text and it was like, at the end, it was like a very long blog post. And then at the end I say, if I'm ever going to write a book this is going to be a chapter in the book and then you know what happened Michael said Michael Hangard Of course, write a book! And then he said let's write a book. Why not? I mean, by we, I mean you. And then he got me in contact with the publishers. And that was that. Well, I mean, he at least contributed the foreword. So he did something. But I mean, yeah, I can imagine how it went. I mean, he was a helpful also along the way. He likes syntax and book and he did chapter the reviews and, you know. Yeah, absolutely. All over the place. But yeah, it's basically Michael. Because it's also my car why i work for neo4j in the first place because maybe that's even like before the book happened like long long long time ago like seven or eight years neo4j didn't have a graph algorithm library. And I mean, and before GDS, it was called graph algorithms. And I started writing content and contact me, Michael, when graph algorithms, the first library that now doesn't exist anymore was being developed. And I said, I like to write content for Neo4j and do you have any work for me? And then he said, yeah, I need someone to write documentation for graph algorithms. A library, right? Because it was a labs project, so it means not... Yes, I remember.... actually supported, so like external contributors. And he said, yeah, I need someone to help with graph algorithm library documentation. And I said, yeah, fine, why not? I mean, what's the reason it could happen? And then that was like seven years ago. And then they implemented some like first 10 algorithms. And then I was like, in the beginning, I was like reading research papers, algorithms, how people are using it, like what to do, but also like giving like user experience feedback, like a little bit, I have a little bit with standardization of parameters of a graph. So that was basically, it was all meant to be right. The second I started to work for Neo2J I was writing documentation about graph algorithms. Cool. And now you went on and written a book. It's called Graph Algorithms for Data Science. So it's a little bit like you said, the evolution from the original graph algorithms data science. So it's a little bit, like you said, the evolution from the original graph algorithms without the library part where we had like data, they were like a little bit standalone in their own, in their own world. And now we have a proper graph data science library and client that's, that's fully integrated with Neo4j. And that's, that's, that's part of, of Neo4j and, and the book covers book covers different algorithms and showcases them across. Yeah, I mean, actually it's designed because unfortunately most people never worked with graphs. So it was designed to take a complete novice or someone who never seen graphs before or used them before and teach them. So the assumption is that the user knows a little bit of Python. Maybe they do like a pie chart or a bar chart in Python, just like some basic data analytics, but they have no idea about Neo4j or anything like that. And then the idea is to take them and like educate them first a little bit about what are the graphs, like when to use them, how to model them. And then the next two chapters, we kind of go through what is Cypher, what's like the syntax of Cypher and like also like how to do, how is it similar to SQL, how to do basic aggregations, stuff like that. So that's gently... I feel like that new users, people who never touch the technology, they need a lot of handholding they need a lot of handholding, because you want it to work. Because if I give you something and on the first try it doesn't work, okay, you may try again. But if on the second try it doesn't work, you're just going to say, okay, that's not for me, goodbye. Yeah, you can be discouraged quickly and it's yeah i mean whatever maybe next time yeah and then that's why i take the approach of like really hand holding and like really gently introducing because i can i want like you want those small victories to motivate people and have something working right, because if nothing is working, that's not really motivating. Yeah, you're right. So the idea is to take a complete beginner and make them, I don't know what's the word, but you can like graph data science. Because maybe I can share a picture. Yeah, and we can. So we plan to look over the book a little bit, give some insights, and show you a little bit what to expect there and some learnings and some tips and tricks from Tomas as well. So maybe that's a good way of doing it today. And also, if you have any questions, any comments, anything you want to know more about, then please don't be shy. Use the chat and let us know. We are also giving away a couple of free ebook versions of the book so um if you're watching this live and if you are watching this on twitch or on youtube because otherwise it's not going to work on the other channels but if you're watching this on youtube or twitch or you can switch over if you're watching somewhere some if you watch it somewhere else if you can switch over if you're watching somewhere else. If you can switch over to YouTube or Twitch, then you can use exclamation mark raffle. And you're in the pot. And later, maybe after we introduce the couple of chapters, we draw a couple of winners. Tomasz, remind me, how many PDF e-books do we have? Like three. TOMASZ CZELINSKI- Do like three. MARTIN SPLITTINGER- OK. All right. That's good. Three. So that's a good number. So if you want to win that, then let's switch over to Twitch or YouTube and use exclamation mark raffle. And good luck with that. And yeah, here is the slide. Yeah, let's go there. So now you can observe my diagram skills. Yeah. So it was funny because all like with the book, like images, right? It's all about copyrights. So in the end, like this, I don't know how many let's say 100 pictures in the book and like 99 came from me and only one was from someone else so a lot of diagrams and images as well and yeah this is the image you can see it's on the second page and basically how I designed the book how I designed like to take a journey from a complete novice and hold you basically thank you by the head and make you here we call it yeah that's good that's cool and that's like incremental I guess it starts. So it's super simple and easy and then you. Yeah, like I said, so we start, it's like what is a graph? Like when to use it, how to construct it, but it doesn't go like into very a lot of details, but just like some basics about graph construction and modeling. And by the way, we use like Twitter dataset. I mean, can you still call it Twitter? Yeah, it's probably has a different name now. You Twitter because it was written before X happened. So yeah, so first we construct a knowledge graph, I mean a graph of a Twitter dataset. Maybe, I don't know, maybe if I should jump out a lot or not. Maybe, I don't know, maybe if I should jump out a lot. Yeah. But yeah, I mean, there's the simple idea, right? How to describe the structure then how to import it into Graph database. And then once we have the graph, we kind of do exploratory data analysis, right? And then cipher to the exploratory data analysis. So like how many users are in the graph, like how many posts, like what's the ratio between posts and users, just some basic statistics and help you get into the flow of writing Cypher and using it for analysis. And then that's, I think, chipped up to, so like graph modeling and construction is like chapter one, chapter two. And you already call it graph query language. It's very, very, you know, very up to date. If you don't know, we're laughing why it's a little bit of a joke, the ISO committee just published the GQL as an official database language standard. And it's called, obviously, GraphQL language. And that is an official standard now for databases, how to interact and how to query data with databases, graph databases, property graph model databases. And that's now an official standard, similar to what data with databases graph databases property graph model databases and um that's now an official standard similar to what sql is for relational databases as of past friday and it's called gql so it's it's very cool we're very very happy yeah i had the first part yeah exactly yeah yeah Yeah. So that's, that's, I think chapter three and four, and then we put maybe, yeah, I don't know exactly, but yeah, somewhere around that. And then we go into graph algorithms, right? Cause that's kind of the topic and the title of the book, right? It's called graph algorithms. the title of the book, right? It's called Graph Algorithms for Data Science. Yeah. So we dive into graph algorithms and like the restarts with the classical ones. So the classical graph algorithms like finding the most important nodes using something like PageRank or like finding communities of nodes or like identifying similar nodes and then also a little bit about indirect relationships. So that's like the, that's like four or five chapters about graphic bulletins and how to use them. And it's all like, if you read any of my blog posts, the chapters are kind of, each chapter is a standalone like tutorial. So you get like a theory introduction and then some nice diagrams, what you're actually going to be doing in the chapter, what was the idea and we kind of put you into the scenario so like imagine you're looking at twitter and want to analyze something whatever that and then we actually do the analysis throughout the chapter and and explain out the chapter and explain what we are doing, but also the results, what do they mean and how to interpret them. And then when you kind of get the, I call it graph machine learning. I mean, this is some distinction I came up with because also graph algorithms, they are also machine learning. The typical graph algorithms like PageRank or like community detection, they are like unsupervised machine learning where you basically say, here I have a graph, right? And I don't have any labored information. So that's why it's unsupervised. And please find the most important node within the graph. So that's why it's unsupervised. But I specifically, I kind of mentioned the reasoning behind this distinction is that for the graph machine learning, I kind of say that here is basically we're training as a class, in this book there are only classification models. We don't do any regression. So we actually train like a classification model like a random forest model. So we use graphs and like Northern bearings and all of that fancy algorithms for like feature engineering. And then we extract the feature vectors for nodes and use them downstream to train and influence in machine learning models like classification. So that's basically the gist of it. So yeah, like I said, the idea is to take somebody who's never seen graph of the witness and have them train a couple of machine learning models based on the graph. OK, so I don't know what to, how should we, do we just walk? Well, let's maybe pick a couple of chapters where you think are most interesting and give a good impression and maybe where we can share a bit of additional knowledge and a bit of interesting bits where you think this makes a good impression, a good overview, and is maybe most interesting for the audience. And maybe we do that. Does that make sense to you? Yeah, I mean, it's totally fine. It's totally fine. So by the way, the first chapter is free, right? So if you're interested, the first chapter is always free. I don't know how you... Yeah, you go just on the website and click. Maybe look inside. Yeah, and you get the first chapter because i have the oh actually i'm not signed in right you can see yeah oh that's good yeah so you can you can check it out the first chapter but yeah for example we say here are you not listening Are you not listening? But yeah, maybe let me show in the PDF. It's like, it's not so nicely. It's how to spot a graph-shaped problem. Yeah, it's not so easy because PDF is not so nicely structured. But yeah, basically we go into how to, how to spot a graph shape? Okay. Problem. And for example, like the typical use case is like the terminology is like self-referencing relationships where basically you have like a single entity, like for example, user, and then like just one type of relationship between users. So in SQL, I think that would be like self-joins. But in graph database, that's just like a nice graph. So basically you have like users who have relationships between us. It could be for that, that they can send money between each other. Or like, if you want to be more technical, you could have like, for example, dependencies, like let's say Python dependencies, and then see, okay, if there's like a vulnerability detected inside the package but all the packages that it influences because you can then just reverse depends on relationships to do that so that's like one types of graphs another type of graph is the like pathfinding networks that you have. Basically you're searching for certain paths because you can also think like Google Maps. They have, you say I want to go to Berlin, and then the locations are nodes, and then you have different options. And even on Google Maps, you can choose, do you want to go by public transport, do you want to go by car, do you want to go by bike? Do you want to walk? And depending on the transportation mode, there are different paths and it also like different costs. Like for example, the cost in Google Maps is time, right? How much time do you need? Yeah. Yeah. Okay. Okay, so that's like another like classic example for when to use graph databases and graphs and then the third one that I showcased is the so-called like bipartite graphs, where you have like two types of nodes. So for example, user and item, and then you have which user you have like the purchasing history, like which user bought which item. And these types of graphs are very frequently used in recommendations, right? Which user bought which item. And these types of graphs are very frequently used in recommendations, right? Because you can, or in e-commerce in general, right? Because you can use these graphs to find similar users, like groups of similar users. And that like, if you use fancy marketing lingo, that means like segmentation, right? Groups of similar users, or you can also use this information and for recommendation, right? So if me and Alex have similar tastes, like similar purchasing history, it is a good chance that if you buy something new, I might also like it because we had similar Purchasing History. That's also quite similar, frequently used and it could also be not just like purchasing but for example like also like spotify you have like playlists and single songs right so it's a lot of these use cases so that's like the i think the three use cases that I showed or actually I did four. And then I could, I just said complex methods, but like, because Neo4j in particularly labeled property graph models are very good for representing highly complex and heterogeneous information and which is highly connected and like biomedical so this is one image that i didn't draw and like biomedical networks like prime example of like heterogeneous, a very complex connected information. Because here, for example, you have genes and genes interact with genes, but genes also interact with compounds and diseases, and then it can upregulate, it can downregulate, it can totally associate when you don't know exactly what it is. And it's like highly complex and everything is connected, right? And knowledge graphs, as you might call them, knowledge graphs, shine in that data representation and then it can also allow you to analyze direct connections, but also indirect connections that are also might be useful. Because for example in the previous example we said if two users are similar right you might recommend new items based on that but for example it's also like the two genes you could identify the two genes could be similar because they interact with the same compounds. Or I don't know, I'm not an MD, but I just assume something. And then you talk if two genes are very similar, have similar effects, whatever, then they might also be used for similar diseases. And maybe like it can help you come up with ideas, like what should we test or at least prioritize because people always have a lot of ideas yeah it's just that you need some prioritization like what should we look at next and then graphs can help you basically capture existing knowledge. And then once you have that, they can help you predict the future. Yeah. No, it's very powerful stuff. It is almost like it. It's pretty amazing what it can do. I would say we give the raffle five more minutes. I would say we give the raffle five more minutes. So until 20 to the hour, we will do the raffle. So if you want to take part in it, give it a go on Twitch or on YouTube if you watch it live. Exclamation mark raffle, and you're in the pot, and then we draw after that. So now's your chance to win the e-book version of Tomasz Batanic's book that we are discussing today. And, yeah, maybe next. I don't know how would you continue, but do you want to talk about maybe about your favorite algorithm? Is that something we could look at next? Something that you think is most interesting one or something you think is a good one that gives a good impression of? Yeah, we can actually show is that what's quite interesting is with What's quite interesting is with graph algorithms, I mean, it's similar as with most machine learning, but it is that you don't feed algorithms to your data, but you feed data to your algorithms, if you know what I mean. Yeah. Yeah. I know what you mean. Yeah. So we can take a look at one example for the Twitter that I am certain you have seen before, I'm certain you have seen before, but it shows quite well, like the power of indirect translating indirect patterns into direct relationships. It also shows the power of why it's useful to have a graph database because like you can run like a graph algorithms anywhere, right? But usually running it's like the same thing as in machine learning, just running like a machine learning model or training it is very simple. But doing the feature engineering and preparing the data and like massaging it so that it nicely fits, like that's the big problem. So that's something that I show because it's actually I think a chapter 5 we show in chapter 5 we show he starts with graph algorithms but then chapter 6 7 8 I think maybe also 9 I I'm not sure. We show how to feed the data to match graph algorithms input, right? Because mostly graph algorithms are designed to run on a simple graph. So like single node and single relationship type, they aren't designed. So like the typical example is like people friends, right? So let's like mono per type, mono meaning one, per type meaning, I don't know. But like something like one type of notes and one type of relationship, right? So it could be, like I said, person, friend, person. But here in this example, for example, in the Twitter example, it's very famous with tweets, right? Because you can write a tweet or a post nowadays, but then somebody else retweets it, right? So that's like a relationship kind of between the person who wrote the original tweet and the person who retweeted but it's not a direct one because here for example you can see that sasha published a tweet oh no actually it was francois who published the tweet and then sasha retweets it right so it's not like by uh how you say uh raw data, in the raw data, there is not a lot of, there's no direct connection between the two users, right? But then you can make assumption. Okay, if I retweet your content, that means I like your content and I want to share it with my audience, which is a fair assumption, right? Yeah. And you can say, okay. So that kind of means I am amplifying you or the, cause here I went, amplify and you can use Cypher to say, okay, these indirect relationships. So it goes from user to the retweet, from retweets to the original tweet and back to user, you can say, okay, this pattern for me, I want to represent a direct relationship between two users. And that means that Sasha amplifies Francois. And this is basically how you're reducing or transforming indirect relationships to direct relationships. And that's like a big part of graph algorithms. And like I said, also it's like four chapters in my books book just showing how to basically massage and transfer data to fit a graph algorithm input. Because I think that's the most important thing when you're like uh data scientists like analysts right um when you have like nice clean data uh yeah it's very nice but like in reality the reality is often very different yeah it's really very clean and nice it's a bit of a mess. Emphasis on first like exploring to see what's happening inside your data, how because what are some like characteristics of the data set or like maybe something that you messed up maybe or is something that like the the the the one who designed API thought that it should be correct like but it kind of doesn't make sense because for example when I scraped the Twitter right we are API's you long time ago, you could still use API. Yeah. I learned that, for example, every time you retweet someone, by default, it's also a mesh, right? I don't know why, but that's just how it is. Like under the hood, that's how you get it from the API. And then you need to take into account these anomalies, right? So what does the data actually kind of have to dig deeper a little bit and explore and explore and see like, is this like a characteristic of like the network? Is this characteristic of the like implementation? What does it actually mean? Cause it's not like just running algorithms. It's also interpreting the results cause because that's the most important thing. You run an algorithm, you get some numbers, and if you don't know what to do with those numbers, what do they mean? Exactly. What do they mean? That's very important. It's useless, right? You say, okay, Alex has page length score of five. And whatever. What does that mean? Nobody knows. Then you have to go a little bit deeper. So, yeah, so that's, like I said, Transforming data sets and like massaging is very important because I have like, for example, another example where basically you don't have any relationships. You just have a feature vector. for the back while you're looking for that I would say we close the raffle so I think we will be already over time anyways from what I said we would be doing it so I'll close it so everybody that's in the pot by now you have the chance to win one of three e-box and I'll pick the winners as well through the tool that we are using here. So it's all random, it's all very safe, so don't worry I'm not picking anybody by their name or anything else. It's just a tool is gonna do it. And the first winner is Martin, congratulations. The second winner is Salman and the third winner is Amal, congratulations to you three. So you see here, if you won, then that's it. If you did win, please send me an email and reach out to me at alexander.ertl at neo4j.com. I'll post my email in the chat and then you can reach out to me and we get the link over to you. I'll do that in a bit. And yeah, with that, back to you, Tomas. So actually I decided we're not gonna look at nodes with no relationships or like feature vectors because here we have more 3D pictures. because here we have more three-tiered pictures. Okay. I mean, like, for presentations, three-tiered pictures are better. So, for example, another way how you could, like, turn, like, sometimes people call them implicit relationships or, like, indirect relationships between entities and turn them into direct relationships so that you can run graph algorithms on top of that. It's the so-called co-occurrence networks. That just means you kind of define a co-occurrence between two entities and here in this example I think medical people tell how often they occur in the same article. And then that kind of helps you to find communities of similar entities, so how they are kind of related, so what's showing up very frequently. So for example, we have anti-anxiety agents, and they co-occurred with the medical retention. That's something that was just automatically generated, but taking a look at how often do entities co-occur, like medical entities co-occur in the same article. And then we have maybe more the, I say, popular ones. So for example, you can also take this approach and just look at Headed Butter and see, for example, if two characters If two characters are in 10 words of each other, they appear within 10 words of each other in the text, that by definition, we define an arbitrary limit. And let's say we define 10 words. And then that means that they kind of co-occur. And if we take this algorithm and run it through the whole book, we can then construct a graph out of the co-occurrence relationships and see who interacts with each other. For example, Petunia really interacts with Dudley. I think this is the first book and well-known. And then, for example, obviously Harry Potter is the main character of the book. So it's the largest now, then very close to Harry, Argus, Ronnard, Hermione is slightly more far away, but then you can see kind of how characters group together, right? And then you can also, for example, you could use kind of set, I didn't use like community detection algorithm here then you can find like uh but it could be an add-on you could yeah i mean you could i just unfortunately didn't uh uh at uh uh colors right because colors make the images are also nice. Yeah, yeah. And then he's... So for example, here we have like when two, like here we have a co-occurrence network of ingredients in dishes. So for example, when two ingredients are in the same when two ingredients are in the same recipe, right? We say that they co-occur, that there is a relationship between the two. And then the more frequently they are in the same recipes, the stronger the relationships between the two. And then we can use this, for example, here to analyze uh communities of ingredients and for example you can see that like eggs flowers sugars milk that is very frequently used so that would be like that's like most of the baking stuff that Yeah. And then I think I would say like main dishes, it's onions are very vital. Yeah. More like savory. The other sounds like more like cake ingredients and there's more like onions, more like, yeah. Yeah. More savory, a bit more like, yeah. It's like, yeah. More savory, but more, you know. And then you can, for example, onions, and then you can add like chicken, pork, like tomato sauce, potatoes. And you can, if you can, if you dig deep into it, you can kind of, because I know like that they did like some analysis of foods from which cuisine so like from which countries and then you can do like these uh visualizations for each country or like each region and yeah it's quite interesting to see differences between different religions how they combine ingredients, for example. So that's that's always nice because the nice thing about like when you're dealing with small graphs is that you can visualize them and they make really nice visualizations for demos presentations but like when the graphs are too big to visualize and that happens very soon that even like thousand nodes it's beyond any like it's not you you can't find a lot of insights looking at thousand notes. Yeah, no, that's not going to work. So that's when graph algorithms come in. So you can think of how I usually explain it. Like when you're dealing with small networks, you can say, okay, egg, flour, and for me it's funny, it's peanut butter. So I think this is very American. A lot of peanut butter. And like I said, the effort, like small networks like this is maybe 50 or 100. You see these relationships and kind of patterns and it kind of makes sense. But like, let's say you're dealing with even 10,000, but you can be like in the millions, right? What you're going to do, you're not going to look at million notes and kind of digest something you hope to find. Because you can't do that, like, at least in a timely manner, what you do, you use graph algorithms. So graph algorithms are then kind of your substitute for when you can't, like, manually find information, then you use graph algorithms, which, like, programmatically they use some fancy mathematics. They find those insights for you and they say, like programmatically they use some fancy mathematics. They find those insights for you and they say, okay, these are important, these are communities. Maybe these are how communities change over time. I don't know. So yeah, so that's how I would, that's what I would say about graph algorithms. It's basically when you're dealing with graphs that are too big to visualize. That's when you want... And I mean, that happens quite fast. Yeah, that's cool. We have a couple of questions. Tomas, do we want to... Yeah, I mean, we can... Do we want to go through them before we wrap up? Yeah, we can look at some questions. Okay, cool. So one comes from Salman. He asks, will this library be available for free on AuraDB for learning and for experimenting? Or maybe to rephrase it because I know already the answer, what's the best way of exploring and try out and learning graph algorithms in the Neo4j environment? What would you recommend starting? So I would actually recommend Neo4j Sandbox Neo4j Sandbox because, so Neo4j Sandbox is just a bunch of projects. You can, for example, take a graph data science project. You can actually look like libraries and then if have data science is enabled it kind of implies that some algorithms are in there that you can use so if you're like completely new you don't know you you just want to try it out and test i would recommend spinning up because it's like a free recommend spinning up because it's like a free cloud instance that you just click on create, wait a minute or maybe less, 30 seconds, and then you can open it and it's got like nice guides and it hopefully handholds you. Yeah, and it has already the data so you don't have to worry about getting something there. And to completely answer the question, on RRDB-free, the Graph Data Science library is not available. So you are currently not able to try it out there. There is a paid tier which allows you to use the Graph Data Science library, but not on the free tier of AuraDB. So yeah, the sandbox is going to be the best way to play around with it and to see what it is. Cool. The other question comes from Lavanya. She asks, what are some known challenges of the cipher language implementing custom graph algorithms as a complex query, e.g. the with statement only always maintains modifies a table structure. Knowing these challenges would help users to look right these algorithms in the backend. Maybe that's a bit of a longer question. A long question, but what I would say is that like what you're saying is that like you want to write your own custom algorithms with Cypher and yet there's like a lot of I mean there are like no challenges of Cypher. There's like someone can write a book and I mean, and talk about it. I mean, I'm not saying that there are a lot of known challenges for Cypher because I honestly don't know, but it's like, this is like a big topic. Yeah. But what you should know is that like in the book, when we are referring to graph algorithms, we are not referring to actual, to implementation in Cypher, but in Neo4j you have a graph data science library that exposes less algorithms to do procedures, right? So if you want to run PageRank, you just say, okay, I want to call GDS PageRank.stream. So I want to to call GDS page rank.stream. So I want to run a page rank. So that's all you need to do. It's not like now you go into this PhD math mode and you look at the original paper and then use Cypher to implement the original paper. I mean, that's like only a handful of people know how to do it. But for the rest of us, we just kind of abstract the complexity away and just use procedures to use to call the algorithms. Yeah. Yeah. Cool. And the next question comes from Mac. He asks, is Neo4j suitable as a replacement for vector databases? I think we talked about vectors. You talked about it a little bit. I mean, that's also. I know it's also. It depends. Most of the time. Yes. There are probably some cases where no. and it depends like exactly what you're doing but in theory I would say most of the time yes not always so that's a very nice political yeah no yeah but that's that's hard to say. And it's always difficult if you don't know exactly what's going on. I think things are changing, right? Exactly. You don't know what's going to happen in one month. Yeah. Yeah. Absolutely. Cool. I think that's all the questions I could at least see right now that come in. If you have any other questions then then please obviously reach out to the community either on Discord or in the community forum or comment down on below this video on YouTube we will have a look out as well for anything on there. Anything you want to to say before before I close us out too much yeah I want to say thank you to Asheville he said it's very nice but yeah like I said, the idea, so if you don't know anything about graphs and you want to kind of, if you want to start, then I think it's a good book. Obviously I'm biased, right? But it's a good book. But if you already know a lot about graphs and graph algorithms for the last three chapters, actually four, I have the code on GitHub because we use, as you can see, spaCy, for example. But also like machine learning libraries in Python. And you can take a look if that's interesting for you. Obviously, there is no copy, there is no text in notebooks because it is what it is. But I don't know if you can also maybe just share this link if people want to take a look at some of the latest chapters it's fine for me you can if you're just interested in seeing what's the code what maybe to expect by the end of the book you can take a look around the code yeah the culture to them if you if you have no idea anyway and then you can decide if you like it yeah no no thank you very much thank you you very much, Thomas, for, for obviously for writing this book. Thank you, obviously, also to Michael for making you write this book. It's really, it's really good. If you if you didn't win the ebook, we have a 35% off discount code that is also the YouTube description in, in the video. So if you want to, you know Tomas has a discount code and you get the chance to get it for a little bit cheaper than the list price. And yeah, thank you very much for the cool presentation. Thank you for showcasing us or guiding us through the book. showcasing us or guiding us through the book. And thank you everybody for, for, for participating today. The next stream is going to be on the 25th of April. So there's going to be next Thursday where Martin and I, we will talk about the Neo4j and LLM fundamentals course that's happening or that you can already participate and try out yourself on the graph Academy. And we will, we will go through the basics of the concepts of the, of the, you know, what to expect there, what, what you can learn there and how it works in the, in the stream next Thursday. And if you between now and then basically want to know what's, what's up, then check out the new Virginia community on discord or on the forum, as I said before, for any questions, anything, any open discussions, anything else where you would like to talk about and you have, you know, issues and want to discuss or network with Neo4j folks and experts. If you want to learn more on the Graph Academy, then please check that out as well. That's a really good place to learn about graph databases and also graph data science we have we have courses as well for this so you maybe it's a good combination to try out one of these courses on the graph academy and then also read the book from tomas and if you want to play with it you can use neo4j aura free which obviously gives you a database on the cloud but does not work with crop data science. So I think that's really a bit of the downside here, but you can use Sandbox instead as we discussed. And yeah, with that, thank you all for watching today. Thank you for participating. Thank you for your comments and your questions. I hope you fun um and i hope this was informative and interesting to you um if you want to know anything else then yeah reach out and yeah with that thank you very much and thank you tomas have a good rest of your day and see you soon bye thank you bye | Neo4j Live: Graph Algorithms for Data Science | 3,759 | Neo4j | 20240419 | Graphs are the natural way to represent and understand connected data. This book explores the most important algorithms and techniques for graphs in data science, with concrete advice on implementation and deployment. You don’t need any graph experience to start benefiting from this insightful guide. These powerful graph algorithms are explained in clear, jargon-free text and illustrations that makes them easy to apply to your own projects.
Guest: Tomaz Bratanic
Book: https://www.manning.com/books/graph-algorithms-for-data-science
35% off with code: au35bra
Blog: https://medium.com/@bratanic-tomaz
Neo4j Graph Data Science: https://bit.ly/45V0WMU
Sandbox https://dev.neo4j.com/try
Code Examples: https://github.com/tomasonjo/graphs-network-science/tree/main
0:00 Intro
15:20 Chapter Overview
24:25 Chapter 1
34:30 Graph Algorithms with Twitter Dataset
43:00 Relationships between entities ( co-occurrence)
52:10 Q&A
1:00:30 Wrap-Up
#neo4j #graphdatabase #datascience #algorithm #python | 2024-08-18T11:22:23.418043 |
https://www.youtube.com/watch?v=gV_rdZw5bDs | Hello everybody, good morning, good evening, good afternoon, and thank you for joining this live stream on Discovering New Virtual RDB with Michael and myself, Alexander. Hi, Michael, how's it going? Hey, Alex, good. Yeah, recovering from last week, I guess. and yeah no going well sun shining everything is good and yeah had a quiet weekend this time not doing much resting and getting out into the park a little bit and reading I'll need to finish my book to follow your recommendations then next yeah let me know if you like it. Yeah, I think I did do much yesterday. I spent another couple of hours yet another time in the train. So my Yeah, actually last night I kind of I fell asleep early and then I woke up at 1am or so so I started to read some stuff and started to set up local stable diffusion on my computer so I downloaded like 20 gigabytes of image models trained image models, and upscalers, and control net, and all the other stuff to get some of the examples running. And some of it worked. I wanted to do the QR code thing with creating nice pictures that still serve as a QR code. But I'm not there yet. So I'm still working on that. I still have a QR code, but I'm not there yet. So I'm still working on that. I'm glad because setting it up on, on iPad Silicon was a little bit more involved, but I got it to run so that all the pieces are in place. And now I have to understand a little bit what I'm doing. Like before there's so many options and so much that you can, you know, tune control set up in terms of the pipelines of generation and filtering and upscaling and applications of additional LoRa fine-tuning models and all the other stuff. So it's different to just prompt engineering with LLMs. So this image stuff is much more involved, I find. Because it's not just a prompt. It's also like a gazillion other parameters and other things to know. Yeah. So there are definite jobs there for people that understand all the bits and details and can generate reliably good AI images as much. Yeah, it's not so easy. I mean, you think you see all these great examples everywhere and you think, oh, this is easy. I can do the same. And then you get an empty prompt and then you type something and then say, oh, I want this and that. And then you get four images back or whatever. And you realize, actually, that's not what I envisioned. And then you add more details and you think about it. But you have to find the right prompts. You have to specify what you want. And then still, sometimes you get a hand with six fingers and confusingly. MARTIN SPLITTINGER KORNENBERG- Exactly. But I think Midjourney is already doing a really good job at making it really easy for users with the Discord bot. Because you can also see what others are doing and kind of see what kind of parameters they're using and what prompts they're using. I found, to be honest, I found the MidJourney Discord really relaxing and enjoyable. Unlike TikTok or Instagram, where it's so much just crazy ad style stuff and really exhausting. So it's actually really relaxing to just watch this scrolling on my journey before the new images pop up. And so I find it quite calming experience. It's quite nice. Yeah. Yeah. Cool. Yeah, maybe that's at some point when you're a little bit more knowledgeable about the thing that could be a topic. Yeah, that could be interesting to see. Just to demonstrate how to use it. Exactly how to guide him. Actually, I wonder, Alex, if you should do an LLM stream or so, right? Because Neo4j is now doing also internal work on integrating with Vertex AI, with OpenAI, also with Microsoft, to kind of highlight some of those things. Tomas did a bunch of things, and Noah, and Oscar, and John, and others. And I did a bunch of things as well. So it would be kind of nice to have a stream where we do LLM or ML integrations with knowledge graphs or so, and have a bunch of episodes on that. For each episode, cover one area or one project. Exactly. Cover a couple of topics, couple of areas and discuss a little bit of what's going on and how to use it. How to combine the tools, how to combine the technologies, right? Exactly. We showed a little bit with the graph GPT demo that Varun developed and with Jason, his mock graph data generator that also relies a little bit on that. But I think there's definitely... But yeah, let us know in the comments. So if you watch this and think, oh, LLMs, ML, all these topics are interesting. You want to know more how to combine them with graph technology or with Neo4j in particular, then that would give a good indication for us what we should go for. Yeah. Yeah, and we can also get people from the cloud architects team that did like the Microsoft and Google integrations as well. We did like APOC integrations for OpenAI and Vertex AI as well. So there's a lot of fun things that you can also try yourselves. So from that perspective, it's not just us demoing stuff, but it's also practical things that you can try yourself as well. Exactly. But it's like just us demoing stuff but it's also practical things that you can try yourself as well exactly but it's like you said your experience it is a little bit involved and you get lots of things you need to prepare and set up and then sometimes this puts people off at least I mean speaking of my experience if I need to install 20 different things and it's all very confusing I don't know what to do then maybe I think oh well maybe not. But if there's a guide in the other hand, yeah. Cool, yeah, I like that idea. Yeah, that sounds good. Good, all right, before we go into today's topic, notes 2023, a call for papers is still open. So if you go to either neo4j.com slash notes 23 or dev.neo4j.com slash notes 23, and let me put that also in the chat, there you can obviously submit your talk. So if you have a graph topic, if you have anything that is, we have identified a couple of key themes for this year. So visualization will be one, graph data science will be another one, and application development will be another one. If you have something on these three points, then definitely submit something. If you have something else, also submit something. If you have something else, also submit something. So no matter what, if you have something on LLMs, which is sort of GDS, sort of app development, it doesn't quite fit somewhere in particular, definitely submit. If you have something on anything else that is graph related, please also think about it. It's going to happen live. It's going to be fun. And it's in October. So we have plenty of time. But the Code for Paper ends this month. And it's also 24 hours, right? So that means if you're like in the Asia time zone, in APEC somewhere, and you want to submit something, or in Europe, or in the US, or in the Americas, we have like for all the different time slots, there's a lot of potential and possibilities. Of course, we have 24 hours of time to cover. So we are looking really for a lot of submissions. So please go ahead and submit. That would be really great. Yeah. And if you don't know if your topic fits or if you want to talk to us first before you submit, there's always options. So you can obviously reach out to us in the forum. There is a specific notes channel. There is also the Discord server is a specific notes channel. So you can reach out there or you can ping us. Just send us an email as well. Send us an email if you're not quite sure or if you have any other questions that the website does not answer, then reach out and let us know. But yeah, it would be great to see you there, see your presentations in October. Exactly. Or if you know someone whom you want to see to speak, so then remind them and nudge them into submitting, right? So sometimes we'll just need a little nudge. And the other thing is also we have both lightning talks as well as full talks. So if you or someone else wants to do a rather like only like a 15-minute talk on a very specific subject, then please do this as well. Besides the regular talks, you also have the option for lightning talks. Yeah. Super cool. Yeah, and today I think today we do a little bit different. So I think I'll skip this one. But just as a reminder for Neo4j, what is Neo4j RDB? It's a free version of Neo4j RDB, the graph database as a service offering from Neo4j. So you get, and maybe just to clarify, it's Neo4j is the software, it's also the name of the company, but RDB also runs Neo4j. So there's no difference really uh there's some difference of course but there's still you get a graph database um which you can use in the cloud um and there's some limitations to it for the free version which uh is mainly sizing so you get 200 000 nodes and 400 000 relationships which um the other, there's a paid tier, professional enterprise tier, which go away, take away these limitations. But generally, these are good enough for testing and for playing around with it. You get the new workspace, which includes Neo4j browser, data importer, and Bloom. And it's free forever. So as long as you like to keep your project, it will be yours. and it's free forever. So as long as you like to keep your project, it will be yours. The only thing that could happen is that there's potentially being paused for unuse, but you can always go into your dashboard and break it up again. And then it's there as you left it. Exactly. Exactly. Today we wanted to show a couple of the Neo4j guides that are now available from the Aura console and from the Aura dashboard, basically, in Workspace, where we had added a couple of things to make it easier for new people to learn and to get their journey going. So let me switch to here and let me call the fans out of the way yep here we go super and you are already logged in so yes exactly so uh really quickly so that we don't need to wait for the database to be created uh i already created a free database i called it Get Started and then just clicked on Open. And actually just today our team launched this new guide experience. We have a new open workspace. You get the guides presented. Can actually, oops. You can also get them back if they go away. There's this little tutor or like graduation hat icon on the top right that you can use i just wanted to yeah and that's new right so they used to be just the question actually that's new exactly so they used to be just a question mark item but now there's also this guides item which you can also always click to get them get them back right and some of you might remember that we already worked through the Getting Started Guide at some point, which basically goes through data import and querying and exploring the data really quickly, like really in 10 minutes total through everything. And now the new guides are more detailed on each of those. And I thought because it was just released today, I wanted to use the opportunity to showcase and demo it so that we can see what's in there and what's worth it. So we would start with the input guide and look at the Explore data and then querying at the end, right? So it should take us like 30 minutes roughly to get through this. And let's see how we go. OK. So if I am logged in, I have my guide, and I click on the Import Data Guide. I probably need to go back to the beginning. So each of these guides has a lot of text, so don't be afraid. But it explains kind of in detail what you're doing, how you're meant to process through the guide and so on, right? So this introduction talks a little bit about data importer. That's a simple, powerful tool to get your data into Neo4j to model and map existing CSV files. And you can basically configure all of this, right? And it also points back to the getting started guide if you haven't gone through this. It's usually worthwhile doing that as such. Okay, if I click next, it says to get started, download and unzip this file of CSV. So it's a zip file. And actually something nice in the new guides is they have these kind of underlined areas. And if you look at this, then it kind of highlights this in the UI. So the rest goes gray, and then it highlights this. So I can just hover over them. I don't even need to click. Then it basically highlights them. That's something that our colleague Adam actually added. So I only already downloaded these files. And why is it doing things? I basically just have these CSV files on my local machine and I just drag and drop them here into the UI. Right? Yes. drag and drop them here into the UI. Right? Yes. Okay. So now it kind of talks a little bit about the files, the CSV files and that they need to have a header and at least one row and what they look like. So this is kind of the order details file with order ID, product ID, unit price and so on. So what you would kind of expect from, as an export from a relation database, for instance. It also has this thing here that says, are you having trouble? These come up now and then in the guides where it basically allows you to just click a button to achieve the same thing that you do like interactively now, right? So which is also nice. So if you really struggle, you can just click this button and get the same thing that you do interactively now, which is also nice. So if you really struggle, you can just click this button and get the same thing as well. Yeah, if you're sitting maybe behind a firewall or something, you're not able to download stuff or something like that. You can do that. Exactly. So it talks a little bit about the CSV file. And basically, there's a product, and there's an order ID. An order and a product will be kind of our entities that we will create, and it contains relationship connecting them. And the product CSV and order CSVs is each of them. It's the products and the orders. That's kind of what it says down here. So now we're kind of getting started. It says click the Add Node button. This is this button here, right? Hello. There's also this one here, which it says add node label. So we can just click this and give it a label order. So we can either give it a label up here or we can just click into the node and call it order. And then it says map a file. So there's a file pop up. And there's if I click on this other thing, it says select this file. So I get again this highlight. So I say this is my orders file. Pick the orders file. And now we can use the map from file button, this one over there, to add order ID, order date, and required date. I'm not 100% sure why the required date is in there, but so be it. And so which gives us like these three properties now. And then it says also change, edit the name, order ID to ID, and change the type from integer to string. Okay, so we change the type to string. Can also make this panel a bit bigger. And here I can just call this ID. And so we have done the things that are in our to-do list here and then we're good. Usually in the basic just explains the initial names are taken from the CSV file columns and but you can adjust and change them as you want to. That's good. And now it's about setting an ID. So how do I find an order? How do I address an order? So that's done via the ID field. Again, we have this little ID box here, which should be highlighted, which is down here on the bottom of the right side. But for some reason, the highlighting doesn't work. It should actually. And it automatically, for us, it automatically sets the ID here because the ID has an ID in its name basically. But if your ID is something like SSN or SBN or something like that, then you would have to select the ID manually. And if there's an ID selected, there's also a green check mark on your note as well. So, and then it says, do the same with the product node. So we can click again on, at the note here, create a product node, map the file, and it says, map product ID and product name. Okay, map from file, product ID and product name. Okay, map from file, product ID, product name, and change this to ID again and change this to string again. Right. Okay, this ID is set as an ID and both have a green check mark, right? So all good. So Tarek is asking a question. If you are using Zona to ingest logs on-prem, how does this work with ODB? I'm hearing you have to turn on CloudWatch. What is possible? Is it possible without this? Aura has, I think they're currently working on a feature for lock shipping. Right now, Aura has a manual lock export, at least for the pro version. For enterprise, I think the lock shipping could already be in the direct, but you can just open an order support ticket and then they should open, should help you with this search in detail. Cool. So we created our product note. Yeah. So it should be good to go. Five steps already done. And now it's about the relationship, right? It says your data uh concerns orders and products and we contain have the contents relationship and for uh nodes we use camera case so if i click on these links it opens the wiki pages basically for camera case and snake case but i hope uh that we understand what it is right right? So camel case basically has capitalized first letters for multiple words even, and snake case has the capitalized all caps letters with underscores instead of spaces. So we now create a relationship. It says hover by the edge of the order node circle and you'll see a plus sign. So that's actually new. This, Alex, and I'm not sure if you've seen this before, so this is now a plus cursor here. And if you track this out, the relationship for the cursor is new. And then it says, in the mapping details panel, there's a type section, which is also currently not highlighted, but it's up here. So, and we should- It highlights the whole thing, right? Yeah, on the second, so the two actually, right? So here it highlights the whole thing. Ah, I see, okay. Actually, here it highlights the whole thing. And this should highlight actually just the thing up here, but that doesn't. Just the beginning, yeah, okay. Yeah, so, and then, yeah, exactly. So it currently doesn't have That's the beginning, yeah, okay. Yeah, so and then, yeah, exactly. So it currently doesn't have a green check mark, so we need to link the two first, but it's, I think, on the next page. And, but you can also delete it, so if you accidentally created something, then you can delete this here, and you can also reverse direction of the relationship with this button up here on the top right. Okay, next we're mapping files to relationships. We again, in our files panel, we pick in the file dropdown, why is this also not working? The order details one, and then we now see two ID columns here in this one. So we want to map order, meaning order ID from the file to product ID, product.id, which is product ID in the CSV file. So basically just pick from the CSV file whatever represents the order for the order side and whatever represents the product for the order side and whatever represents the product for the product side. And then it talks a little bit about additional craft model pin. Now everything has a green check mark, right? As you can see, even on the relationship, below the relationship, there's a green check mark. And if you look at the CSV file here, you see that there's also unit price quantity and discount in there. So which are good properties for the relationship. So we can add them to a relationship directly by again saying map from file and then pick these three relationships here, click on confirm, and then it automatically turns them into float integers and floats basically as types, right? So this looks good and everything is green. Oh, it also says when we map here from file, we don't need to include auto ID and product ID because they're already used for mapping the start and end connection, start and end node for the file. So just the other properties such. Cool. So we've done this. And then it talks a little bit about other stuff, which I kind of want to skip because it's a little bit confusing to talk other people and location things. It's basically for a CSV file that has everything in one CSV file as such. So perhaps you should do this. So if I click on this, it loads the new model. And then you need to basically create new relationships, like person lives in city. Looks in. And you see, because everything is already in one file here, there's only one file to select, so it auto selects the file. And it also automaps the start and end node for the relationships as well. So that's actually something that's in country. So it basically does this automatically. You don't need to do anything else. that's in country. So it basically does this automatically. You don't need to do anything. If everything is in one file, then you don't need to do additional work. So then Data Importer already is smart enough to know how things map together, right? So that's just a quick detour for denormalized files where you have everything in one file basically. Okay. The source, the target and the relationship. Okay, let's reload our Northwind model. It's really nice about these buttons, right? So you can just click the button and then it resets everything to what you want it to. Which is nice. everything to what you want it to, which is nice. And so we have what has to be mapped on a node, a label, an ID, and at least one property. And for relationship, it has to be type, start at node, that's it. Right, so and then it gets a green check mark, and then we can go ahead. Okay, in the next step, we look at a preview. So we have mapped our data, but do we know if we made any mistakes or so? Not really. So we see our model, but fortunately, there's this preview button here that we can also highlight. The preview button here that we can click, and it samples the first 300 lines of each file. And you can look into those and see, OK, is this mapping stuff correctly? Here, there's also a sidebar. I see orders and products. And I see also, are they correctly mapped as such? Do I have my relationship properties mapped as well? If I click on a relationship, and are my node properties for the order also really mapped correctly. So if I hover over these, it also maps in as daytime. Oh no, as string actually. As string, I think we didn't specify, did we? Yeah, we could have mapped them as daytime actually. Yeah. Actually, perhaps we should do this. We go in here and say this is a daytime. And we run our preview again. And hopefully, let's see. Yep. Now, because you know the t and the time zone in here. And if I hover over these, it now also says date time. See, we found mapping error. And so we can fix it directly. So that's actually quite nice. So you see the connections if they are correct and you also see the properties as well. Okay. And basically if there are any errors, then you would see them visually or it would also complain if they're mapping arrows, it would actually complain if you click run import. So you can run import from here, but you can also run import from the top. So here is on the top. I really like these highlights somehow. It's very satisfying. You can run import here, and then it basically says, run import successfully, compute it, or process these files, so many rows in the file, and because I already imported the data, it doesn't say notes created, but it says basically stuff updated, right? Properties set, and yeah. So we didn't add new data to database, but we just updated data that was already there. So you get these results, which you can also pull back with the show results here on top. So it explains a little bit about this, and also that if you run this multiple, no, that's actually on the next page, if you run this multiple times, then that's actually on the next page. If you run this multiple times, then that it doesn't destroy your data, but you can rerun this, right? So it also talks about a little bit on downloading your model and your model with data. So download model, downloads the JSON file for the mapping and download model with data as she downloads the zip file, where the CSVs and the mapping file are all together as such. Right. And then it has a little bit section here on when you outgrow data and put what do you want to do then? Like either bulk import for hundreds of millions or billions of rows in your CSV or connectors to connect to Kafka or Spark or other BI tools or something like that as well. And with apoc.load procedures, you can also import JSON or data from other databases and then so on. And then a bunch of FAQs. What is when there's an error about files missing here, when you refresh the page, then you just re-add them. How do I replace a file by just using the same name and dropping it in here, a different file? And then the database, the data input loads in, you can set the home database of the user, which then allows you to import it into that database for this user. And with this, we finished our first guide, the input guide. 10 minutes, exactly, right? So, yeah. And then the next one gets us to the explore guide. Right? So, and at the beginning to the explore guide. Right? So, and at the beginning of the explore guide, it basically talks about, let's load the Northern data set again. So we just run this import quickly and should be pretty quick. And then you can busy here, click on explore results, or it should take us automatically when I switch between the two, takes us actually to the explorer. So if I switch from here to here, it takes us to the explorer. And then it says basically when I opened this the first time, so because of my switching back and forth, that's kind of not what happened. But it has this show me a graph visualization, which basically just shows a example graph. And it also talks about, I'm not sure what's taking so long. I think I messed up something. The import is complete. The import is complete. Oh, yeah. You're right. Okay. So all this stuff is in here. Yeah. I think it's just some hiccup. Oh, there it is. Oh, here we go. I probably just forgot to press enter or something. Usually it's really quick. We can try this again. Clear scene. Show me. Oh, yeah. So it just was still the drop down and I just forgotten to press enter. So we could have waited for hours. So it shows you a first graph. In the legend panel, which is this panel on the right, it shows you how many graph in the legend panel, which is this panel on the right. It shows you how many nodes are in the scene. So you see there's 38 order nodes, 52 product nodes in the scene here. And there could be other nodes as well from our Northwind dataset. Right? This is just a random, yeah, sorry. This is just more or less a random snapshot of your graph. It didn't analyze it anyway. So you see a couple of nodes and edges. Actually, if you want to know what it is, there's a saved cipher for show me a graph. And then you can actually look it up. It's the first 100 patterns in the database, basically. So that's what it is. OK. On the left sidebar, we have the perspective for which kind of where you can control and select which things are in the scene, basically. You can add new ones. And on the top, you have the search bar that we just used for our sample here, but you can also use for other things, right? So let's look at the search bar a little bit. That's where the graph exploration starts. So you basically, if you have an empty scene here, then there's nothing there. And then you could say, I explore this. We already did, we showed me a graph, clear scene as well. So if you do right click here and clear scene, then it cleans out. And so this is only showing me a graph here. So we've already done. Okay, good. So now let's find the supplier supplies product, which is part of a category, right? So that's kind of part of our Northern model that we can fetch, right? So which is shown here in this. So this is easy to search because we will start just clicking on this and then we can basically find our supplier, then supplier to product and product to category. So we basically just clicked our tray together. Then I hit enter and then we see our graph, right? So the green nodes are categories. The blue ones are suppliers and the yellow ones are our products as such, right? And it says there's one part of key products that are disconnected from the others. This one down here. Can you tell which category it belongs to? What category is this? It's... Category is the green ones? Yeah, exactly. So it's dairy products. So there's also a little hint. So the dairy products form a subgraph of their own. So they have obviously not the same suppliers or very specific suppliers. I've worked actually in an IT project in dairy transport and they have very specific requirements for their tanks and tankers. And so you can't just put olive oil into a milk tank because they have very strict requirements. And so it's understandable that they only eat. JOHN MUELLER, RACHAEL POLLACK PORTER, Raw milk is very delicate. It goes off super quickly if you don't keep the right temperatures. So you have to pay attention. MARTIN SPLITT, So it's understandable that these suppliers are very specialized. So cheese and yogurt and milk and all the other stuff. Okay. So now we can do an even more specific search. Okay. Let's see. Specific values. You can also search for specific property values. For instance, we want to see which categories to have customers in London that order discontinuous products. Okay. Let's clear our scene again and see if we can get this together. So it says customer, customer, city, city, customer, London. Let me do this like this. City. I think, yeah. Looks like this, right? London, okay. We can also autocomplete. That's our start node. And then we can do discontinued true. So that would be products. Product, discontinued true. First from the city, we need to go first to the orders, right? Order. Order. Yeah. So product. Continued. True. Yes. Top. And then to category. So this is basically the same pattern that we have down here, just with different colors. Right? Yeah. OK, and if we run this, we see basically all the people, all the customers in London who ordered this continued product. So it could be customers that we need to notify, basically, that these products have ordered discontinued products. So it could be customers that we need to notify basically that these products have been discontinued and perhaps suggest some alternatives as such, right? Yeah. The products are yellow in this case. So you see the yellow products ordered by the pink, light pink nodes for the customers. And our question is when you run a search pattern, you can see that one category has more discontinued products ordered by customers in London. Others can you tell which one? And it's- Again, categories by green? Yeah, exactly. So it's meat or poultry, poultry. So meat and chicken. Yeah, exactly. So that's what we found, but there's no hint. We can't- There's no hint. We can't. So note inspector. So if you look at these notes, there's only the title available here in the visualization. But if I want to see more, there's a note inspector that's down here on the bottom. And we can basically double click on a note and it pops up in the node inspector. And then we can see all the properties. We can edit them as well. We can also edit the label. And so that's really nice. You can also add nodes and properties, right? So that's also possible as such. And we also see the data types here, like Boolean, string, and so on. You can also see the neighbors of a node, and also the relationship connecting this node to its neighbors as well. So that's more stuff. Okay, that was the node inspector. So try to bring the different categories into the data set. Okay. So we clear our scene. And can you remember how to pull out the different data sets of the categories into the data set? Click on category. Head enter. Just different categories. That's all that we need, right? Eight nodes. You should have eight category nodes visible. Context menu allows you to do things with them. So let's start with expand. So let's pick one of these category nodes and then there's this expand here. So we can expand all. You did a right click on a node. Yeah, exactly. So I right clicked on this node. So for context menu. So that's the part of relationship. If you zoom in here a little bit. I just expanded that. That was easy because categories have only one type of relationship. So you can also find the shortest path between two. That means we need to press, I think, command or control and click. So I have to click them both by holding down my command key here. And then in path, I can find shortest paths. And then we see here the shortest path between dairy product with mozzarella cheese uh an order and then they also were ordered the masala cheese was ordered with coffee uh which gets us into beverages right yeah so that's something the same order containing the two different products out of the two different categories. And I hope while you're watching this, you're not hungry. Actually, our job was to connect confections and condiments. So that's what we haven't done. So so we can basically find the shortest path for this one as well. And it's also the same order containing two different products out of two different Yeah, exactly. The order titled White Clover Markets. This is the order titled White Clover Markets. Now we probably need to find a customer for this order. So we expand to the customer. Oops, now we selected. Had too much selected. You did order. So unfortunately there used to be undo actually in Bloom, but I think they disabled it. It's true, there used to be. I just wanted to say maybe you can undo, but Yeah, there used to be undo, but for some reason it's not working anymore. So we can also just select all these order nodes by clicking here on this one and then just dismissing them. Dismiss. Right, and we can pick out two nodes again, get our shortest path back that we just had. And then actually, I wanted to just expand the relationship of not part of why order expand something. Something is wrong here. So I click on this order. First of all, why is it over there? Something is weird with the offset. I think it might be the guide that changes the offset, actually. Because also the context menu is so far.. Yeah, the context is a little bit far away. I was wondering. Yeah, it is. So I think it's the purchased relationship. Ah, exactly. So this gives us the customer. Okay. Good. What's next? The legend panel on the right side contains a list of elements on your data set. We already talked about this a little bit. And then we can bring additional notes into the scene and expand from the context menu menu so yeah i think this is meant to click on the categories here on the side so they're all selected and then expand them all as part of which should give us more products right so that's now like all the categories with their products. Try to change exactly styling for relationships and nodes. Try to change the color from the product nodes or play around with different sizes. So we can pick these product nodes, and you click on this yellow icon here, and I can change the size, make it bigger. I can change the color to light blue or something like that, or orange. Or even pick something else that you want to. Exactly, just want to say you can always find your color. Exactly, so nicer green, for instance. And then you can also change the text that's showing on a product note, for instance, you can say product name shown in note. And so it shows the product names here. I can also change the text size, interestingly. I think that's also a new feature. It could be new, yeah. I'm not sure. Text size gets bigger. Yeah, I think that's new. Interesting. So that's a little bit of styling. You can do rule-based styling as well. And rule-based styling is basically changing node colors or sizes by attributes. So in this case, from the list of available attributes, you want to see discontinued. And what does it want to do? Can also the size or the text. I think make them smaller or something, maybe. I just want to see what I want to change. I think this next step. I think this is just explaining because it says at the bottom in the next step you know how to use filters. Yeah. Discontinued and condition. Oh, when discontinued is true, we could for instance make the color a lighter green or something like that. Right? So then these nodes should have a lighter green. Or make them smaller. If you look now, there are certain nodes which are discontinued which have a light green. Yeah. Right? Okay. It says basically choose whatever you want to style. Yeah. MARTIN SPLITTMANNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN So for instance, you could say, and add another pricing, could be the order level, product ID, ID. Where is my unit price? So we can see this is unit price distribution. And for instance, we can make the size of the node based off the range, actually, of values. So I have a range of values. And then I want to apply the size based on the range. And we can say the really cheap ones, you make really small, and the really big ones, you make really small and the really big ones you make really big. And then we get like nodes, product nodes sized by unit price. So if you see the big ones are really expensive and the small ones are cheaper. Smaller ones are cheaper. Okay, the next one is filtering. How are we in time? Yeah, I think we need a little bit more, right? Because you also talk through it. You think 10 minutes is probably also a little bit optimistic with all the tech, to be honest. So let's assume you want to place an order of 100 items for a seafood category and only requirements, it's in seafood and there are 100 items available now. So let's clear the scene and say requirements, it's in seafood, and there are 100 items available now. So let's clear the scene and say category name, seafood. See if this works. Yep. We can expand all here. And then it wants to apply the filter. And we want to click on this filter icon, which it should actually highlight for us, this one here. And then we can say add a filter of product for units in stock. And then we see a distribution here, right? And we can say, we want to have at least a hundred which works I think okay with a slider, yeah. So all the ones that are not a hundred should actually be grayed out a little bit. Oh no, apply filter, then they're grayed out. So you see only three remain. And then you can actually, if you want to really get rid of them, then you can dismiss them. And now we could actually pick these three items and create a new order, for instance, from them. So you can actually program Bloom that it can actually also do real operations on the data. So you could actually make it interactive that I now say in scene actions, create order, and then it would pick the selected products and perhaps in selected customer and create an order for this customer for these products. So that's something that would be possible as such. Cool. That is filters. And the next one is the slicer, which is used for timeline views and things like that. So let's clear the scene. And what do we want to do? Bring out the products and the categories. So we say category, app, one, app, product. All the categories on all the products. And so you can use this basically both on daytime information, but also numerical values. So the slice is this icon over here. Let me just push over the other things. So I have a little bit more room and then the slice is this icon here. And for some reason, the same happens. It happened before Alex, but I'm not really sure how we fixed this the last time. But you need to do slice is showing on top. Did I reload last time? Do you remember? Don't you think we have to add a range? No, no. It showed up on top. It should not show up on top. It should show up on the bottom. Oh, yeah. I think we reloaded the scene that helps. OK, let's do this again. Product and no slicer. No, it shows up at the bottom. It's some UI arrow. OK, and we want to slice by unit price. So we just click on unit price. So we see the distribution from 260 from 2.5, basically. And now we can use the histogram here to show or hide parts of the graph. So I can basically say, I only want to have stuff that is up to 7, from 2.5 to 7 point something. So that's kind of our little slice. And if I move this over, then it basically shows all these nodes that are in the range here, right? I can also make it smaller again and so on. It also shows me for each of them how many nodes are visible as such, and also total number of nodes I think that are visible should also be somewhere we can't see it right now. And the other things that you can do, there's playback. So if you play this now, then it automatically moves over the range and shows the nodes in a, you know, how they appear and disappear based on the range changes as such. Over time, yeah. Exactly. So, and you can, there are also other play modes like expansion and so on. But let's finish this guide and look at our last guide, which is query fundamentals. So query guide should get us data port, Northern data set, and we go now to our query tab. Yeah. Right. So you always have to do the same, the presets with the Northern data set. So if you have done them, you can just continue with the guide, but if you just stump into this one, you will have to load the data before you can go further. Yeah, yeah, yeah. Cool. By the way, Tarek also asked, I've missed that question. How do you open a ticket with Aura? So you can basically, if you go to your instance, there's this support icon here that you can click and then, I'm sorry, the help icon which opens the support menu here and then that gets you to the support portal as such. Sorry, Tyrone. And then at the very top of the portal, it says create ticket. So you can almost not miss it. Well, it's like see the forest amongst all the trees, So you can kind of almost miss it. Well, it's like see the forest amongst all the trees, I guess it's like very big, but you might miss it if you don't know that it is there. Yeah. So let's have a look. Go to support and create new ticket. Yeah, exactly. and create new tickets here. Yeah, exactly. Okay, so now we're at our query tab for the last guide. And so what are we doing here? So here in the query tab, you have initially our database overview of what data is in the graph. So we have 2000 nodes, nodes, 7,000 relationships. So it talks a little bit about this. A node can have multiple labels, and this list shows all the labels in a database. And we can click any of these. So it says, click on a product node here. So it fetches us all the product nodes. So similar to what we had just in explore, we can quickly fetch this. Only now that we see also the underlying. Oh yeah, sorry. So now we can actually see the query, the underlying query that fetches the data. So Bloom runs basically the same query, but we don't see it. Okay, in the guide, what's also nice, you can actually run these guides, these queries also from the guide. So if you don't click here, for instance, then you can instead run them from here and it also runs these queries from the guide. That's relevant later. But I think what would be good if these icons would not overlap so much with the actual query text. Here, let me take a screenshot. This is not so helpful. Yeah. Okay, and what we can do is here, and you can also push the query in the self-editor. So if I click on this icon here, then it pushes this over here, and then we can actually read it, right? So this query is, we want to get products from the category with the name produce, and to return them, we assign it to a path variable, and return the path. So we can run this. We can run this by the way, either by clicking here on the little run button, on this one that says run button, or we can add a command or control enter, which I mostly do to run the queries. So these are all the products and produce and the relationships as well. So it has two views here, the graph view that we've just seen and that we've also seen in explore and then a table view which gives us a table representation of the data that we returned as such. What's important is relationships are only shown if you return either a path or you give this relationship a name and return this name of the relationship as well. So for instance, if I don't have to pass here, if I only return product and category, so you can see that you can also edit it here. Then it would only return the notes. So I need to also give this a name and then return rel. Then it would also return the relationships. Otherwise it only returns the products and categories and not the relationships. Or I use the path like we did here before. Or you do return star and it just returns everything. Only the things that have names, Alex. If I don't, even if I have P and C, because the relationship doesn't have a name, it only... Ah, yes, of course. Yeah. Thanks. Yeah. So put the rel here, then it would actually return the rel as well. Yeah. So you need to give it a name. Otherwise it's called anonymous. It's an anonymous part of the pattern and it won't be returned as such. Yeah. Yeah. Okay. Okay. Showing table results. So these are the graph results, and be returned as such. Yep, yep, okay. Okay, showing tabular results. So these are the graph results, which are automatically computed from the result of the query. And if I want to also see tabular results, I could do this by returning scalar information, like attributes, for instance, or aggregations, right? So if I push this over so that if you can read it, it basically has the same pattern that we just had, just it returns product name and category name, right? So, and then if you're on this, then we get a tabular representation of the data. So we have, yeah. Then we can basically use this to look at these specific properties, right? Or if I, for's also a turn, let's say what was it unit price on p.price and you can also see this as well. No, I don't know if you go there, but you know, order by unit price descending, for example, so you can see a list of how much, what costs what. Exactly. Order by p.unit price descending, and then order in this order from 53 to 10. Yeah. So dried apple is more expensive than tofu. Interesting. Yeah. And dried apples are actually more expensive than tofu. Interesting. And dried apples are actually more expensive than dried pears, than organic dried pears. GERALD NEUMANN. MARTIN SPLITT. Interesting. And tofu and long-life tofu is cheaper than sauerkraut. An important thing here in the query is we are here using now variables like P and C because for the product and the category because we use them later, right? So we use the C here for accessing the category and P and C for returning data as well. So that's why we need to name them. If I forget to name this, then it just complains that the variable is not known. So then I must not forget to put it in. And then it also says that if you prefix our query with profile or explain, then we can also see besides the table, what's our query plan. So it shows us basically how the database executed this query as well. Right, so which is also interesting from that perspective. If I want to kind of figure out if I want to optimize or exactly. But it's good to know that this exists. And the raw option is basically just unlike the raw request response. It's mostly not so necessary. So you can also see a schema, your data, not just in the kind of this structure, but also in the visual structure, what does it look like? So you basically have this information in here, in the database that you can do with quality B-Scan visualization to see what this data model looks like. And if you want to look a little bit at the different companies supplying Northwind products, we want to see which suppliers suppliers supply products from which category. So we basically spell out this pattern, supplier to product to category, similar like we did before in explore. And then we return the company name and the categories. And here we use an aggregation where we say, collect all the distinct category names into a list. And if you return this, then this gives us for each supplier company, basically the list of category names that they cover as such, right? And we saw there are some suppliers who only have dairy products. We've seen this before, right? So, which is interesting. Okay. Okay. Yep. So part of a category returns the company names, the supply, even though you specify the product node in a match query, no information about the product node is returned because it's not part of the return, right? So we have the product node here, but we don't do anything with it. We just skip over it. We just have it as connecting the supplier and the category. And perhaps you're only interested in the supplier. Yeah, exactly. So perhaps you're interested in only suppliers of a certain category. Then you basically would say, I change the category name here. It's a little bit to what we had in the beginning. And then we could say, this is basically the suppliers for this category as such, right? And then we could order them or whatever we want to do. So, and then the last two queries, or last three queries are a little bit more complex, a little bit over time. So we can either keep this as homework for folks or quickly go over that, which. I don't know. And then we can do one. Okay, then let's do this one because that's a recommendation query. Yeah. So I mean, the text also explains what it does. Basically it looks as customers that order the same product. So we kind of say these customers have the same ordering behavior. So we have customer one and customer two and we draw out the pattern via order to product and it's the same product in the middle, right? So customers that frequently order the same products. It's the same kind of restaurant, for instance, right? So it's perhaps like different dairy shops or so that are similar, right? And then we don't want to have the same customer pair twice. And it kind of counts how often did this happen and the customer and the products. And then some of these occurrences for our top five products we pull out and then we can actually see these two companies have a similarity of 184, so 184 times they order the same product. And these are the top products they basically had in their joint basket as such most frequently. And this kind of now allows us in the next step to create a similarity graph between these customers. And then we can cluster them and we can identify basic other attributes that make sense to, for instance, recommend things that your peer group has bought, like people in your own cluster have bought, but that you have not bought yet, right? So something to explore new opportunities. So it goes a little bit deeper into these kind of queries. these kind of queries. And with that, we actually took the whole time. So I think talking through them, but also in general, I think 10 minutes are a little bit too little for these queries, for these guides. So I think you should actually think about 15 to 20 minutes each. Yeah. If you want to understand what's happening. Exactly. And play around. That's right. But we have full completion, right? So you see 100% completed on all three. So that's good. Check. We got to the last page for all of them. So we did our gamification, got other points, and all the loot along the way. Yeah. And so I think it's a good feature that has been launched today. Yeah, absolutely. Lots of people will find this useful in getting more familiar with the Neo4j user tools and learn how to do things there as well. And it helps a lot. So if you are interested in that, then check it out. If you are just generally learning more then always there's the Graph Academy, which is like a good upgrade or next step from there. Lots of many more guides from fundamentals to data import, data modeling, and then some language specific training courses, JavaScript, for example, or Python, where you can then, depending on what kind of environment you're working with or what you want to learn how to use Neo4j with, then this is, I think, a good next step to continue the guides, or you just, you know, take your own data sets, play around with it, learn yourself and trial and error is also working. Exactly. And Ashwar asked about how he can show his project in Discord as actually enjoy your project channel. And I think that would be a good place to show off projects initially that you built so that others can have a look and also get some feedback from other folks in the community as well yeah I think that's great yeah and if you have something that's a little bit more ripe then you can obviously also submit to notes and you can show it off and present what you did when you feel like this is ready for an audience exactly yeah thank you for watching super thank you michael thank you all the guys are cool thank you alex that's very very helpful and uh yeah i think um yeah i see you soon uh on the live stream. We have something happening this week on Thursday, actually, on the 15th. Will and I, we talk about geospatial data, and my colleague Sid is doing some learning sessions. They will take place I think 14th, so in two days. APEC, I think 14th so in two days APEC there is 14th APEC training course knowledge graphs with chat GPT so if that is working for your time zone then hopefully we'll see you there and yeah that's the geospatial data on Thursday and yeah with that I wish you all a great day good have a good week. Thank you so much. I hope you got through that nicely and I'll see you soon. Thank you, everybody. See you soon. Take care. Bye-bye. | Neo4j User Guides - Discovering Neo4j AuraDB Free with Michael and Alexander | 4,096 | Neo4j | 20230613 | Neo4j AuraDB Free: http://dev.neo4j.com/discover-aura
Catch-up on missed episodes from the repo!
https://github.com/neo4j-examples/discoveraurafree
0:00 Welcome
11:55 Aura User Guides
13:30 Import Data
29:50 Explore Data
52:15 Query Fundamentals
1:04:45 Summary & WrapUp
Michael: https://twitter.com/mesirii/
Alexander: https://twitter.com/alexandererdl
All previous episodes: https://neo4j.com/video/discover-aura-free-with-fun-datasets/
Check out https://community.neo4j.com/ for questions and discussions around Neo4j
#cloud #neo4j #graphdatabase #datasets #guide #tutorial | 2024-08-18T21:06:34.242716 |
https://www.youtube.com/watch?v=pI9H7Y4h_Sw | Hello everyone, I'm Emre. I live in Istanbul, Turkey. I'm a data scientist in Jon Snow Labs' healthcare team. We will talk about how to create a clinical knowledge graph by Spark NLP and Neo4j. Here is the agenda. I'll start explaining Spark NLP at first and then we will continue with Spark NLP in healthcare, how it works and the pillars of it. Next step will be our other main component to create a knowledge graph. Its name is Neo4j. It's a graph database. And then I will jump into clinical knowledge graph side. I mean, what should be the properties and features of the clinical knowledge graph and the points to be considered by the practitioners. I will share some of my experiences about knowledge graphs. And then here is the most important part of this talk, live demo, how we can create clinical knowledge graph by spark NLP and NEL4J. This will be hands-on part. So spark NLP is an open source library released in 2017 and right now it has around 40k daily downloads. Our monthly download is about 1.2 million and totally we hit the 10 million already. We support four languages, Python, R, Java and Scala. The goal was to be able to create a single unified NLP library that would require no other dependencies other than Spark itself and it should also work on clusters. By the way, there was no other library that could support to run on Spark clusters. We also wanted to take advantage of the transfer learning and implementing the latest and greatest state-of-the-art algorithms. So what are the modules of Spark NLP? We have public and enterprise versions. Public is totally free and the healthcare models are licensed one. We also have another library, which is called Spark OCR. As you know, mostly data entry starts from PDFs or images when it comes to clinical text. So EHRs, electronic healthcare records, or medical records usually go through Spark OCR for that and I'm not for that but I'm not cover that today. On the left hand side you see public modules. We are supporting more than 200 languages by using the same NER architecture which is another nice point to underline, because we can switch one language to another easily, as long as we have the relevant word embeddings or word coverage to accomplish certain tasks in LP. We have some deep learning based emotion detection, text classification algorithms or pre-trained models that you can just plug and play. So instead of trying to build your own pipeline you can just download and use the pre-trained pipelines and then just feed your data frame so you will end up with the latest like state-of-the-art resource using which were various word embeddings including glow and the latest cookies in a happy school kids in NLP Town, Bert, Elmo, Albert, so on and so forth. So we all support them and you can just use any model like a building block and like a plug and play. On the right side you see clinical versions. Like public models, we have four main clinical models. We have more than that, but let's just cover four of them. Clinical NER, extracting clinical entities, and we have around more than 50 different NER models pertained. Clinical NER models, which means like the pertained on certain clinical NLP data sets that are being used in clinical NLP challenges like I2B2 or N2C2 data sets. And we named them like anatomy NER, PHA NER, Pathology NER, GSL NER, so on and so forth. To extract the meaningful chunk given the task, we use these NER models. And on the other hand, clinical entity linking is assigning some SNOMED, ARICSTORM, ICD-10 and the LOINC codes to these entities detect through NER. So the assertion a very important model in the healthcare domain. It's like if the clinical note is talking about the patient's father, not the patient itself, we need to know that information so that we can assign different features. I mean, maybe we will create different features accordingly, or if it is not that, we can just filter out those symptoms maybe. For the identification, we remove or mask the insensitive information according to HIPAA rules. We need to hide or conceal some of the information and these are called the sensitive information and we need to de-identify or obfuscate them. Obfuscate means like replacing with the fake names or entities. So, we will also have some, we also have some pre-trained models that you can plug and play and combine with the public models. Okay. Next slide. We keep Spark NLP up to date and upgrade in every two weeks with releases. We have released more than 75 times. The main focus is here to have a single unified library for all NLP or NLU tasks, NLU needs. We are using Slack actively and also on the right side you see some comparisons between the NLP libraries with ours like spaCy, NLTK, Core NLP and Hugging Face. So let me show you a sample output of the Spark NLP model, Spark NLP pipeline. As you see, we can extract NERs, age, entity, gender entity, clinical finding entity, relative date entity, and dosage of the drug, and the frequency of the drug. the date entity and dosage of the drug and the frequency of the drug. And as you see, we can not just extracting these NERs, we can also map the codes of this some entities like clinical finding and the assertion status here, past SNOMED, ICD-10 and UMLS codes here. And also you see some drug related codes, ARIXNORM, NDC and we can calculate ACC risk adjustment score. These are the out-of-the-box results of structure of the healthcare NLP pipelines. First of all, everything starts with NER because it is the minimum meaningful chunk of any clinical text. The importance of NER might not be that much in other domain, but in healthcare, that's very important. Everything builds upon clinical NER models. NER means, as you know, the name entity recognition. And we have many others that, I mean, we have many others that have nothing to do with NER, but when it comes to extract some knowledge to get a sense of what's going on inside the document, inside the clinical document, we start with clinical NER model and then use the other models like I mean did the other models use the output of NERs as an input. What are these downstream models? These are clinical entity linking, assertion status, the identification and relation extraction. For example, assertion status model assigns some status if the favor and sort route is a problem entity that coming from NER. And association starts to try to find if it's absent or present. I mean, given the text and given the sentence that chunk lives in, NER chunk lives in. And if we can give another example, relation extraction models try to find relation between these entities okay this can be the temporal relation or the cause and effect like cause and effect relations okay so this is the why NER is very important and highly valuable in healthcare domain. Here is another example, and I just want to mention this. I mean, it seems to me annotating the data and NER are the brain and heart of the NLP body. And they manage every other tasks and subtasks. Recognition of name entities is basically the classification of tokens. NER tries to locate and classify predefined categories such as persons, locations, organizations, I mean hospitals, medical centers, measurements, so on and so forth. We use NER to downstream the related tasks to answer real-world questions like what? Which hospital and department have been admitted by the patient? Which clinical tests have been applied to this patient? And what are the results? What are the test results, which medication or procedure has been started and when. Relation extraction is another very important part of the NLP and we will mostly use this relation extraction to create knowledge graphs. And I mean, it is the task of predicting semantic relationships from the text. Relationships usually occur between NER chunks. occur between NER chunks and it is the core component to build relational knowledge graphs. I mean, the role of this relation extraction in NLP applications are question answering and summarizing and et cetera. are question answering and summarizing and etc. Clinical NER plays a key role in clinical NLP tasks to extract information from healthcare reports. You can use it for multiple purposes such as detecting temporal relationships between clinical events to drug-to-drug interactions, and the relation between medical problems to treatment or medication interactions, and so on and so forth. I will not go deeper into the importance of the relation extraction medical studies, but it is not less important than the ners so uh putting the clinical facts on a timeline this is this is the very big challenge in uh in lp world and the i will i will share some of my ideas about this task. I mean, let me talk about a little bit this issue. The automatic extraction of a patient's natural history from clinical texts or EHRs is a critical step. It helps to build intelligent systems that can reason about clinical variables and support decision-making. Any intelligence system should be able to extract medical concepts, data expressions, temporal relations and the order of temporal, I mean, the temporal ordering of medical events from the clinical text. But this task is, as I said before, is a challenge. I mean, it's very hard to tackle due to the domain-specific nature of clinical data, due to the domain-specific nature of clinical data, such as writing quality and the lack of structure and the presence of redundant information more generally. I think that we can handle this task by combining some rule-based methods, unified NLP library like Spark NLP, and the power of knowledge graph. To my mind, this combination is the most appropriate and the quickest way to put the clinical facts on a timeline. to put the clinical facts on a timeline. So let me give you some brief introduction to Neo4j. As you see, Emre likes Neo4j. As you see, I mean, it consists of the Neo4j. I mean, the graph databases consists of nodes and relationships and nodes and edges. Edges keep the relationship on it and they connect the nodes according to this relationship. Neo4j is a highly scalable native graph database and the core belief behind it, and the core belief behind it, connections between data are as important as the data itself. This is very nice, this is a very important point. The connections between data are as important as the data itself. I'm pretty sure that using the connections between data will create a competitive advantage to produce actionable insights in healthcare. So you see some features and properties of the graph database and the as you see it's nodes and mainly it's nodes and edges i just keep the relationships on it and the and connect the nodes and you you can traverse on the nodes and the edges according to your query so you see another, I mean the other kind of features of the Neo4j, it provides SSTX and it can be, and you can get license like MySQL, MongoDB, and you can run on it HeroQ, so on and so forth. I mean, you can get much more detailed information from the official website of Neo4j. And Cypher. Cypher is Neo4j's GraphQL language. It is a declarative pattern matching language. The declarative meaning you just tell it what you want, not how to get it. This is the same as SQL. As you see, it's also designed for graphs. This is not SQL, but this is cypher query languages okay sequel when we talk about the knowledge graphs or the man we talk about the graph databases it's all about patterns I mean you I mean you you just traverse according to your patterns on the on the graph database okay and as you see i mean the there is a two notes here and one relationship and this is the actor acted in this movie okay and this is the example of the uh cypher query uh the movie database. I mean match actor, the relationship kind type is acted in and the movie and you will return, you will get the name of the actor and the roles of the I mean the roles in the movie and title of the movie. roles in the movie and title of the movie. When we start talking about the knowledge graphs, here is important distinction. As you see, these are not graphs, these are charts. Okay? I mean, it's good to start using the right terminology because these are not graphs. These are just charts. Graph database is different from what these are. So, the purpose of the clinical knowledge graph, what should be? We need to create data points which accurately represents the patient history rather than creating a patient history which contains a variety of data points. This is not easy and because healthcare data are complex as I said before exist in multiple places the redundancy and structured and unstructured, and that you can find very, I mean, inconsistent definitions in the same text, so on and so forth. Okay, but the aim of the knowledge graph is, I mean, represent the patient's history, I mean, accurately represent the patient's history. And so it should be a representation and structure and a method, and it should organize the entire body of the clinical knowledge and also it should contain high relevancy in clinical data and it should be universally accessible. So this is the illustration of one of my conceptual graph models and here is the tip of the day especially for clinical ones before creating a knowledge graph you should spend some time working on conceptual graph models this is my experience otherwise it is very easy to create graphs that are irrelevant to your purpose. I mean, it will be completely a mess otherwise. Because it is very easy to connect the nodes to each other. After creating the knowledge graph, you should validate the graph by querying and cross-checking with the raw data, like an input and output analysis. And here is another example of the knowledge graph. As you see we just feed this text to Spark NLP and we create this knowledge graph. graph. It has like six echelons right here. It is not easy, but you can create a clinical knowledge graph using Spark and AlpineL4J. It's a good combination. So we end up with the presentation and let me jump into the live demo. First of all, I have created a notebook for you. This is the Spark NLP. I mean, this is Spark NLP and Neo4j notebook. How we can create, I mean, to show you how we can create a knowledge graph, clinical knowledge graph using Spark NLP and Neo4j. First of all, you can get Spark NLP free trial license from our website. And you can just try, I mean, shot for this for this task I think first of all you will just upload your license keys like this And after that, you will install some libraries, our Spark NLP library. I mean, the public version and the GSL and licensed version. PySpark 3.1.2, we are still using this version. And also, Spark NLP display library, if you want to display some visualizations uh i mean they are i mean using the output of the uh pipeline and then you will start the spark nlp gsl okay i'm using the 3.3.2 versions of the public one and the licensed one. And here is our pipeline. I mean, this is the relation extraction pipeline. This is post-logical relation extraction pipeline because we will feed the text about the drugs, okay? We have documenter. This is document assembler and sentence detector. We will apply tokenizer and we will get the word embeddings. And after that, we will get part of speech tags. And after that, we will apply medical NER converter to merge the NER tags, to create NER chunks. And we will apply dependency parser. And after that, lastly, we will use postulation relation extraction model to create the semantic relationship between the NER chunks. So let me run it. You can also reach this notebook from our website. I mean, the GitHub repo under the work I mean under the certification workshops okay and this is our helper function and this is our text While these are running, let me show you what is Neo4j and how can you use it, how can you interact with the Spark and LPN Neo4j at the same time. Okay, first of all, you can go to neo4j.com slash sandbox. You can launch a free sandbox and you will get this page and after that you will just select the blank sandbox it means an empty graph database and you can create it in a minute it's very short and as you see it's ready and you have three days and you can extend it uh for time for additional uh seven days and you will see some connection details here and we will use them okay so uh let me let me install some uh libraries to uh interact with the neo4j 4G graph database. I mean with these sandbox. This one and this is the helper function. Another helpful function and the update data is another helper function. And this is one of our main function and I will just talk about with this daily and you will see some I mean credentials here first of all you and your eye here is the URL okay you will just copy and paste all these all these stuffs from the right side to left side it will just copy and paste here. And here is the password. And the username is Neo4j. And when we run this code snippet, we will create a connection between this notebook and the Neo4j graph database. So what is this function what what this function do first of all check the uh check our relation extraction results okay here uh you will see some columns here relation and to one and to one begin and end uh chunk one and two and the others and you will see some confidence here so you can see that the chunk one is a relationship between the chunk tool in this manner okay and also chunk one is a duration chunk two is a drug I mean we will create a relationship between 5-day and Amok Selim using this function. Add NURs and relations. Okay and you will see some batch size here I mean it is for the efficiency and you can just walk on the I mean you can just I mean elaborate on this part okay later and the you can here is the here is the our main function here to create this relationship notes okay and we will create chunk two also as notes and we will create a relationship between them and this will be the edge okay we are connecting node one to not to okay so let me let me run it it's ready and we can go down first of all I highly suggest you to drop all constraints and create your own constraints at the beginning of this task because otherwise it will be a create and it will be a completely a mess. I mean, it can be a mess in a graph database site and you can just run this part and you can delete all the nodes, drop all the nodes. And we will create a constraint here. Constraint means means I mean just just to assure about the each any are is created only once I mean they are unique okay they will be unique at this part and I will feed relation DF here this this the data frame so this function and it's done in just in just in four seconds okay and we will see this not and edges if we try to visualize the graph database and let me run some queries on it these are the old nodes and the relationship relationship between them okay and here is the another another query I mean the node will be the mode one will be the add bill I mean add will related those its duration so on so forth okay okay as you see this patient used the advil I mean the 12 units sometimes 40 units for five days and one unit okay and you can just walk through the graph database according to your cipher query this is the cipher query and you can just spend some time about this visually visualizations okay and this is another one it's another one another one so I'm so sure okay let me show you the the visualize the graph you will just hit here the visualization visualization part is opening. By the way, while it's firing up, let me just show you another example. And I will not go through this example today, but you can just reach out from our GitHub repo. The first example is about the health care library. And the second example is the public one, the free one. You can also use graph extraction here, graph extraction annotator in SparkyNAPI to create graphs. Like what? Like this. The person, Jon Snow, was born in England and Jon Snow lives in New York. You can just find everything here like this. You can just try this part by yourself and you will see this graph after running this code snippets. Let me just visualize the graph database and we can just finish this webinar. Right. Here, as you see all the nodes and the relationships between nodes has been created and you can just visualize them using Neo4j. Thank you for listening. We can go through the Q&A part. | Creating a Clinical Knowledge Graph with Spark NLP and Neo4j - Webinar | 2,101 | John Snow Labs | 20211118 | The knowledge graph represents a collection of connected entities and their relations. A knowledge graph that is fueled by machine learning utilizes natural language processing to construct a comprehensive and semantic view of the entities. A complete knowledge graph allows answering and search systems to retrieve answers to given queries. In this study, we built a knowledge graph using Spark NLP models and Neo4j. The marriage of Spark NLP and Neo4j is very promising for creating clinical knowledge graphs to do a deeper analysis, Q&A tasks, and get insights. | 2024-08-24T09:20:56.915766 |
https://www.youtube.com/watch?v=k_ztCgQVcxw | So good afternoon everyone. It's a pleasure to talk with you today and thank you very much for your time to listen to a very important topic for myself and that is how generative AI knowledge graphs are going to transform clinical trial programming. programming. So just to let everyone know that I'm not incentivized in any way with the work that we've been doing with Virizion and that the work is covered with my employer under an NDA. And also that the views and opinions in this presentation are solely those of my own and don't necessarily represent those of my employer. Here's an overview of what I'll be presenting to you today. I'll be giving you my thoughts and opinions on all of these aspects around the use of this technology in clinical stat programming. And we'll go through three major sections, provide an introduction for you, and then see how the challenges of our industry bridge into the topic of today, which is developing our future systems while we're, so to speak, driving the car. So I think it's important to consider on a very fundamental level the reason that we're here today, and that is around the topic of communication. If you think about what we do for clinical step programming, it really is about attempting to communicate the outcomes of our clinical trials. outcomes of our clinical trials. So that is quantitative measures of our medicines and therapies to communicate the safety and efficacy measurements that we've observed and collected from patients and to communicate those to third parties like regulatory authorities and other stakeholders. And we do that for the evaluation of the health value of our therapies and medicines. And ultimately, what I'm seeking in this presentation is to change for the better how we do this. So we or I have some ideas on how this can be done more effectively. And I hope in sharing these with you, it might also not only give you more to work with, but you might have some ideas to share back. And ultimately, in some kind of exchange and working together, we might be able to improve and change for the better how we're communicating our clinical data to regulatory authorities for the benefit of patients. So please keep that in mind throughout the rest of what I'm sharing with you. It's nice to make a presentation a little bit more personal to identify really with the people doing the work. Work is only done by people. And so if we can gain some sympathy with the challenges that they face, then we might be able to more effectively help them. So I'm introducing a fictional character here, Surya. She's a statistical programming analyst and her team, they write SAS code to get their work done. So they're all SAS experts and they manage large quantities of code each and every day. So yeah, a so-called mountain of SAS code is something that they need to manage, maintain, modify, create, and so forth on a daily basis. So, they do this to deliver or to create the tables, listings, and figures, which collectively support everything that's contained in the clinical study report. So this is the goal of Surya and her team. Now, the challenges that Surya faced in 2024 seem to be quite similar to the challenges that she faced in 1994. And I'm making this statement based on the fact that the technology has not changed much over the last 20 or 30 years. Data storage hasn't evolved much further than tables held in SAS 7 BDAT files. And then these files are stored on a file system in some kind of structured directory hierarchy. And on a daily basis, this is time-consuming and labor-intensive to work with already at the very outset. Now, what Syria needs to do with her data, the transformation from the collected data that comes from sites through to table listings and figures and onto the regulatory authorities, this all takes place using a statistical programming language and a proprietary one at that, most commonly still today, the SAS programming language. So that's labor intensive and time intensive and requires specialized training. The way that data storage is managed and the way that the data are transformed have not evolved either in the decades since these processes began. So this is usually performed by one person. They'll be contributing to a study, and they'll undertake these processes alone. And on top of that, a second programmer will undertake what's called double programming, that is, they will reprogram from scratch any new transformations which need to be done. And then the two results are compared. And if they're the same, then the code is considered to be correct and the program is considered to do what it's expected to do. Of course, it could be that they've both made precisely the same mistake and it may not be so unreasonable to consider this outcome. This is is labor intensive, and it's not a process which allows change and innovation. So going further on the point C, the solo programmer, it brings about a mindset of siloing everything. So working on everything alone, keeping all of the data carefully disconnected from other systems and other people. And I think to some degree, this is reasonable in the sense that working with Working with patient data does require a lot of care, given that it represents personal measurements of people's personal, most personal and intimate information, that is their health. And so it brings about, though, a mindset from the very fundament of the process of keeping systems disconnected and not allowing an open way of exchanging, collaborating and developing more effective ways to perform the work that we perform. that we perform. So these are the challenges which hinder Surya and her team from innovating, working more effectively, and working more quickly, ultimately to bring therapies to patients more effectively, in a more trustful way, and more quickly. effectively in a more trustful way and more quickly. So since entering the pharma industry what surprised me was that the tech world and the software development industry have some great tools and products in place to develop their code very effectively. And they've been developing these tools for decades now. And the most obvious thing to me was version control. I did not see really any awareness around committing code at certain states and forking code to develop features and merging back after code review these features back into the production branch. So the fact that programmers are working solo and not having any awareness of version control was really just such an obvious win to me that could be gained here and today. And the same also applies to data. Although this is a more recent development, the ability to version databases is something which I think would also very much level up how Surya and her team can work effectively knowing which version of the data that their code is running on and what the providence of that particular snapshot of the data is would be tremendously helpful for ensuring quality and ensuring quality and truthfulness about specific code and data combinations. Agile development methods. The ability for teams to collaborate, so build off each other's ideas, critically assess other people's ideas and review the code of our colleagues, provide suggestions and iteration. So to continually redo what has been done to improve it. This is really the foundation of modern software development and something which I think is sorely lacking. And I'm a huge fan of the agile development methodologies, stand-ups and sprint cycles, code reviews, all of these things, I think offer a lot of benefits to simply doing better what is already being performed by Surya and her team. is already being performed by Surya and her team. And the last point here is to turn back the tide of siloing of all aspects of our processes and to move to an, well, I call it an open-ish source model. model, certain aspects of the work that we do needs to remain hidden from many stakeholders like the compound structures that we work with. But the tools that we use, so the programming languages and the processes that we use these, we don't make money from those specific things. So it doesn't make sense to hide them, but rather to democratize them and to collectively improve the tools and processes that we work with to get our tables, listings, and figures and other documents to the regulatory authorities. I make just a mention here that blockchain has been considered to democratize the data, patient data that we work with. So this would in a trustless way, allow regulators to gain value to patient data and at the same time allow patients to control or have more granular control in who might be able to use their data and for what purpose they might be able to use their data. Okay, so even though there are some steep challenges to Surya's work, and despite the existence for some time now of many tools and processes used by other industries, How are we going to get from where we stand today to where we would like to be in the future where we can be much faster in supporting the requirements of patients and deliver what they need in a more trustful and higher quality way? way. We need to consider what's holding us back and the incumbents of our industry. So these are legacy technology companies and infrastructure. These are a tremendous hurdle. Hundreds of millions of dollars have been invested to develop and maintain the infrastructure that we have today. And the companies and people who have done this have a vested interest to keep doing it. And so there's considerable resistance by these parties to moving things forward. by these parties to moving things forward. And we need to take this on board if we're going to attempt to make changes to what we have today. So the mindset that exists is what I consider a closed source mindset. And this is largely the way that people think. And people, oftentimes, again, they're not given to change. And with the great benefits from the profits of the industry, a lot of managers are quite comfortable doing things the way they've been doing them. And would rather avoid risk and just keep doing things the way that they've been doing them in the past. So mindset is contributing a lot there. In that regard, generation change is limiting us. So our rate of change at the moment seems to match the rate of our species producing new generations. It's really only the younger talent now that come with new ideas that are really motivated to bring change. And so this is also something we can't wait another 20 years to bring about the changes which are ready today. And so let's be mindful of that. The code bases that we have in place, so referring to all of the SAS code that Siri and her team have written for decades now, these are astonishingly complex. So we have many macros and nested macros, which obfuscate or hide much of the code that is actually executed for a specific task. And trying to untangle these interdependencies is really an enormous task, even for those who are deeply familiar with the study, let alone anyone who might have joined a serious team recently, or might be going back and looking over their old code from five years ago. So this just creates a huge inertial resistance to seeing how we can change what we're doing today. So lastly is motivation. And I think I've always mentioned to my children that if they're not, if they tell me they're not capable of doing something, I usually challenge them by saying that they're not motivated enough to change something. And Tony Robbins, I think the quote from him, that change happens when the pain of staying the same is greater than the pain of change, I think is something important to keep in mind here. How motivated are we? Well, I think if you're watching this, I think you are motivated to make that change. So let's dive in and see where we can go. I also have a quote here from one of my favorite stoic philosophers, and that is simply to accept what we have and be willing to have and be willing to commit to what's needed to change. And so no matter how motivated you are, we need people who can commit the time and the money and the people to make the changes that are needed, because it will take considerable resources to make the changes that stand in front of us. So we need a way to move forward, which is achievable and which is going to bring value to us. So I'm proposing that if we can get a handle on our data and code using a semantic data model to create knowledge graphs, this will be a huge step forward for the industry. Note that I've highlighted overlay because, as I'll show you in a moment, this is the approach that's been taken most recently. in a moment. This is the approach that's been taken most recently. But just to mention that data and code really in the end, in its essence, is ETL or extraction, transformation, and loading. We could be doing this much better. At the moment, we're using SAS. We're writing SAS programs to do the extraction, transforming, and, so to speak, loading between capture of the data from sites through to the tables, listings, and figures, or TLFs. But we could be doing that much better. Now, previous efforts or efforts until today to build the ontology that will be needed for a knowledge graph have been top down. So standards around the data models and the transformations that should be made, These are all written by people for people and handed down to programmers to then implement. And the way that knowledge graphs have been built have been based on these human texts. And so don't necessarily apply or don't translate precisely to the SAS code, which has been written. So that's why I refer to this as a top down effort to develop a knowledge graph ontology, and then see how that can be applied to the code. So here's an example of a knowledge graph, and I won't dwell on the details here, but suffice to say that each of the circles there represents some kind of some data point, discrete data point in a clinical trial data. So here we're working with vital signs and the lines that join the circles represent relationships between the different data points. So we bridge across here the different data models of ADAM and SDTM in the different colors there and so we have all of these relationships linking the data together. And what I want to highlight here is that on the right side there, we have the concept of vital signs. And that maps all the way over to the measurements which are taken from the patients. And so a number of transformations take place, which are all interconnected. And if you make one change in one part of the graph, then it's because of the built-in relationships, this is directly represented on the other side of the graph. So one change propagates thanks to the linking of the data to all other connected data points. And this is only a very small example. And as a complete clinical study would be about 500 times this size. So you can imagine that even one single change to one of these data points can affect the rest of the study. And without having any linking in place, what will typically happen when Surya is running her programs is that the program will, there'll be an error and she'll have to untangle the entire execution run of her SAS code to try to find out what change, where has caused an error that she's observing. And with knowledge graphs, we have this information almost, well, essentially instantaneously. So this is the reason why we want to, why I propose that knowledge graphs will be our bridge into the future of clinical stat programming. And one example that's available today, which I think is the best to date, is built by the OpenStudy Builder team. It's an open source initiative from developers or some key contributors here from Nova Nordisk and besides also one or two from Bayer, the company where I'm working at. One thing to note here with this slide is that there's no free lunch to developing knowledge graphs top-down. There's an enormous amount of work required to build the ontology, which then allows a proper understanding and configuration of the semantic model for the clinical data. Okay. So we're talking about teams of specialized programmers who are familiar with graph theory, who would be required to continue developing and contributing this across the industry. So there will be much work required to bring this forward. So if building an ontology and semantic data model is highly specialized and deeply labor intensive, and we would like to do this. We would like to accelerate this so that we can bring the benefits of knowledge graphs to start programming more quickly. There might be another way that we can do this. And the proposition here is to build a semantic data model bottom up. And by bottom up, I mean from the code. And it won't only be faster but there's a lot more that will be gained from it as well. So I have this sketch of how I understand the process of clinical stat programming to be related across the various levels of information. So at the top here, we have what's called the business process. So these are the transformations of a variable from the incoming raw data from the site all the way through to the clinical study report and finally onto what the regulatory authorities will receive. Here I have variable X, so this could be any variable in a clinical trial, and I represent the number of transformations that it will go through towards its goal. So that is the data level. In the middle here, I've placed the SAS code. So this is the SAS code that is written, which effectively is the business execution, transforming the data in order to be in the correct format for regulatory authorities to be able to review effectively. Now, in order to generate this SAS code, Surya and her team are taking a mixture of, I guess, pseudocode and English text. So talking about SDTM metadata standards and Atom metadata standards, we have also define XML, which is part of the development process for both SDTM and Adam, and there'll be other things as well. These, these, this information is derived from purely English text, so documentation, statistical analysis plan definitions, and other study specifics, which which have been created by the study statistician and other stakeholders in the study. So these documents are used by Surya and her team to create the SAS code. When I talk about bottom up creation of knowledge graphs, I'm referring to using the SAS code to generate the knowledge graph. At the moment, systems like Open Study Builder use the top-level standards, the SAP definitions and so forth, to create an ontology and then from the ontology, build a knowledge graph. But if we have a system which can read SAS code, we can generate the knowledge graphs directly based on what logic is held in the SAS code because the logic in the SAS code perfectly represents the intentions of the documentation here. So let me explain a little bit further how this might work and also go further into the benefits that I think this approach offers. So I've just described both of those points. So I've just described both of those points. And yeah, okay. So then the bottom-up knowledge graph will underpin the semantic data model for existing data and code. So I think one really important aspect here is that by building knowledge graphs based on the SAS code, there's no loss of information that can occur because the business code, sorry, the SAS code is the business execution that has taken place or will take place in a study in delivering the tables, listings, and figures. So yeah, there's no room for error there and no loss or possibility of loss of information when translating between text and code. The other thing is that once you have such a system in place that can read the SAS code, you can apply this to the current work that you're doing, but you can just as easily apply it to all previous SAS code that's been written. And so that's why I refer to the bottom-up approaches, being able to change the wheels while the car is driving because there's no need to spend time building ontologies. You can take a machine that can read SAS code to generate the knowledge graphs, and there's no extra work required. There's no extra work required. And I think another really important aspect of taking this approach, as LLMs have now come into the picture in the last year and a half, a lot of developers have considered how an LLM might perform to read SAS code and see what kind of insights can be gained from that and also other applications. code in a deterministic manner is critical in being able to extract all aspects of the code logic and not have any room for error. So just a note on what it means to have it deterministically read. Just a moment. deterministically read. Just a moment. Yeah, deterministically passed code. So determinism versus probabilism. These differ in the in by way of a deterministic system has, if you provide the same input and the same transformation, then you will always get the same output. And there's no randomness involved in that outcome. A probabilistic system takes, if you keep the transformation the same and you keep the input the same, you may not necessarily get the same output because this incorporates some amount of randomness. And this is what makes LLMs valuable is being able to distill outcomes from somewhat noisy or data that incorporates some or systems which incorporate randomness. They can still produce insights. But these insights come with a probability of likelihood, which is less than 100%. And, okay, sometimes 90% or 95% probability of an outcome is enough for what we're doing, or better than what we've been doing. But when it comes to reading SAS code that describes the creation of tables, listings, and figures, we need 100% surety. And that only comes with a deterministic system. So machine-read, machine-passed code is also fully automatable this is something that essentially doesn't require any human input once once you've built a system it only has to be checked for consistency and and so there is of course some there, but we're talking about much, much, much less compared to having to build knowledge graphs from the top down. So again, I want to highlight the fact that this provides an opportunity for us to, we can keep doing what we're doing, but we can do it much, much more effectively if we allow a machine to create knowledge grass from our existing SAS code. Because it's deterministic, it's 100% explainable and there are no hallucinations. universal logic from the code, then we also have a clear understanding of the logic and this can be used for other processes. So this single source of truth, so to speak, can feed documentation creation. So definitions and descriptions for humans to read. So here you start to see that rather than spending time creating standards and documentation, which then we hand down to the SAS programmers, we can now start to have the existing SAS programs and new SAS programs generate draft documentation from this source of business execution, the SAS code. With universal code logic, we are halfway to translating to any other language, well, any other open source language, certainly. So R on Python becomes quite easily achievable to have the programs that are created be available in R on Python. And this could go a long way to help us step or break the stranglehold, so to speak, that SAS code has around our industry. I wouldn't propose this as a motivation, but it's still something that can be appealing to some teams and companies. I think I made it clear when going through the knowledge graph that we gain end-to-end traceability once we have a complete dependency map. So in this regards, how one data point or transformation or line of code affects any other data point or transformation in a study, we know this immediately without any searching or hidden lines of code in macros or other hard-to-reach places. or other hard-to-reach places. So we were thinking, perhaps, many of us earlier last year, that LLMs might make stat programmers redundant or less valuable, that they might replace humans in some regard. Is this the future that I see? It's not. I don't see this future. And for the reasons that are described in the previous couple of slides, because they're probabilistic and that determinism is really at the heart of the drug development process, and especially if we would like to understand outcomes in patients, how that might relate back to the way the drug works, something that the industry has been trying and continues to try harder to do, to not just take empirical results, but to actually understand why their drugs are working the way that they are and then to use that information to better inform the drug development process. Probabilistic systems, essentially one way. So LLMs won't allow us to take the information. They won't produce any improvement in feeding back information from patients back into the development process. So that's one reason where I see their exclusion. And I think the way that they can help us do repetitive tasks in many aspects of drug development, however, will be valuable. So if they're going to accelerate human efforts, of drug development, however, will be valuable. So if they're gonna accelerate human efforts, not so far seeing anything revolutionary here, but if from my experience, here's already where we will see some value from this technology. Most obvious one being human text generation. And the most valuable one for us in this discussion is the CSR. So being able to instantaneously produce a draft text that can then be reviewed quickly by, well, not reviewed quickly, but generated quickly and available for review. So in generating drafts quickly, we can already save potentially weeks of time. So that really is a value add from LLMs. Same applies to documentation of code. To be able to produce drafts of this quickly and on the fly is also saving Surya time and her team from having to document or create documents of the code that they've written by hand. And the same also goes when they're writing their SAS code to have a virtual assistant provide some recommendations on the fly. That's also something that is accelerating and improving the code of quality that's being written. And being able to navigate more effectively the tabular and relational database stores that we have. Typically, and relational database stores that we have. Typically, medically trained people have Excel open and might be clumsily or working to avoid being clumsy, spending the time and effort to carefully filter those tables or search for those tables is really a time-consuming process. And being able to simply speak or type exactly what you're looking for through a large language model interface will accelerate those processes. Lastly is the creation of synthetic data. So where our patient data is tightly access controlled to ensure compliance with legal requirements of a country or area, it means that we can't really work easily with clinical data. And that limits what data scientists can do easily with clinical data. So typically, it's very difficult to work with more than one study at a time. So it's hard to gain broader insights from clinical data. It's not to say it's impossible, but there's really a lot of additional effort required to do that. And the result is that not much data science takes place on clinical data today that, for example, takes place on other non-human data like bus schedules or price data from other industries. So if we can create synthetic versions of those data which which are not in any way identifiable with with the original data set so truly completely new new data sets not not merely de-identified data sets then then this will really help data scientists it of course what's critical here is that the data retains the interdependencies between variables and features and intercorrelations so that analytical outcomes from machine learning models that we would find in the synthetic data mirror or would be the same as what we would find in the original data set. This is a delicate balancing act. And here, LLMs have shown potential and promise to take this part of or this aspect of our domain further. So, I've seen some preliminary results there that are very promising for this technology. To summarize what I've presented for you today, the challenges which Surya's team have faced to communicate clinical trial outcomes are really now limiting our efforts to bring medicines to patients. And so we need to do better. We need to change the way we're doing stat programming today in order to meet the business opportunities and the health needs of patients. So I think we now have the motivation, the tools have been there and are becoming available as we speak and I think it's more important to understand that the software development industry and companies like Verizion are available and open to partnering with us to help us get where we need to go. So that, yeah, I'm a deep believer in the use of knowledge graphs to help us gain these advances. So we need something that's practical and achievable but at the same time we need that effort to to deliver results so in this regard an automated knowledge graph generation from code this will allow us to upgrade our processes on the fly without any tools required. And lastly, LLMs are finding many use cases outside those requiring explainability. So, but I don't so far see them offering the kind of innovations that we need to really make big steps forward. Lastly, just to thank Thomas, CEO of Farizian, for all of the exchange that we've had around this topic and the work that his team are doing together with Bayer to develop some knowledge graph technology for clinical step programming. And of course, to Emil Hittfeldt for using the slide theme that he's developed. And onto Q&A. And in case you're interested to contact me after this, I'm quite happy to take questions. Thank you. | How Generative AI and Knowledge Graphs Will Transform Clinical Trial Programming | 2,667 | Verisian | 20240530 | The advent of modern clinical trials to evaluate drug safety and effectiveness ushered in the age of clinical data programming. To date the pharmaceutical industry relies mainly on proprietary statistical programming languages to curate, transform and apply statistical tests to create the documents regulatory authorities require to assess a drugs safety and effectiveness for patients. We will yet depend on this process for years to come. However, the number of drugs and the clinical data they generate is increasing and the time to bring them to market to meet patient needs is decreasing.
New approaches are thus required to communicate clinical data insights to regulatory authorities with increased speed and confidence. This presentation describes the main challenges we see today and considers where the opportunities exist to overcome these. We focus on the application of machine learning technologies including semantic data modelling or 'knowledge graphs'. Additionally, with the recent availability of LLM technology, popularly known as chatGPT, we also consider if and how this will support the domains evolution to meet the demands of current and future clinical trial data processing.
Speaker: Karl Brand, Bayer
Karl is originally from Ontario, Canada. He graduated as a geneticist from Melbourne University in Australia and obtained his PhD in circadian genomics from the Erasmus Medical Center (EMC) in Rotterdam, The Netherlands. A subsequent post-doctoral fellowship within the Dept. of Bioinformatics, EMC supported by the Dutch Center for Translational Molecular Medicine identified predictive biomarkers in patients with heart disease through the commercial partnering of both clinical and basic research. At Bayer since 2016 and now in his role as a study biomarker lead, he uses his experience with open source and emerging digital technologies and frameworks to drive innovation around clinical trial data towards faster, deeper, more trustful insights on behalf of patients mana
Resources
- Learn more about us: https://www.verisian.com/
- Join the Verisian Community: https://www.verisian.com/community
- Follow us on LinkedIn: https://www.linkedin.com/company/verisian/ | 2024-08-24T09:24:28.748579 |
https://www.youtube.com/watch?v=ZbOs_Njx2ko | Hi everyone. My name is John. I'm a developer advocate here at Arise. I'm going to walk you guys through the new integration that we've just rolled out with Haystack that you can use to trace and evaluate your different Haystack applications and pipelines. This integration works with both Arise Phoenix as well as Arise. It's used the same underlying architecture. I'll show you in Phoenix, but we just know we also have a page here for Arise that we'll walk through how to use that platform as well. Very similar on both sides. And what we'll do is we've got a RAG pipeline that we've set up with Haystack that we'll sort of trace, and then we'll also show evaluating that as well too. So I'm adding in some packages here, and we'll make this notebook available in the description down below. And then I'm going to use a cloud instance of Phoenix today as opposed to running a local instance. That means that I have an API key, so I'm going to connect to that. I'll show you guys what that looks like. But if you haven't, if you want to try a cloud instance and you haven't already, you can just go to phoenix.arise.com and sign up for one there. It's totally free. And then once you're in your instance here, you can go to keys and that's where I've gotten my API key here from. So jumping back over, I've set those already along with an OpenAI API key. And then now I need to actually connect my application through to my Phoenix instance so I can send all of my trace data to over to Phoenix. So there's a bunch of boilerplate code that's going on here. We actually have a helper method to kind of collect all of this and just make it a one line kind of thing. Everything that you see, excuse me, everything you see here, but I'll blow it up here so you can see what's actually happening. Essentially, we're just setting some environment variables behind the scenes that we want to point to, including setting our Phoenix API key. And then we're using open telemetry here, boilerplate code to actually go through and set up Phoenix as a endpoint that we want to send all of our choice data to. So if you've used Phoenix before, you already have it running. You probably already have all of this in your application or some variant of it. And so this is just to connect our application up to Phoenix and then to actually do all of the automatic instrumentation that we want to do with Haystack, aka all of the new stuff in the integration here. We just have to use this one line. And what this will do is it will go through the whole application. Essentially, whenever a call is made to a Haystack pipeline, then we will listen in, grab all the right information that we need to create all the telemetry data and send that over to Phoenix so you can make it visible there. One caveat here is that we hook this up to the pipeline sort of object within Haystack. So if you're not wrapping your code in a pipeline in Haystack, then you might not see all of the information you expect to, in which case you can do some additional couple lines to add in some instrumentation there. But just if you're using pipelines, you should see all that information come through automatically. So once I have that done, that will connect my application through my Phoenix instance and get us ready to trace any kind of Haystack pipelines that we're going to run throughout our application. So that's really all you have to do to connect it up there. I'll keep going and showing you guys what that looks like in practice here. So next up, I have this Haystack app, kind of just a quick start Haystack app here where we have a document store that we're creating in memory here, just with a few kind of documents of these lines that we're going to use for our rag pipeline. We have a prompt template which is going to take the documents that are retrieved to answer a given question, combine them with the question, and then prepare that to be sent over to our model. And you can see we're creating our different objects here within Haystack. We're using 3.5 Turbo in this case as our model. And then we create a pipeline within Haystack here. So this is where we're adding our different components, connecting them all together, kind of getting everything ready. And then once we have that defined, we can go ahead and make queries over to our pipeline. So we can say, who lives in Paris? And that will pull relevant documents, and it will try to answer our question from there. And so if I jump over into Phoenix, you can see in our default project, we've now got a trace, and that's our first run-through of our application. So I can go click on that, and you see we've got the Haystack pipeline, and then we've got the in-memory document retriever pulling, in this case, just all of our documents through here. And you can see we automatically will score how useful that document is in some cases here, too. So you can actually see that happening automatically there. And then our prompt builder gets built based on those documents. And then finally, the prompt is generated. If I show you here, the prompt is generated by the prompt builder that gets passed into our OpenAI call, and you can see we get our answer here. So it gives you visibility into everything that's going on just with that one instrument call. And that's cool, and that's useful. But really what gets powerful here is once you have all this information, you can start running evaluations and start running some tests within that information. So given that we're running a RAG pipeline here, we'll just do one to start here that's evaluating the relevancy of the documents that are pulled back. And so here I'll go ahead and I will import Phoenix. Okay, and just now that's finished. So we'll go through here and we'll connect to our Phoenix instance. And one thing that's really nice is we have these kind of built-in methods to help make some popular kind of evaluation a little easier. So in this case, this get retrieved documents method here, will basically pull all of the documents that have been retrieved by all the different spans within our application. So we've just got our one run-through, so you can see we've pulled each of the three documents that were retrieved, and those get paired up with the input, which makes it really easy to evaluate. because now we've got this data frame with the input and the reference document. And we can actually go through and say, OK, let's let's just use those two to essentially decide whether or not the document is relevant. So that's what our next method will do. And it's going to set up again. We have a helper class here. That's a relevance evaluator. You don't have to use this one. It just makes it a little bit easier to code. And what we'll do is we'll run evaluations using that relevance evaluator and that retrieve documents data frame that you see up here. And really all it's going to do is take each row of that data frame, so that retrieve documents data frame here, and pass it over to our relevance evaluator that has a built-in method that it will use. It'll look for the input and reference column, and then it will plug both of those into a prompt, send it over to the model that we specify here, so in this case, GPT-40-mini, and then get an evaluation response from the model. We've also actually asked it to provide explanations, so it'll come back with an explanation as to why it gave that particular rating. So this is a built-in one. Again, you could run any kind of evaluations that you wanted to, either using these built-in methods or just kind of running it yourself with a data frame. And then this gives us back, this retrieves documents' relevance. And so you can see it's labeling the documents, it's scoring them, and then just has the explanation. And so you can see it's labeling the documents, it's scoring them, and then just has the explanation. And so we can take this and we can send it back up into our Phoenix instance. And again, if you wanted to run a different evaluation, you can do the same kind of flow here, where you're basically exporting span information from Phoenix, running an evaluation, ending up with a data frame, and we use this actual context span ID value here to kind of connect it back. So if you wanted to run different evaluations, as long as you get back to a data frame. And we use this actual context span ID value here to kind of connect it back. So if you wanted to run different evaluations, as long as you get back to a data frame that has that span ID value, you can send them right back up into Phoenix here. And I'll show you what that looks like in the UI in just a second here. So again, built-in ones, we've got some primitives that help with it, but if you want to run your own kind of evaluation, it's easier from there as well too. So let me refresh this. And so now the first thing you'll see is we've got these markers up here. This gives us kind of our rolled up total score values on some of those evaluations. So we're calculating a few of these for our relevance of our documents here. Again, we just have the one run, so we don't have too much data beyond that first one. But then if I actually click into the run and I go to our retrievers. Now you'll see that we're actually scoring the relevance and the hit rate basically for each of these documents. So as I go down, you've got this one's relevant. And in this case, we're just using a one, zero kind of binary classification of whether it's relevant or not. And then these other two have been scored as unrelated. So again, kind of a contrived example here with just the one kind of toy run through, but as you get a bigger application with many more documents, this makes it really easy to say, okay, what's my actual hit rate on documents I'm pulling back? Maybe I'm retrieving way more information than I need to. Maybe I'm retrieving a lot of irrelevant information and I can maybe do some work to streamline my retrieval pipeline and improve the performance of my application. Again, if you just want to trace things with Haystack, so just trace your Haystack pipelines. You don't need to do any of this evaluation stuff. This is just a kind of extra bonus that you can use to improve your pipelines and get your applications really ready for prime time and ready for production. So Phoenix is totally open source. And if you're enjoying it, please give us a star. It really helps us put more resources behind it, keep these updates coming and keep these integrations coming. If you have any questions about the integration, feel free to jump into our Slack. We've got a really active community there and we love hearing questions and feedback from everybody. Until then, have a great day and thanks for all the attention. | Trace and Evaluate Haystack Pipelines with Phoenix | 542 | Arize AI | 20240819 | Figuring out how to improve your LLM applications can be like finding a needle in a Haystack. Luckily, tracing and evaluations from Phoenix make this much easier.
Check out this latest walkthrough showing how you can trace and evaluate a Haystack RAG pipeline with Phoenix,
Notebook: https://colab.research.google.com/drive/10-Jeu5kb72pzf9fYoHXIFsX3aRMFJisc?usp=sharing
Docs: https://docs.arize.com/phoenix/tracing/integrations-tracing/haystack | 2024-08-26T11:55:47.286369 |
https://www.youtube.com/watch?v=knDDGYHnnSI | Tanya Cushman Reviewer Reviewer I basically dedicated my professional life towards getting developers to be able to build better applications and build applications better by leveraging not just individual data points kind of retrieved at once, like one at a time, or summed up or grouped, calculated averages, but individual data points connected by relationships. And today I'm going to talk about that applied in the world of LLMs and Gen AI. So before I do that, though, I'm going to take a little bit of a detour. I'm going to talk about search, the evolution of search. Everyone here in this room knows that the vast majority of web searches today are handled with Google. But some of you know that it didn't start that way. It started this way. Who here recognizes this web page? Right, yeah. Who here recognizes AltaVista as a name? Like a few people, right? Back in the mid-'90s, there was dozens of web search companies, dozens plural, like 30, 40, 50 web search companies, and they all used basically the same technology. They used keyword-based text search, inverted index type search, BM25-like, for those of you who know what that means and it worked really really well until it didn't and The Alta Vista effect kicked in which was the notion that you search for something you got a half Thousand or thousands of hits back and you have to look through page after page Until you found the result that was relevant to you the Alta Vista effect you got too much back from the internet That wasn't a problem in the beginning, because most of the things you searched for when I went onto the internet in the beginning got zero results back, because there was no content about that on the internet. But the Alta Vista effect, too many search results, was solved by Google. This is Google's press release mid-2000. They talk about a billion URLs they've indexed, right? But they also talk about the technology that they use behind the scenes, the technology called PageRank, that delivers the most important search results really early on. In fact, the top ten blue links on that first page, right? That technology, PageRank, is actually a graph algorithm which is actually it's called eigenvector centrality and the innovation that Google did was applying that to the scale of the Internet and the scale of the web, right? PageRank. That ushered in and created honestly the most valuable company on the planet for quite quite some while, the PageRank era. That lasted for about a decade, about a dozen years, until in 2012, Google wrote this blog post, which is an amazing blog post, introducing the knowledge graph, things, not strings, where they said, you know what, guys? We've done an upgrade on the back end of our search technology, the biggest one since we invented PageRank, where we're moving away from not just storing the text and the links between the documents, but also the concepts embedded in those documents, things, not just strings. And we all know what the Knowledge Graph looks like visually. When you search for something on Google today, Moscone Center, just around the corner from here, you're going to get this little panel right on the right-hand side. If you look at that panel, it has a combination of unstructured text, in this case from Wikipedia, with structured text. It has the address, the owner of the Moscone building, you know, that kind of stuff. This thing is backed on the back end by the data structure looking like this. It has this concept, the rings that we call nodes, that are connected to other nodes through relationships. And both the nodes and the relationships have key value properties. You can attach 2, 3, 1,000, 10,000 on both the nodes, and very importantly, also on the relationships. This is a knowledge graph, and that was the next decade or so, 12 years, of Google's dominance. Until a few months ago. A few months ago, at Google I.O., they took the next step, ushered in by the AI engineers conference a year ago. Well, not quite, but of course, the entire craze around Gen AI. And this is one of the examples that they did, the classic travel itinerary. They helped me plan out this travel. Everyone here in this room knows that this is backed by an LLM. And it is backed by an LLM in combination with this knowledge graph data structure. Graph rag. This is ushering in the next era of web search, the graph rag era. What I'm going to talk to you about today is how can you use, well, first of all, should you, and if so, how can you use graphag for your own rag-based applications? So what is GraphRag? It is very, very simple. GraphRag is rag where on the retrieval path, you use a knowledge graph. Very, very simple. It doesn't say you only use a knowledge graph, but you use a knowledge graph. Maybe in combination with other technologies like vector search. So let's take the classic example of a customer service bot, right? And let's say that you are working at a company that is building Wi-Fi routers, for example, right? And you have a bunch of support articles, right? And they've been stored in text files, right? And then you are tasked with building a bot that either gives direct end users access to it or your own customer service agents, employees, like access to this information. And you know how to do this because you live in the LLM world and the Gen AI world, so you're going to use RAG for this, right? And so you have that data. It's text documents. You've added that text onto the properties of particular nodes, right? So you have a node per article, but then you've also said that, you know what, this article is about this particular WiFi product, right, so you have a relation to that WiFi product, and that WiFi product sits in a hierarchy of other WiFi products, and it's written by this particular customer service engineer. You know, that kind of stuff. And then the end user has a question. Hey, my Wi-Fi lights are flashing yellow and my connection drops. Like, what should I do? Something like that. I think we all know how we do this. We vectorize the search, right? We get some kind of vector embedding back. We use vector search to get the core documents. But here's where the graph rag part kicks in. You get those core articles back, which are linked to the nodes. Actually, the text is on the nodes. But then you use the graph to traverse from there and retrieve more context around it. Maybe it's not just that particular article for that particular Wi-Fi, but something else in that family. Maybe you use the fact that this particular engineer has very highly ranked content, and then you rank that higher. You retrieve more context than what you get out of the ANN-based search from your vector store. And you pass that on to the LLM, along with the question, you get an answer back, and you hand it to the user. So the core pattern is actually really, really simple, but really, really powerful. You start with doing a vector search. I think of this almost as a primary key. It's of course not a primary key, but almost like a primary key lookup into the graph. You use that vector search. You get an initial set of nodes. Then you walk the graph, and you expand that and find relevant content based on the structure of the graph. Then you take that and you return it to the LLM, or optionally, maybe that gives you 1,000 or 10,000 nodes back, and then you do what Google did. You rank that, you get the top K based on the structure of the graph. Maybe you even use PageRank. You get that, you pass the top K based on the structure of the graph. Maybe you even use PageRank, right? You get that, you pass it on to the LLM. Really really simple, but really really powerful. And then there's a number more advanced patterns, but that's kind of the next talk I'll do in a year. It's more sophisticated graph retrieval patterns, right? But the core one, very, very simple. Okay, so if that's what GraphRag is, what are the benefits of GraphRag? When should you use it? When should you not use it? The first and most stark benefit is accuracy. It's directly correlated to the quality of the answer. There's been a ton of research articles about this in the last six months, something like that. I believe the first one was this one by data.world. I just picked out three out at random here that I like. This is the first one that I know of by data.world, which is a data cataloging company based on a knowledge graph. And they proved out across I think 43 different questions that on average the response quality, the accuracy, was three times higher if they used a knowledge graph in combination with vector search. I love this paper by LinkedIn. It shows a very similar type. I think it's like 75% or 77% increase in accuracy. But it also has a great architecture view, so you can take the QR code right there and look at that paper, which combines various components and also the flow through that that I thought was just really pedagogical. But by and large, it's showing the same thing, a little bit of different numbers, but significantly higher accuracy when it used graph in combination with vector search. And then Microsoft had a fantastic blog post and subsequently, I think, two academic papers. The blog post was in February of this year, where they also talk about the increased quality of response, but also beyond that, hey, you know what? Graph rag enables us to answer another important class of questions that we couldn't even do with vector search alone or baseline rag alone. So first benefit, higher quality response back. The second one is easier development. And this one is a little bit interesting because there's an asterisk in there. Because what we hear very clearly from our users is that it's easier to build RAG applications with graph RAG compared to baseline RAG. But we also hear it's like, it's actually hard. And what's the new one say? Well, the new one says, if you already have a knowledge graph up and running. So there's a learning curve where people need to learn, how do I create the knowledge graph in the first place? Once you have that, it graph up and running. So there's a learning curve where people need to learn, how do I create the knowledge graph in the first place? Once you have that, it's a lot easier. But how do you create that knowledge graph, right? So let's put a little pin in that. If I rush through the next few slides quickly enough, I'm gonna show you hopefully a demo on that. But let's put a little pin in that. So this is an example, This is from a very high growth stage fintech company that is very cutting edge in AI. And they started playing around with GraphRag a few, about six months ago. And they took an existing application and they said, you know what, we are going to port this from a vector database to Neo4j. And most of the operations yield a better result. They can calculate the embeddings on a database level, getting related actions. This is as simple as following the relationships between nodes. And this one I love. The cache, and the cache here is their application. They call it the cache, can be visualized. This is an extremely valuable debugging tool. And in the parenthesis, I actually already fixed a couple of bugs just thanks to this. Right? Amazing. Like, once you've been able to create that graph, it's a lot easier to build your rag application. And why is that? Right? So let's talk a little bit about representation. Let's say we have the phrase in there, apples and oranges are both fruit, and we want to represent that in vector space and in graph space. In graph space, we already talked about this. Apple is a fruit. Orange is a fruit. Pretty easy. That's the representation in graph space. In vector space, it looks like this. Maybe. Or maybe this is something else, like we actually don't know. Two different ways of representing that phrase. And then we can run similarity calculations in different ways using these both representations that I'm not going to go through right now. We can search in different ways. These are not competing ways of doing it. They're complementary ways of doing it. One is not better than the other. Except I will make one statement, which is, when you sit down and you write your application, when you build your application, I'm not going to make the statement that one of them is superior. This vector space representation is completely opaque to a human being. But the graph representation is very, very clear. It is explicit, it's deterministic, it's visual. You can see it, you can touch it as you build your applications. This is the, I already fixed a couple of bugs thanks to this. Just by porting it from a vector-only store to GraphRag, they were able to see and work with the data. And that is really frickin' powerful. That shows up in development time as you're building your applications. It's also showing up for our friends in IT who worry about things maybe that is not directly related to building the application, which is explainability, which is auditability, which is auditability, which is governance, that explicit data structure has knock-on effects over there that are really, really powerful once you're up and running in production and you're able to explain why something happened. So higher accuracy, better answers, easier to build once you're through the hump of creating the knowledge graph, and then increased explainability and governance for IT and the business. Those are the three things. So how do you get started with GraphRag? Well, I've talked a lot about this already. Like how do you create the knowledge graph in the first place? So a little bit of nuance here. So basically there are three types of data out in the world that I care about when I think about knowledge graph creation. The first one is structured data. So this is your data in your Snowflake or something like that or Postgres, right? The other one is unstructured data, PDF files, raw text from a web page. And the other one, the third one is unstructured data, PDF files, raw text from a web page. And the third one is mixed. People tend to call this semi-structured, but it's not. Hit me up afterwards, and I'll tell you why it's not. But basically what this one is is structured data where some of the fields are long-form text. Basically, we're great in the first bucket in the graph world. It's very easy to go from Snowflake or Postgres or MySQL or Oracle into a property graph model. The unstructured one is really freaking hard, right? It's hard to do in theory. It's also had immature tooling for a long time. The middle one is actually where the majority of at least enterprise production use cases are in the real world. So, man, two and a half minutes, this is rough. There are two types of graphs and I'm not gonna talk about them. I wanna talk about them, lexical graphs and domain graphs. It's actually really relevant, but I really wanna get to this demo. So I've talked about creating graphs with unstructured information. So we just built this new tool that we launched just a few weeks ago called the Knowledge Graph Builder. And you see it here. Can you see the screen OK? So basically here, you can drag and drop your PDF files. You can put in YouTube links, Wikipedia links. You can point it to your kind of cloud service buckets, right? And it's going to extract the data from there and create the graph. So here I added a few things. I added a PDF of Andrew Ning's newsletter, The Batch. I added the Wikipedia page for OpenAI. And I added the YouTube from Swix and Alessio's, you know, the Four Wars Latent Space Podcast. So I added all that and I uploaded it into this Knowledge Graph Builder. And when I do that, it creates if, let's see here, I knew the Ethernet connection was going to do it. It automagically created a little knowledge graph. If it renders. Wait for it. It says one minute here. It better render pretty soon. All right. Let me do this again. Please work. Oh, no. Yeah. Oh, man. My Wi-Fi isn't. Oh, crap. Oh, no. Yeah. Oh, man, my Wi-Fi isn't. Oh. Oh, crap. Oh, no. And it's ticking down. All right. Wait for it. Wait for it. All right. You can do it. Didn't do it. And I was, like, trying to keep it alive in the thing too. All right. Okay. Let's see. I think we are here. And then it says show me a graph. And it's not going to show me the graph. Oh, yeah, it will. Come on. You can do it. All right. Yes! So, what we have here, check this shit out. I would love to sit here and just drink in your applause, but we need to look at this data. So check this out. This is the document, the four wars document. Here are the various chunks. And then you can take a chunk and you can expand that. This I put in the embedding, and you can... I'll zoom out here, and you can see that it takes the logical concept elements out of that chunk, right? Machine learning, they talk about something that is developed in a similar fashion, I don't even know, there's some company there, right? And you get that entire graph of all this information. On top of that, I really don't have time to show it, but there's also, I really don't have time to show it, there's a chatbot in here that you can use and you can introspect the result that gets back. One more second. Take up your phones. If you think this looks cool, take a photo of this QR code, and you're going to have an amazing landing page where you have access to all of this information. You can get up and running yourself. Thank you for the additional minutes. Thank you, Emil. Thanks, everyone, for paying attention. you | GraphRAG: The Marriage of Knowledge Graphs and RAG: Emil Eifrem | 1,154 | AI Engineer | 20240828 | A famous poet once said "Natural language is most powerful when it can draw from a rich context." Ok fine, I said that. But that's true of both poetry, and of LLMs! Well, Knowledge Graphs excel at capturing context. How can combining Knowledge Graphs with RAG – an emerging technique known as GraphRAG – give context to your RAG application, and lead to more accurate and complete results, accelerated development, and explainable AI decisions? This talk will go deep on the why and how of GraphRAG, and where best to apply it. You’ll get concepts, examples, and specifics on how you can get started. You’ll walk away with an understanding of how GraphRAG can improve the context you pass to the LLM and the performance of your AI applications.
Recorded live in San Francisco at the AI Engineer World's Fair. See the full schedule of talks at https://www.ai.engineer/worldsfair/2024/schedule & join us at the AI Engineer World's Fair in 2025! Get your tickets today at https://ai.engineer/2025
About Emil
Emil Eifrem is Neo4j’s Co-Founder and CEO. He sketched what today is known as the property graph model on a flight to Mumbai way back when dinosaurs ruled the earth and has devoted his professional life to building, innovating, and evangelizing graph databases and graph analytics. He is also co-author of the O'Reilly book Graph Databases. Neo4j today helps more than 75 of the Fortune 100, and a community of over hundreds of thousands of practitioners find hidden relationships and patterns across billions of connections deeply, easily, and quickly. Emil plans to change the world with graphs and own Larry's yacht by the end of the decade. | 2024-08-28T20:09:01.379029 |
https://www.youtube.com/watch?v=oFtjKbXKqbg | So I was trying to figure out how to do photorealistic AI photos, and Stable Diffusion by itself is not doing that well. The faces look all mangled, and it doesn't have enough resolution or something to do that well. But I started seeing these base models, these fine-tuned models, and people would train them on porn, and I would try them, and they would be very photorealistic. They would have bodies that actually made sense, like body anatomy. But if you look at the photorealistic models that people made sense, like body anatomy. But if you look at the photo realistic models that people use now still, there's still core of porn there, like of naked people. So I need to prompt out the naked, and everyone needs to do this with AI startups, with imaging, you need to prompt out the naked stuff. You have to keep reminding the model, you need to put clothes on the thing. Yeah, don't put naked, because it's very risky. I have Google Vision that checks every photo before it's shown to the user to like check for NSFW. Like a nipple detector? Oh, NSFW detector. Because you get, the journalists get very angry. The following is a conversation with Peter Levels, also known on X as Levels.io. He is a self-taught developer and entrepreneur who designed, programmed, shipped, and ran over 40 startups, many of which are hugely successful. In most cases, he did it all by himself, while living the digital nomad life in over 40 countries and over 150 cities, programming on a laptop while chilling on a couch, using vanilla HTML, jQuery, PHP, and SQLite. He builds and ships quickly and improves on the fly. All in the open, documenting his work, both his successes and failures, with the raw honesty of a true indie hacker. Peter is an inspiration to a huge number of developers and entrepreneurs who love creating cool things in the world that are hopefully useful for people. This was an honor and a pleasure for me. This is the Lex Friedman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here's Peter Levels. You've launched a lot of companies and built a lot of products. As you say, most failed, but some succeeded. What's your philosophy behind building the startups that you did? I think my philosophy is very different than most people in startups. Because most people in startups, they build a company and they raise money, right? And they hire people and then they build a product and they find something that makes money. And I don't really raise money. I don't use VC funding. I do everything myself. I'm a designer. I'm the developer. I make everything. I make the logo. So for me, I'm much more scrappy. And because I don't have funding, like I need to go fast. I need to make things fast to see if an idea works, right? I have an idea in my mind and I build it. Build like a micro, mini startup. And I build it, build it like a micro mini startup. And I launch it very quickly, like within two weeks or something of building it. And I check if there's demand and if people actually sign up and not just sign up, but if people actually pay money, right? Like they need to take out their credit cards, pay me money. And then I can see if the idea is validated. And most ideas don't work, like as you say, most fail. So there's this rapid iterative phase where you just build a prototype that works, launch it, see if people like it, improving it really, really quickly to see if people like it a little bit more enough to pay and all that. That whole rapid process is how you think of... Yeah, I think it's very rapid. And it's like if I compare it to, for example, Google, like our big tech companies, especially Google right now is kind of struggling. They made Transformers. They invented all the AI stuff years ago and they never really shipped. They could have shipped ChatGPT, for example, I think I heard in 2019. And they never shipped it because they were so stuck in bureaucracy. But they had everything. They had the data, they had the tech, they had the engineers, and they didn't do it. And it's because these big organizations, it can make you very slow. So being alone by myself on my laptop, like, you know, in my underwear in a hotel room or something, I can ship very fast, and I don't need to, like, I don't need to ask that legal for, like, oh, can you vouch for this? You know, I can just go and ship. Do you always code in your underwear? Your profile picture, you're, like, slouching you always code in your underwear your profile picture you're like slouching yeah couching your underwear chilling on a laptop no no but it's i would do wear like shorts a lot and i usually just wear shorts i know t-shirts because i'm always too hot like i'm always overheating thank you for showing up not just in your underwear but yeah wearing shorts and no you know i'm still wearing this for you but thank you um thank you for dressing up i think it it's because I, since I go to the gym, I'm always too hot. What's your favorite exercise in the gym? Man, over press. Over press, like shoulder press. Yeah. Okay. But it feels good because you're doing like, you do, you win. Cause when you, what is it? I do 60 kilos. So it's like 120 pounds or something. Like it's, it's my only thing I can do well, you know, in the gym. And you stand like this and you're like, I did it, you know, like a winner pose. Yeah. Victory pose. I do bench press squats deadlifts hence the uh the mug yeah talking to my therapist yeah it's a deadlift yeah because it acts like therapy for me you know yeah it is it's controversial to say like if i say this on twitter people get angry physical hardship is a kind of therapy yeah i just re-watched uh happy people year in the taiga that warner herzog film where they document people that are doing trapping they're essentially just working for survival in the wilderness year round yeah and there's a deep happiness to their way of life because they're so busy in it in nature yeah like there's something about that physical physical yeah toil yeah my dad taught me that my dad always does like a construction in the house like he's always renovating the house he breaks through one room and then he goes to the next room and he's just going in a circle around the house for like the last 40 years so but so he's always doing construction the house and it's his hobby and he like he taught me when i'm depressed or something um he says like get a big like what he called like a big uh mountain of sand or something from construction just get a shovel and uh bring it to the other side and just you know do like physical labor do like hard work and do something like set a goal do something and i i kind of did that with startups too yeah construction is not about the destination, man. It's about the journey. Yeah. Yeah, sometimes I wonder people who are always remodeling their house, is it really about the remodeling? No, no, it's not. Is it about the project? The puzzle of it. No, he doesn't care about the results. Well, he shows me, he's like, it's amazing. I'm like, yeah, it's amazing. But then he wants to go to the next room, you know? But I think it's very metaphorical for work because I also, I never stop work. I go to the next website or I make? But I think it's very metaphorical for work. Because I also, I never stop work. I go to the next website or I make a new one, right? Or I make a new startup. So I'm always like, it gives you something to wake up in the morning and like, you know, have coffee and kiss your girlfriend. And then you have like a goal. Today I'm going to fix this feature. Today I'm going to fix this bug or something. I'm going to do something. You have something to wake up to, you know? And I think maybe especially as a man, also women, but you need a hard work, you know? You need like an endeavor, I think. How much of the building that you do is about money? How much is it about just a deep internal happiness? It's really about fun. Because I was doing it when I didn't make money, right? That's the point. So I was always coding. I was always making music. I made electronic music, I was making music. I made electronic music, drum and bass music like 20 years ago. And I was always making stuff. So I think creative expression is like a meaningful work. It's so important. It's so fun. It's so fun to have like a daily challenge where you try to figure stuff out. But the interesting thing is you've built a lot of successful products and you never really wanted to take it to that level where you scale real big and sell it to a company or something like this. Yeah. The problem is I don't dictate that, right? Like if more people start using it, millions of people suddenly start using it and it becomes big, I'm not going to say, oh, stop signing up to my website and pay me money. But I never raised funding for it. And I think because I don't like the stressful life that comes with it like i have a lot of founder friends and they tell me secretly like with hundreds of millions of dollars in funding and stuff and they they tell me like next time if i'm gonna do it i'm gonna do it like you because it's more it's more fun it's more indie it's more chill it's more creative they don't like this. They don't like to be manager, right? You become like a CEO, you become a manager. And I think a lot of people that start startups, when they become a CEO, they don't like that job actually, but they can't really exit it, you know? But they like to do the groundwork, the coding. So I think that keeps you happy, like doing something creative. Yeah, it was interesting how people are pulled towards that, the scale to go really big. And you don't have that honest reflection with yourself, like what actually makes you happy. Because for a lot of great engineers, what makes them happy is the building, the quote-unquote individual contributor, like where you're actually still coding or you're actually still building. And they let go of that and then they become unhappy. But some of that is the sacrifice needed to have an impact at scale, if you truly believe in a thing you're doing. But look at Elon. He's doing things a million times bigger than me, right? Would I want to do that? I don't know. You can't really choose these things, right? But I really respect that. I think Elon's very different from VC founders, right? I don't know. You can't really choose these things, right? But I really respect that. I think Elon's very different from VC founders, right? VC start is like software. There's a lot of bullshit in this world, I think. There's a lot of dodgy finance stuff happening there, I think. And I never have concrete evidence about it, but your gut tells you something's going on with companies getting sold to friends and VCs, and then they do reciprocity and shady financial dealings. With Elon, that's not. He's just raising money from reciprocity and there's shady financial dealings. With Elon, that's not. He's just raising money from investors and he's actually building stuff. He needs the money to build stuff, you know, hardware stuff. And that I really respect. You said that there's been a few low points in your life. You've been depressed and the building is one of the ways you get out of that. But can you talk to that? Can you take me to that place, that time when you were at a low point? So I was in Holland and I graduated university and I didn't want to like get a normal job. And I was making some money with YouTube because I had this music career and I uploaded my music to YouTube and YouTube started paying me like with AdSense, like $2,000 a month, $2,000 a month. And all my friends got like normal jobs and we stopped hanging out because people would like in university hang out, you know, utilities at each other's houses, you go party. But when people get jobs, they only party like in the weekend and they don't hang anymore in the week because you need to be at the office. And I was like, this is not for me. I want to do something else. And I was starting getting this like, I think it's like Saturn return. You know, when you turn 27, it's like some concept where Saturn returns to the same place in the orbit that it was when you're born. Man, it's like- It's some astrology thing, you know? So many truly special artists died when they were 27. Exactly, someone was 27, man. And it was for me, like I started going crazy because I didn't really see like my future in Holland, buying a house, going living in the suburbs and stuff. So I flew out. I went to Asia, I started digital nomading and did that for a year. And then that made me feel even worse, you know, because I was like alone in hotel rooms, like looking at the ceiling. Like, what am I doing with my life? Like this is like I was working on startups and stuff on YouTube, but it's like, what is the future here? You know, like, is this is this something while my friends in Holland were doing really well and with the future here, you know? Like, is this something, while my friends in Holland were doing really well and we have a normal life, you know? So I was getting very depressed and like, I'm like an outcast, you know? My money was shrinking. I wasn't making money anymore a lot. I was making $500 a month or something. And I was, you know, looking at the ceiling, thinking like, now I'm like 27, I'm a loser. And that's the moment when I started building like startups. And it was because my dad said, like, if you're depressed, you need to, you know, get sand, get a shovel, start shoveling, doing something. You can't just sit still, which is kind of like a interesting way to deal with depression. You know, like, it's not like, oh, let's talk about it. It's more like, let's go do something. And I started doing a project called 12 Startups in 12 Months, where every month I would make something like a project and I would launch it with Stripe so people could pay for it. So the basic format is try to build a thing, put it online, and put Stripe to where you can pay money for it. Yeah, add a Stripe check. I'm not sponsored by Stripe, but add a Stripe checkout button. Is that still the easiest way to just pay for stuff, Stripe? 100%, I think so, yeah. It's a cool company. They just made it so easy. You can just click. Yeah yeah and they're really nice like the ceo patrick is really nice behind the scenes it must be difficult to like actually make that happen because that used to be a huge problem like merchant just just adding a thing a button where you can like pay for a thing dude it's dude i know this because when i was a trustworthy nine years old i was making websites also. And I tried to open a merchant account. There was like before Stripe, you would have like, I think it was called WorldPay. So I had to like fill out all these forms. And then I had to fax them to America from Holland with my dad's fax. And my dad had to, it wasn't my dad's name. And he just signed for this. And he started reading these terms and conditions. It was just like, he's liable for like 100 million in damages. And he's like, I wasn't my dad's name, and he had to sign for this, and he started reading these terms and conditions, which was like, he's liable for like 100 million in damages, and he's like, I don't want to sign this. I'm like, dad, come on, I need a merchant account. I need to make money on the internet, you know? And he signed it, and we sent it, we faxed it to America, and I had a merchant account, but then never, nobody paid for anything, so that was the problem, you know? But it's much easier now. You can sign up, you add some codes, and yeah. So 12 startups in 12 months. Yeah. So what, how do you, startup number one, what was that, what were you feeling? What were you, you sit behind the computer, like how much do you actually know about building stuff at that point? I could code a little bit because I did the YouTube channel and I made a website for, I would make websites for like the YouTube channel, it was called Panda Mix Show. And it was like these electronic music mixes, like dubstep or drum and bass or techno house. I saw one of them had like Flash. Were you using Flash? Yeah. My album, my CD album was using Flash. Yeah. Yeah. I sold my CD. Yeah. Kids, Flash was a software. This is like the break. Like grandpa, you know, but Flash was cool. Yeah. And there was, what's it called? Boy, I should remember this, ActionScript. There's some kind of programming language. Yeah, ActionScript. Oh, yeah. It was in Flash. Back then, that was the JavaScript, you know? The JavaScript, yeah. And I thought that's supposed to be the dynamic thing that takes over the internet. I invested so many hours in learning that. And Steve Jobs killed it. Steve Jobs killed it. Steve Jobs said, Flash sucks, stop using it. And everyone's like, okay. That guy was right, though, right? Yeah. I don't know. Yeah. Well, it was a closed platform, I think. Closed. But this is ironic because Apple, you know, they're not very open. Right. But back then, Steve was like, this is closed, we should not use it. And it's security problems, I think, which sounded like a cop-out. Like, I just wanted to say that to make it look kind of bad. But Flash was cool. Yeah. Yeah, it was cool for a time. Yeah. Listen, animated GIFs were cool for a time too. Yeah. They came back in a different way. Yeah. As a meme though. I mean, like, I remember when GIFs were actually cool. Not ironically cool. Yeah. Like, on the internet, you would have like a dancing rabbit or something like this. And that was really exciting. You had like the, you know, Lex homepage. Yeah. Everyone was centered. Yeah. And that was really exciting. You had like the, you know, Lex homepage. Yeah. Everyone was centered. Yeah. And you had like Peter's homepage and then the, on the construction. Yeah. GIF, which was like a guy with a helmet and the lights. It was amazing. And the banners. Yeah. That's how, before like Google AdSense, you would have like banners for advertising. It was amazing. Yeah. And a lot of links to porn, I think. Or porn-y type things. I think that was where the merchant accounts people would use for. People would make money a lot of links to porn, I think. Yeah. Or porny type things. I think that was where the merchant accounts people would use for. People would make money a lot. The only money made on Nintendo was like porn or a lot of it. Yeah. It was a dark place. It's still a dark place. Yeah. But there's beauty in the darkness. Anyway, so you did some basic HTML. Yeah. Yeah, but I had to learn the actual coding. So this was good. It was a good idea to every month launch a startup so I could learn to code, learn basic stuff, but it was still very scrappy because I didn't have time to, which was on purpose, I didn't have time to spend a lot of I had a month to do something, so I couldn't spend more than a month and I was pretty strict about that. And I published it as a blog post so people, I think I put it on Hacker News and people would check like, kind of like, oh, did you actually, you know? I felt like accountability because I put it public that I actually had to do it. Do you remember the first one you did? I think it was Play My Inbox. Because back then my friends, we would send, we would send like cool, it was before Spotify, I think. We would send like, 2013, we would send music to each other, like YouTube links. Like, this is a cool uh like this is a cool song this is a cool song and it was these giant email threads on gmail and they were like unnavigatable so i made an app that would log into your gmail get the emails and find the ones of youtube links and then make like kind of like a gallery of your your songs like essentially spotify and my friends loved it was it scraping it like what Like what was it? No, it uses like POP, like POP or IMAP, you know, it would actually check your email. So that like privacy concerns, cause it would get all your emails to find YouTube links, but then I wouldn't save anything. Um, but that was fun. It was like, and that, that first product already would get like pressed. Like it went on, I think like, um, some tech media and stuff. And I was like, that's cool. Like it didn't make money. There was no payment button, but it didn't make money there was no payment button but it was it was actually people using it I think tens of thousands of people used it that's a great idea I wonder why like why why don't we have that why don't we have things that access Gmail and extract some useful aggregate information yeah you could tell Gmail like don't give me all the emails just give me the ones with YouTube links, you know, or something like that. Yeah. I mean, there is a whole ecosystem of like apps you can build on top of the Google, but people don't really do this. They build, I've seen a few, like Boomerang, there's a few apps that are like good, but just, I wonder what, maybe it's not easy to make money. I think it's hard to get people to pay for these like extensions and plugins, you know? Because it's not like a real app, so it's hard to get people to pay for these like extensions and plugins you know because it's not like a real app so it's not like people don't value it people vote oh this and a plugin should be free you know when i want to use a plugin in google sheets or something i'm not going to pay for it like it should be free which is but if you go to a website and you actually okay i need this product i'm going to pay for this because it's a real product so even though it's the same code in the back it it's a plugin. Yeah, I mean you can do it through extensions like Chrome extensions from the browser side. Yeah, but who pays for Chrome extensions, right? Like barely anybody. That's not a good place to make money probably. Yeah, that sucks. Chrome extensions should be an extension for your startup. You have a product, oh we also have a Chrome extension. I wish the Chrome extension would be the product. I wish Chrome would support that, like where you could pay for it easily. Cause like imagine, I can imagine a lot of products that would just live as extensions, like improvements for social media. Yeah. It's like GPTs, you know? GPTs, yeah. Like these chat GPTs, they're going to charge money for it now. You get a rev share, I think, for an opening eye. I made a lot of them also. Why? We'll talk about it. So let's rewind back. It's a pretty cool idea to do 12 startups in 12 months. What's it take to build a thing in 30 days? Like at that time, how hard was that? I think the hard part is like figuring out what you shouldn't add, right? What you shouldn't build because you don't have time. So you need to build a landing page. Well, you need to build the product, actually, because it needs to be something they pay for. Do you need to build a login system? Maybe no. Maybe you can build some scrappy login system. Like for Photoi, you sign up, you pay with a Stripe checkout, and you get a login link. And when I started, there was only a login link with a hash, and that's just a static link. So it's very easy to log in. It's not so safe, you know, what if you leak the link and now I have real Google login, but that took like a year. So keeping it very scrappy is very important to, cause you don't have time. You know, you need to focus on, um, what you can build fast. So money, Stripe, uh, build a product, build a landing page. Um, you need to think about how are people going to find this? So are you going to put it on going to find this so are you going to put it on Reddit or something how are you going to put it on Reddit without being looked at as a spammer right like if you say hey this is my new startup you should use it no nobody gets deleted you know maybe if you find a problem that a lot of people on Reddit already have on subreddits you know like and you solve that problem say some people I made this thing that might solve your problem and maybe it's free for now you know like uh that could work you know but you need to be very you know um narrow it down what you're building time is limited yeah actually can we go back to the you laying in a room feeling like a loser yeah i still feel like a loser sometimes. What's, what can you, can you speak to that feeling, to that place of just like feeling like a loser? And I think a lot of people in this world are laying in a room right now listening to this and feeling like a loser. Okay. So I think it's normal if you're young that you feel like a loser, first of all. Especially when you're 27. Yes. Yeah. Especially. There's like a peak. Yeah. Yeah. I think 27 is the peak. And when you're 27. Yes. There's like a peak. Yeah, I think 27 is the peak. And so I would not kill yourselves. It's very important. Just get through it, you know? But because you have nothing, you have probably no money, you have no business, you have no job. Like Jeremy Peterson said this. I saw it somewhere. Like the reason people are depressed is because they have nothing. They don't have a girlfriend, they don't have a boyfriend. You need stuff. You need like a family. You need things around you. You need to build a life for yourself. nothing they don't have a girlfriend they don't have a boyfriend they don't have you need stuff you need like a family you need things around you need to build a life for yourself you don't build a life for yourself you'll be depressed so if you're alone in asia in a hostel looking at the ceiling and you don't have any money coming in you don't have a girlfriend you don't of course you're depressed it's logic but back then if you're in the moment you think there's not logic there's something wrong with me you know yeah um and and also i think i started going i started getting And also, I think I started going, I started getting like anxiety, and I think I started going a little bit crazy, where I think travel can make you insane. And I know this because I know that there's like digital nomads that they kill themselves. And I haven't checked like the comparison with like baseline people, like Suze Ray, but I have a hunch, especially in the beginning when it was a very new thing, like 10 years ago, that it can be very psychologically taxing. And you're alone a lot back then when you travel alone. There was no other digital nomads back then a lot. So you're in a strange culture. You look different than everybody. Like I was in Asia. Everybody's really nice in Thailand, but you're not part of the culture. You're traveling around. You're hopping from city to city. You don't have a home anymore. You feel disrooted. And you're constantly an outcast in that you're different from everybody else. Yes, exactly. But people treat you like Thailand. People are so nice, but you still feel like outcast. And then I think the digital nomads I met then were all kind of like, it was like shady business, you know, but they were like vigilantes because it was a new thing and like one guy was selling illegal drugs was an American guy was selling illegal drugs via UPS to Americans you know on this website they were like a lot of drop shippers doing shady stuff um there's a lot of shady things going on there and they were they didn't look like very balanced people they didn't look like people I wanted to hang with you know so I also felt outcast from other foreigners in Thailand, other digital nomads. And I was like, man, I made a big mistake. And then I went back to Holland and then I got even more depressed. You said digital nomad. What is digital nomad? What is that way of life? What is the philosophy there? And the history of the movement? I struck upon it on accident because I was like, I'm going to graduate university and then I'm going to, I need to get out of here. I'll fly to Asia because I've been before in Asia. I studied in Korea in 2009, like study exchange. i i was like i'm gonna graduate university and i'm gonna i need to get out of here i'll fly to asia because i've been before in asia i studied in korea in 2009 like study exchange so it's like asia is easy thailand's easy i'll just go there figure things out and it's cheap it's very cheap chiang mai i would live like for 150 per month rent for like a private room pretty good so i struggled on this on accident i was like okay there's other people on laptops working on their startup or working remotely. Back then, nobody worked remotely, but they worked on their businesses, right? And they would, you know, live in like Colombia or Thailand or Vietnam or Bali. They would live kind of like in more cheap places. And it looked like a very adventurous life. Like you travel around, you build your business. There's no pressure from like your home society, right? Like you're American. So you get pressure from america's telling you kind of what to do like you need to buy a house or you need to do this stuff i had this in holland too and you can get away from this pressure you can find it kind of feel like you're free you're kind of there's nobody telling you what to do but that's also why you start feeling like you go crazy because you are you are free you're disattached from anything and anybody um you're disattached from your culture you're disattached from anything and anybody. You're disattached from your culture, you're disattached from the culture you're probably in because you're staying very short. I think Franz Kafka said, I'm free, therefore I'm lost. Man, that's so true. Yeah, that's exactly the point. And yeah, freedom is like, it's the definition of no constraints, right? Like anything is possible, you can go anywhere. And everybody's like, oh, that must be super nice. You know, like freedom. You must be very happy. And it's the opposite. Like, I don't think that makes you happy. I think constraints probably make you happy. And that's a big lesson I learned then. But what were they making for money? So you're saying they were doing shady stuff at that time? For me, you know, because I was more like a developer. I wanted to make startups kind of. And it was like drugs being shipped to America make startups kind of and and it was like um it was like drugs being shipped to america like diet pills and stuff like non-fda proof stuff you know and they would like there was no like effort they were like they would save beers they would laugh about like all the dodgy kind of they're doing you know uh that part of it okay kind of vibe you know like kind of sleazy ecom vibe i'm not saying all e-commerce please you know but right but you know this vibe it could be a vibe. And your, your vibe was more build cool shit. That's ethical. You know, the guys with sports cars in Dubai, these people, you know, e-comm like, oh, bro, you got to drop ship. Yeah. You make a hundred million a month. Those people was this shit. And I was like, this is not my people. Yeah. I don't, I mean, there's nothing wrong with any of those individual components, but, but there's a foundation. That's not my people. Yeah. I mean, there's nothing wrong with any of those individual components. No, no judgment. But there's a foundation that's not quite ethical. What is that? I don't know what that is. But yeah, I get you. No, I don't want to judge. It was more, I know that for me, it wasn't my world. It wasn't my subculture. I wanted to make cool shit, you know, but they also think their cool shit is cool. So, you know, but I wanted to make like real like startups and that was my thing. I would read Hacker News, you know, like Y Combinator and they were making cool stuff. but i wanted to make like real like startups and that was my thing i would read hacker news you know like white culminator and they were making cool stuff so i wanted to make cool stuff i mean that's a pretty cool way of life just if you romanticize it for a moment it's very romantic man it's very it's colorful you know like if i think about the memories what are some happy memories just like working working cafes or working in just the freedom that that uh envelops you for that way of life because anything is possible you just get up i think it was amazing like we would work like you would make friends and we would work until you know 6 a.m in bali for example with like um with andre my friend, who is still my best friend and another friend. And we would work until like the morning when the sun came up because at night, the coworker space was silent, you know, there was nobody else. And I would wake up like 6 PM or 5 PM. I would drive to the coworker space on a motorbike. I would buy like 30 hot lattes from a cafe how many? 30 because there was like there was like 6 people coming or we didn't know sometimes people would come in did you say 3-0-30? yeah nice and we would drink like 4 per person or something you know man it's Bali I don't know if they were powerful lattes you know but they were lattes and we would put them in a plastic bag and then we would drive there and all the coffee was like falling you know everywhere and then we'd go and have these coffees here and we'd put it in a plastic bag and then we'd drive there and all the coffee was like falling, you know, everywhere. And then we'd go and have these coffees here and we'd work all night. We'd play like techno music and everybody would just work in there. Like this was literally like business people. They would work in a startup and we'd all try and make something. And then the sun would come up and the morning people, you know, the yoga girls and yoga guys would come in, the yoga class at six and they say, hey, good morning. And we're like, we look like this, and we're like, what's up, how are you doing? And we didn't know how bad we looked, but it was very bad. And then we'd go home, sleep in like a hostel or a hotel and do the same thing and again and again and again. And it was this lock-in mode, like working. And that was very fun. So it's just a bunchin mode, you know, like working, and that was very fun. So it's just a bunch of you techno music blasting all through the night, yeah. More like... Oh, so rapid-paced. Not like this cheesy... See, for me, it's such an interesting thing because the speed of the beat affects how I feel about a thing. So the faster it is, the more anxiety I feel, but that anxiety is channeled into productivity. if it's a little too fast i start the anxiety overpowers you don't like drum and bass music probably not no it's too fast i mean for working as a i have to play with it it's like you can actually like i can adjust my yeah level of anxiety this must be a better word than anxiety. It's like productive anxiety. Yeah. Like whatever that is. It also depends what kind of work you do, right? Like if you're writing, you probably don't want drum and bass music. I think for codes, like industrial techno, this kind of stuff, kind of fast, it works well because you really get like locked in and combined with caffeine, you know, you go deep, you know? And I think you balance on this edge of anxiety because this caffeine is also hitting your anxiety and you want to be on the edge of anxiety with this techno running. Sometimes it gets too much, like stop the techno, stop the music. It's like, but, uh, but those are good memories, you know, and also like travel memories. Like you go from city to city and it feels like it's kind of like jet set life. Like it's, it's feels very beautiful. Like you. Like you're seeing a lot of cool cities. What was your favorite place that you remember that you visited? I think still like Bangkok is the best place. And back in Chiang Mai, I think Thailand is very special. Like I've been to the other place, like I've been to Vietnam and I've been to South America and stuff. I still think Thailand wins in how nice people are, how easy of a life people have there. Everything's cheap? Yeah. Well, Bangkok is getting expensive now. But Chiang Mai is still cheap. I think when you're starting out, it's a great place. Man, the air quality sucks. It's a big problem. And it's quite hot. But that's a very cool place. Pros and cons. I love Brazil also. My girlfriend is Brazilian, but I do love, not just because of that, but very cool place. Pros and cons. I love Brazil also. My girlfriend is Brazilian but I do love, not just because of that but I like Brazil. The problem still is the safety issue. It's like in America, it's localized. It's hard for Europeans to understand safety is localized to specific areas. So if you go to the right areas, it's amazing. Brazil is amazing. If you go to the wrong areas, maybe you die. Yeah. I mean, that's true. But it's not true in Europe. In Europe, it's much more average. That's true. That's true. You're right. You're right. It's more averaged out. Yeah. I like it when there's strong neighborhoods. When you're like, you cross a certain street and you're in a dangerous part of town. Man, yeah. I like it. I like there's certain cities in the United States like that. Yeah. I like that. And you're saying Europe is more meltdown. But you don't feel scared? Well, I don't. I like it. I like there's certain cities in the United States like that. Yeah. I like that. And you're saying Europe is more meltdown. But you don't feel scared? Well, I don't. I like danger. But you did BJJ. No, not even just that. I think danger is interesting. Yeah. So danger reveals something about yourself, about others. Also, I like the full range of humanity. Yeah. So I don't like the mellowed out aspects of humanity. I have friends. There's a lot of friends that are exactly like this. Like they go to like the kind of broken areas, you know? Like they like this reality. They like this authenticity more. They don't like luxury. They don't like... Oh yeah, I hate luxury. Yeah, it's very European of you. Wait, what's that? That's a whole nother conversation. So you quoted Fre freya stark quote to awaken quite alone in a strange town is one of the most pleasant sensations in the world yeah uh do you remember a time you awoken in a strange town and felt like that we're talking about small towns or big towns or man anywhere i think i wrote it in some blog post and like, it was a common thing when you would wake up and this was like, because I have this website. I started a website about this digital nomads like called nomadlist.com and there was a community. So it was like 30,000 other digital nomads because I was feeling lonely. So I built this website and I stopped feeling lonely. Like I started organizing meetups and making friends. And it was very common that people would say they would wake up and they would forget where they are. Yeah. Like for the first half minute and they had to look outside like where am I? Which country? Which sounds really like privileged but it's more like funny. Like you literally don't know where you are because you're so disrooted. But there's something, man, it's like Anthony Bourdain, you know? There's something pure about this kind of vagabond travel thing, you know? Like, it's behind me, I think. I don't, like, now I travel with my girlfriend, right? It's very different. But it is a romantic, like, memories of this kind of, like, vagabond individualistic solo life. But the thing is, it didn't make me happy. But it was very cool. But it didn't make me happy, right? It made me anxious. There's something about it that made you anxious. I don't know. I still feel like that. It's a cool, but it didn't make me happy, right? It made me anxious. There's something about it that made you anxious. I don't know, I still feel like that. It's a cool feeling. It's scary at first, but then you realize where you are and you, I don't know, it's like you awaken to the possibilities of this place when you feel like that. That's it. It's like, great. And it's even when you're doing some basic travel. I go to San Francisco or something. Yeah, you have like the novelty effect. Like you're in a new place, like here things are possible.. Yeah, you have like the novelty effect. Like, you're in a new place. Like, here things are possible. You know, you don't get bored yet and that's why people get addicted to travel, you know? Back to startups. You wrote a book on how to do this thing and gave a great talk on it. How to do startups. The book's called Make Bootstrappers Handbook. Yeah. I was wondering if you could go through some of the steps. It's idea, build, launch, grow, monetize, automate, and exit. There's a lot of fascinating ideas in each one. So idea stage. Yeah. How do you find a good idea? So I think you need to be able to spot problems. So for example, you can go in your daily life, like when you wake up and you're like, what is stuff that I'm really annoyed with? That's like in my daily life, that when you wake up and you're like, what is stuff that I'm really annoyed with that's like in my daily life that doesn't function well? And that's a problem that you can see, okay, maybe that's something I can write code about, you know, code for, and it will make my life easier. So I would say make like a list of all these problems you have and like an idea to solve it and see which one is like viable. You can actually do something and then start building it. So that's a really good place to start. Become open to all the problems in your life. Like actually start noticing them. I think that's actually not a trivial thing to do, to realize that some aspects of your life could be done way, way better. Yeah. Because we kind of very quickly get accustomed to discomforts. Exactly. Like for example, like do to discomforts. Exactly. Like for example, like doorknobs. Yeah. Like design of certain things. Like- New Lex Freeman doorknob. 50 times. That one I know how much incredible design work has gone into. It's a really interesting, doors and doorknobs. Just the design of everyday things, forks and spoons. It's going to be hard to come up with a fork that's better than the current fork designs. And the other aspect of it is you're saying like in order to come up with interesting ideas, you got to try to live a more interesting life. Yeah. But that's where travel comes in. Because when I started traveling, I started seeing stuff in other countries that you didn't have in Europe, for example, or America even. Like if you go to Asia, like dude, especially 10 years ago, nobody knew about this. Like the WeChat, all these apps that they already had before we had them, these everything apps, right? Like now Elon's trying to make X this everything app, like WeChat, same thing. Like in Indonesia or Thailand, you have one app that you can order food with, you can order groceries, you can order massage, you can order groceries you can order massage uh you can order car mechanic um anything you can think of is in the app and that stuff for example you you know that's called like arbitrage you can go to back to your country and build that same app for your country for example so you start seeing um problems you start seeing solutions that other countries already other people already did in the rest of the world. And also traveling in general just gives you more problems because travel is uncomfortable, you know. Airports are horrible. Airplanes are not comfortable either. There's a lot of problems you start seeing just getting out of your house, you know. But also you can, I mean, in the digital world, you can just go into different communities and see what can be improved by the others in that. Yeah, yeah. But what specifically is your process of generating ideas? Do you like do idea dumps? Like do you have a document where you just keep writing stuff? Yeah, I used to have like a, because when I wasn't making money, I was trying to like make this list of ideas to see like, so I need to build, I was thinking statistically already, like I need to build all these things and one of these will work out probably, you know? So I need to have a was thinking statistically already like I need to build all these things and one of these will work out probably you know so I need to have a lot of things to try and I did that right now I think like because I already have money I can do more things based on technology so for example AI when I found out about when stable diffusion came or chat GPT and stuff all these things were like I didn't start working with them because i had a problem i had no problems but i was very curious about technology and i was like playing with it and figuring out like first just playing with it and then you find something like okay this generates stable fusion generates houses very beautiful and interiors you know so it's less about problem solving it's more about the possibilities of new things you can create yeah but that's very risky because that's the famous like solution trying to find a problem yeah and usually it doesn't work and that's very common with with startup founders i think they they have tech but actually people don't need to tech right can you actually explain it'd be cool to talk about some of the stuff you created can you explain um this photoai.com? Yeah. So it's like Fire Your Photographer. The idea is you don't need a photographer anymore. You can train yourself as an AI model, and you can take as many photos as you want, anywhere, in any clothes, with facial expressions like happy or sad or poses, all this stuff. So how does it work? This is a link to a gallery of ones done on me. On the left, you have the prompts, the box. So you can write like, so model is your model, this is Lex Friedman. So you can write like model as a blah, blah, blah, whatever you want. Then press the button and it will take photos. It will take like one minute. What are you using for the hosting for the compute? Replicate. Replicate.com. They're very, very good. Okay, it's cool, like this interface-wise, it's cool that you're showing how long it's going to take. This is amazing. So it's taking a, I'm presuming you just loaded in a few pictures from the internet. Yeah, so I went to Google Images, typed in Lex Friedman, I added like 10 or 20 images. You can open them in the gallery and you can use your cursor as to yeah so some don't look like you so the hit and miss rate is like i don't know let's say like 50 50 or something but when i was watching your tweets like it's been getting better and better and better it was very bad in the beginning it was so bad but still people signed up to it you know there's there's two lexes it's great it's getting more and more sexual it's making me very uncomfortable man but that's the problem with these models because no we need to talk about this because the models in Stability Fusion so the photorealistic models that are like fine-tuned they were all trained in porn in the beginning and there was a guy called Hassan so I was trying to figure out how to do photorealistic AI photos and it was Stability Fusion by itself is not doing that well. The faces look all mangled, and it doesn't have enough resolution or something to do that well. But I started seeing these base models, these fine-tuned models, and people would train on porn, and I would try them, and they would be very photorealistic. They would have bodies that actually made sense, like body anatomy. But if you look at the photorealistic models that people use now still there's still core of porn there like of naked people so i need to prompt out the naked and everyone needs to do this with ai startups of imaging you need to prompt out the naked stuff you need to put a you know naked um you have to keep reminding the model you need to put clothes on yeah don't put naked because it's very risky. I have Google Vision that checks every photo before it's shown to the user to like check for NSFW. Like a nipple detector? Oh, NSFW detector. Because you get, the journalists get very angry if they, you know. If you sexualize. There was a journalist, I think, that got angry, that used this and was like, oh, it made me, it showed like a nipple because Google Vision didn't detect it. So, there's like, these kind of problems you need to deal with, you know? That's what I'm i'm talking about this is with cats but look at the cat face it's also kind of mangled you know i'm i'm uh i'm a little bit disturbed you zoom in on the cat if you want like like yeah this is a very sad cat it doesn't have a nose it doesn't have a nose but this man but this is the problem with AI startups because they all act like it's perfect. Like this is groundbreaking. But it's not perfect. It's like really bad, you know, half the time. So if I wanted to sort of update model as... Yeah, so you remove this stuff and you write like whatever you want, like in Thailand or something or in Tokyo. In Tokyo? Yeah. And then... you could say like at night with neon lights like you could add more detail I'll go in Austin do you think you'll know in Texas in Austin, Texas cowboy hats in Texas yeah as a cowboy as a cowboy it's gonna go so towards the porn direction it's man I hope not it's the end of my career or the beginning it depends as a cowboy. It's going to go so towards the porn direction. Man, I hope not. This is the end of my career. Or the beginning. It depends. We can send you a push notification when your photos are done. Yeah, all right, cool. Yeah, let's see. Oh, wow. So this whole interface you've built. Yeah. This is really well done. It's all jQuery. Do I still use jQuery? Yes. The only one? Still. After 10 years? To this day, you're not the only one. The entire web is PHP. It's PHP and jQuery the only one still after 10 years to this day you're not the only one the entire web is PHP it's PHP and jQuery and SQLite you're just like one of the top performers from a programming perspective that are still like openly talking about it but everyone's using PHP like if you look most of the web is still probably PHP and jQuery 70% it's because of WordPress right because the blogs yeah that's true. Yeah. That's true. I'm seeing a revival now. People are getting sick of frameworks. Like all the JavaScript frameworks are so like, what do you call it? Like wieldy. Like they're so... It takes so much work to just maintain this code. And then it updates to a new version. You need to change everything. PHP just stays the same and works. Yeah. And... Can you actually just speak to that stack? You build all your websites, apps, startups, projects, all of that with mostly vanilla HTML. JavaScript, jQuery, PHP, and SQLite. That's a really simple stack, and you get stuff done really fast with that. Can you just speak to the philosophy behind that? I think it's accidental because that's the thing I knew. Like I knew PHP, I knew HTML, CSS, you know, because you make websites. And when my startup started taking off, I didn't have time to, I remember putting on my to-do list, like learn Node.js because it's important to switch, you know, because this obviously is a much better language than PHP. And I never learned it. I never did it. Because at the end of time, these things were growing like this, and I was launching more projects, and I never had time. It's like one day, you know, I'll start coding properly, and I never got to it. I sometimes wonder if I need to learn that stuff. It's still a to-do item for me to really learn Node.js or Flask or these kind of... React, Fugitive. Yeah, React. And it feels like a responsible software engineer should know how to use these. But you can get stuff done so fast with vanilla versions of stuff. Yeah. It's like software developers if you want to get a job and there's like, you know, people making stuff like startups. And if you want to be entrepreneur, probably, maybe you shouldn't. I wonder if there's like, I really want to measure performance and speed. I think there's a deep wisdom in that. Yeah. I do think that frameworks and just constantly wanting to learn the new thing this complicated way of software engineering gets in the way i'm not sure what to say about that because definitely like you shouldn't build everything from just vanilla javascript or vanilla c for example yeah c plus plus when you're building systems engineering is like there's a lot of benefits for a pointer safety all that kind of stuff. So I don't know, but it just feels like you can get so much more stuff done if you don't care about how you do it. Man, this is my most controversial take, I think. And maybe I'm wrong, but I feel like there's frameworks now that raise money. They raise a lot of money. Like they raise $50 million, $100 million, $30 million. And the idea is that you need to make the developers, the new developers, like when you're 18 or 20 years old, right? Get them to use this framework and add a platform to it, like where the framework can... It's open source, but you probably should use the platform, which is paid, to use it. And the cost of the platforms to host it are a thousand times higher than just hosting it on a simple AWS server or a VPS on DigitalOcean, right? So there's obviously like a monetary incentive here. Like we want to get a lot of developers to use this technology and then we need to charge them money because they're going to use it in startups and then the startups can pay for the bills um but what that it kind of destroys the the information out there about learning to code because they you know they pay youtubers they pay influencers developer influencers a big thing to like um and same thing what happens with like nutrition and fitness or something same thing happens in developing they pay these influencers to promote the stuff use it it, make stuff with it, make demo products with it. And then a lot of people are like, wow, use this. And I started noticing this because when I would ship my stuff, people would ask me, what are you using? I would say, just PHP, jQuery, why does it matter? And people would start kind of attacking me, like, why are you not using this new technology, this new framework, this new thing? And I say, I don't know because this PHP thing works and I don't really, I'm optimizing for anything, it just works. And I say, I don't know, because this PHP thing works and I don't really I'm optimizing for anything, just do it, just works. And I never understood like why, like I understand there's new technologies that are better and there should be improvement, but I'm very suspicious of money, just like lobbying. There's money in this developer framework scene, there's hundreds of millions that goes to ads or influencers or whatever it can't all go to developers you don't need so many developers for a framework and it's open source to make a lot of more money on these startups so that's a really good perspective but in addition to that is like when you say better it's like can we get some data on the better because like I want to know from the individual developer perspective, and then from a team of five, team of 10, team of 20 developers, measure how productive they are in shipping features, how many bugs they create, how many security holes. PHP was not good at security for a while, but now it's good. In theory, is it though but now it's good. In theory. In theory. Is it, though? Now it's good. Now, as you're saying it, I want to know if that's true. Because PHP was just the majority of websites on the internet. Could be true. Is it just overrepresented? Same with WordPress. Yes, there's a reputation that WordPress has a gigantic number of security holes. I don't know if that's true. I know it gets attacked a lot because it's so popular. It definitely does have security holes, but maybe a lot of other systems have security holes as well. Anyway, I just sort of questioning the conventional wisdom that keeps wanting to push software engineers towards frameworks, towards complex, like super complicated sort of software engineering approaches that stretch out the time it takes to actually build a thing. 100%, and it's the same thing with big corporations. 80% of the people don't do anything. It's like, it's not efficient. And if your benchmark is like people building stuff that actually gets done and like for society, right? Like if we want to save time, we should probably use technologies that's simple, that's pragmatic, that's like, that works, that's not overly complicated, doesn't make your life like a living hell, you know? And use a framework when it obviously solves a problem, a direct problem that you... Of course, yeah, of course. I'm not saying you should code without a framework. You should use whatever you want, but... Yeah, I think it's suspicious, you know? And I think it's suspicious. When I talk about it on Twitter, there's a lot... There's this army comes out, you know? There's these framework armies. Yeah. Man, something my gut tells me. I want to ask the framework army, what have they built this week? It's the Elon question. What did you do this week? Yeah, and did you make money with it? Did you charge users? Is it a real business? Yeah. So, going back to the cowboy. First of all, every... Some don't look like you, right? But some do. Every aspect of this is pretty incredible. I'm also just looking at the interface. It's really well done. So, this is all just jQuery. Yeah. This is really well done. So, take me through the journey of photo AI. You don't know much. Most of the world doesn't know much about stable diffusion or any of this. Any of the generative AI stuff. So you're thinking, okay, how can I build cool stuff with this? What was the origin story of photo AI? I think it started because stable diffusion came out. So stable diffusion like the first generative image model, AI model. And I started playing with it. Like you could install it on your Mac. Like somebody forked it and made it work for MacBooks. So I downloaded it and cloned the repo and started using it to generate images. And it was like amazing. Like it would, I found it on Twitter because you see things happen on twitter and i would post what i was making on twitter as well and you could make any image you could write a prompt essentially write a prompt and then it generates a photo of that or image of that in any style like they would use like artist names to make like a picasso kind of style and stuff and i was trying to see like what is it good at is it good at? Is it good at people? No, it's really bad at people, but it was good at houses. So architecture, for example, I would generate like architecture houses. So I made a website called thishousedoesnotexist.org. And it generated like, they call it like house porn. Like house porn is like a subreddit. So this was Stable Diff Fusion, like the first version. So it looks really... You can click for another photo. So it generates like all these kind of non-existing houses. It is house porn. But it looked kind of good, you know? Like especially back then. It looks really good. Now things look much better. It's really, really well done. Wow. And it also generates like a description and you can upvote is it nice upvote it yeah man there's so much to talk to you about like the choices here it's really this is very scrappy in the bottom there's like a ranking of the most upvoted houses so these are the top voted if you go to all time you see quite beautiful ones yeah so this one is my favorite the number one it's like kind of like a Fodded houses. So these are the top Fodded. And if you go to old time, you see quite beautiful ones. Yeah. So this one is my favorite. The number one. It's like kind of like a... How is this not more popular? It was really popular for like a while, but then people got so bored of it. I think because I was getting bored of it too. Like just continuous house porn. Like everything starts looking the same. But then I saw it was really good at interior. is house porn like everything starts looking the same but then i saw it was really good at interior so i pivoted to interiorai.com where um i tried to like upload first generate interior designs and then i tried to do like it was a new technology called image to image where you can input an image like a photo and it would kind of modify the thing so you see it looks almost the same as photo as It has the same code essentially. Nice. So I would upload a photo of my interior where I lived and I would ask like, change this into like, I don't know, like maximalist design, you know? And it worked and it worked really well. So I was like, okay, this is a startup because obviously interior design, AI, and nobody's doing that yet. So I launched this and i was successful and made like in within a week made 10k 20k a month and now still makes like 40k 50k a month uh and it's been like two years so then i was like how can i improve this interior design i need to start learning fine tuning fine tuning is where you have this existing ai model and you fine tune it on the specific goal you wanted to do so i would find really beautiful interior design make a gallery and train a new model that was very good interior design and it worked and i used that as well and then for fun i uploaded photos of myself and here's where it happened uh and to train myself like and this would never work obviously and it worked and, it started understanding me as a concept. So my face worked and you could do, like, different styles. Like me as a, like, very cheesy, medieval warrior, all this stuff. So I was like, this is another startup. So now, I did avatar.ai.me. I couldn't get to.com. And this was, this was, yeah, avatar.ai.me. Well, now it's forwards to photo.ai because it pivoted. Got it. But this was... Is it still up? Yeah, avatar.ai.me. Well, now it's forwards to Photoi because it pivoted. Got it. But this was more like cheesy thing. So this is very interesting because this went so viral. It made like, I think like 150K in a week or something. So most money I ever made. And then big, this is very interesting, the big VC companies like Lenza, which are much better at iOS and stuff than me. I didn't have an iOS app. They quickly built an iOS app that does the same, and they found technology. And it's all open technology, so it's good. And I think they made like $30 million with it. They became like the top-grossing app after that. How do you feel about that? I think it's amazing, honestly. And it's not like... You didn't have a feeling like, ah, fuck. No, I was a little bit sad. Because all my products would work out. And I never had real fierce competition. And now I have fierce competition from a very skilled, high-talent iOS developer studio. And they already had an app. They had an app in the App Store for, I think, retouching your face or something. So they were very smart. They added these avatars to there. It's a feature. They had the users. They had an app in App Store for, I think, retouching your face or something. So they were very smart. They add these avatars to there. It's a feature. They had the users. They do a push notification to everybody. We have these avatars. Man, I think they made so much money. And I think they did a really great job. And I also made a lot of money with it. But I quickly realized it wasn't my thing because it was so cheesy. It was like kitsch, you know? It's kind of like me as a Barbie or me as a, you thing because it was so cheesy. It was like kitsch, you know? It's kind of like me as a Barbie or me as a, you know, it was too cheesy. I wanted to go for like, what's a real problem we can solve? Because this is going to be a hype. It's going to be, and it was a hype, these avatars. It's like, let's do real photography. Like, how can you make people look really photorealistic? And it was difficult. And that's why these avatars worked because they were all like in a cheesy, you know, Picasso style. And art is easy because you interpret the, all the problems that AI has with your face are like artistic, you know, if you call it Picasso. But if you make a real photo, all the problems with your face, like it just, you look wrong, you know? So I started making Photo AI, which was like a pivot of it, where it was like a pivot of it where it was like a photo studio um where you could take photos without actually needing a photographer needing a studio you don't just you know you just type it and i've been working on it for like the last yeah it's really incredible that journey is really incredible let's go to the beginning of photo ai though because i remember seeing a lot of really hilarious photos i think you were using yourself as a case study right yeah yeah so what uh there's a tweet here sold 100 000 in ai generated avatars and it's a lot like it's a lot for anybody it's a lot for me like uh making 10k a day on this you know that's That's amazing. That's amazing. And then the nested tweet, like that's the launch tweet. And then before that, it's like me hacking on it. Oh, I see. So that, okay. So October 26, 2022. I train an ML model on my face. Because my eyes are quite far apart. I learned when I did YouTube, I would put like a photo of like my DJ photo, you know, my mixture. People would say I'd look like a hammerhead shark. It's like the top comment. So then I realized my eyes are far apart. Yeah, the internet helps you figure out how you look, you know? Boy, do I love the internet. So first trap. Well, what is this? Wait. It's water from the waterfall. But the waterfall is in the back, you know? So what's going on? So this is, how much of this is real? It's all AI. It's all AI. Yeah. That's pretty good though for the early days. Exactly. So, but this was hit or miss. So you had to do a lot of curation because 99% of it was really bad. So these are the photos I uploaded. How many photos did you use? Only these. I will try more up to date pics later. These are the, these are the only photos you uploaded? Yeah. Wow. Wow. Okay. So like, you were learning all this super quickly. What, what are some like interesting details you remember from that time for what you had to figure out to make it work? And for people just listening, he uploaded just a handful of photos that don't really have a good capture of the face. And he's able to... I think it's cropped. It's cropped by the layout. But they're square photos, so they're 512 by 512. Because that's stable diffusion. But nevertheless, not great capture of the face. It's not like a collection of several hundred photos that is like 360. Exactly. I would imagine that too when I started. I was like, oh, this must be like some 3D scan technology, right? Yeah. So I think the cool thing with AI, it trains the concept of you. So it's literally like learning, just like any AI model learns, it learns how you look. So I did this and then I was getting so much, I was getting DMs, like telegram messages, like, how can I do the same thing? I want these photos. My girlfriend wants these photos. So I was like, okay, this is obviously a business, but I didn't have time to code it, make a whole like app about it. So I made an HTML pageml page um registered domain name and this was not even it was a stripe payment link which means you have a legit link to stripe to pay but there's no code in the back so all you know is you have customers that paid money then i added like a some uh a type form link so type form is a site where you can create like your own um input from like google forms so they would get an email with a link to the type form or actually just a link after the checkout and they could upload their photos so enter their email upload the photos and um and i launched it and i was like here first sale so it's october 2022 and i think within like the first 24 hours was like, I'm not sure, it was like a thousand customers or something. But the problem was I didn't have code to automate this. So I had to do manually. So the first few hundred, I just literally took their photos, trained them, and then I would generate the photos with the prompts and had this text file with the prompts and I would do everything manually. And this quickly became way too much. But that's another constraint. Like I was forced to code something and I would do everything manually. And this quickly became way too much. But that's another constraint. I was forced to code something up that would do that. And that was essentially making it into a real website. So at first it was the type form and they uploaded through the type form. Stripe checkout type form. And then you were like, that image is downloaded. Did you write a script to export? No, it's download the image myself. It's a zip file. It's literally. And you unz the zip file and you unzipped it yeah unzipped it yes and then no I'm because you know do things don't skill Paul Graham says right so and then I would train it and then I would email them the photos I think from my personal email say here's your here's your avatar and they liked it they were like wow it's amazing you emailed them with your personal email because I didn't have an email address on this domain and this is like a hundred people yeah and then you know who signed up uh like a man i cannot say but really famous people like really really like billionaires famous tech billionaires did it and i was like wow this is crazy and i sent i was like so scared to mess them so i said thanks so much for using my sites you know he's like yeah amazing app great work. He's like, yeah, amazing app, great work. So I was like, this is different than normal reaction. It's Bill Gates, isn't it? Can't say anything. Just like shirtless pics. GDPR, you know, like privacy, European regulation. I cannot share anything. But I was like, wow. But this shows like, so you make something and then if it takes off very fast, you're like, it's validated. You know, you're like, here's something that people really want, but then also thought this is hype, this is going to die down very fast and it did. Cause it's too cheesy. But you had to automate the whole thing. How'd you automate it? So like what's the AI component? Like how hard was that to figure out? Okay. So that's actually in many ways, the easiest thing, cause there is all these platforms already back then there was platforms for, uh, fine-tune stable to figure out. Okay, so that's actually in many ways the easiest thing because there is all these platforms already back then. There was platforms for fine-tune stable diffusion. Like now I use Replicate. Back then I used different platforms which was funny because that platform when this thing took off I would tweet because I tweet always like how much money these websites make. And then so the you call it vendor, right? The platform that did the GPUs. They increased their price for training from $3 to $20 after they saw that I was making so much money. So immediately my profit is gone because I was selling them for $30. And I was in a slack with them, like saying, what is this? Like, can you just put it back to $3? They say, yeah, maybe in the future. We're looking at it right now. I'm like, what are you talking about? Like, you just took all my money, you know? And they're smart. Well, they're not that smart because like, you're also have a large platform and a lot of people respect you. So you can literally come out and say that. But they're not sure. I think it's like kind of dirty to cancel a company or something. I prefer just bringing my business elsewhere, but there was no elsewhere back then. Right. So I started talking to other AI model model ml platforms so replicate was on those platforms and i started dming the ceo say can you please create like it's called dream booth this fine tuning of yourself can you add this to your site because i need this because i'm being price guards and he said no because it takes too long to run it takes half an hour to run and we don't have the gpus for it i said please, please, please, please. And then after a week, they said, we're doing it. We're launching this. And then this company became, it was like not very famous company, it became very famous with this stuff because suddenly everybody was like, oh, we can build similar apps like Avatar apps and everybody started building Avatar apps and everybody started using Replicate for it. And it was from these early DMs with like the CEO, like Ben Furrish, very nice guy. And he was like, they never priced gouge CEO, Ben Furrish, very nice guy. And he was like, they never prized gouge me. They never treated me bad. They've always been very nice. It's a very cool company. So you can run any ML model, any AI model, LLMs, you can run on here. And you can scale. Yes, they scale. Yeah, yeah. And I mean, you can do now. You can click on the model and just run it already. It's super easy. You log in with GitHub. That's great. And by running it on the website, then you can click on the model and just run it already. It's like super easy. You log in with GitHub. That's great. And by running it on the website, then you can automate with the API. You can make a website that runs the model. Generate images, generate text, generate video, generate music, generate speech. Video. Find two models. They do anything, yeah. It's a very cool company. Nice. And you're like growing with them, essentially. They grew because of you, because it's like a big use case. Yeah. The website even looks weird now. it started as like a machine learning platform that was like i didn't even understand what it did it was just too too ml you know like you would understand because you're in the ml world i wouldn't know it's noob friendly yeah exactly and i didn't know how it worked and um but i knew that they could probably do this and they did it they built the models and now i use them for everything and we trained like i think now like 36 000 models 36 000 people already but is there some tricks to fine-tuning to like the collection of photos that are provided like how do you like yes man so many hacks the hack it's like 100 hacks to make it work what what's what is my secret? Well, not, not the secrets, but the more like insights maybe about the human face and the human body. Like what, what kind of stuff gets messed up a lot? I think people, well, man, it's a living, people don't know how they look. So, um, yeah, they generate photos of themselves and then they say, ah, it doesn't look like me. Yeah. But it does. Then I, you know, you can check the training process. It does look like you, but you don't know how you look. So there's a face dysmorphia of yourself that you have no idea how you look. Yeah, that's hilarious. I mean, I've got one of the least pleasant activities in my existence is having to listen to my voice and look at my face. So I get to really have to sort of come into terms with the reality of how I look and how I sound. And everybody. People often don't, right? Really? You have a distorted view, perspective. I know that like I would, if I would make a selfie, how I think I look, that's nice. Other people think that's not nice. But then they make a photo of me. I'm like, that's super ugly. But then they're like, no, that's how you look and you look nice. You know, so how other people see you is nice. Other people think that's not nice. But then they make a photo of me. I'm like, that's super ugly. But then they're like, no, that's how you look and you look nice. You know? So how other people see you is nice. So you need to ask other people to choose your photos. Yeah, yeah, yeah. You shouldn't choose them yourself because you don't know how you look. Yeah, you don't know what makes you interesting, what makes you attractive, all this kind of stuff. And a lot of us, this is a dark aspect of psychology, we focus on some small flaws. Yeah. This is why I hate plastic surgery, for example. People try to remove the flaws when the flaws are the thing that makes you interesting and attractive. I learned from the hammerhead shark eyes, this stuff about you that looks ugly to you. And it's probably that what makes you original makes you nice and people like it about you. And it's not like, oh my God. And people notice it. People notice your hammerhead eyes, you know. But it's like, that's not like, oh my God. And people notice it. People notice your hammerhead eyes, you know, but it's like, that's me, that's my face. So I love myself and that's confidence and confidence is attractive. Yes. Right? Confidence is attractive. But yes, understanding what makes you beautiful. It's the breaking of symmetry makes you beautiful. It's the breaking of the, the average face makes you beautiful. All of that. Yeah. And obviously different for men and women, a different age, all this kind of stuff. But underneath it all, the personality, all of that, when the face comes alive, that also is the thing that makes you beautiful. But anyway, you have to figure all that out with AI. Yeah, one thing that worked was like, people would upload full body photos of themselves. So I would crop the face, right? Because then the model knew better that we're training mostly the face here, but then I started losing resemblance of the body because some people are skinny, some people are muscular, whatever. So you want to have that too. So now I mix full body photos in the training with face photos, face crops, and it's all automatic. And I know that other people, they use, again, AI models to detect what are the best photos in this training set and then train on those. But it's all about training data and that's with everything in AI. How good your training data is is in many ways more important than how many steps you train for, like how many months or whatever with the GPUs. The goals. Do you have any guidelines for people of how to get good data, how to give good data to fine tune on? Like the photos should be diverse. So for example, if I only upload photos with a brown shirt or green shirt, the model will think that I'm training the green shirt. So the things that are the same every photo are the concepts that are trained. What you want is your face to be the concept of strength, um, and everything else to be diverse, like different. So diverse lighting as well. Diverse everything. Yeah. Outside, inside. Uh, but there's no, like, this is the problem. There's no like manual for this and nobody knew we were all just, especially two years ago, we're all hacking, trying to test anything, anything you can think of and, uh, it's, anything you can think of. It's frustrating. It's one of the most frustrating and also fun and challenging things to do with AI because it's a black box. Carpati, I think, says this. We don't really know how this thing works, but it does something, but nobody really knows why. We cannot look into the model of an LLM. What is actually in there? We just know it's like a 3D matrix of numbers, right? So, it's very frustrating, because some things you think they're obvious, that they will improve things, will make them worse. And there's so many parameters you can tweak. So, you're testing everything to improve things. I mean, there's a whole field now of mechanistic interpretability that like studies that tries to figure out how it tries to break thing apart and understand how it works but you know there's also the data side and the actual like consumer facing product side of figuring out how you get it to generate a thing that's beautiful or interesting or naturalistic all that kind stuff. And you're like at the forefront of figuring that out about the human face. And humans really care about the human face. They're very vain. Like me, you know, like I want to look good in your podcast, for example. Yeah, for sure. And one of the things I actually would love to like rigorously use photo AI because for the thumbnails, I take portraits of people. I didn't, I don't know shit about photography. I basically used your approach for photography. I like Google. How do you take photographs? Camera, lighting. And also it's tough because maybe you could speak to this also, but like with photography, no offense to any, they're true artists, great photographers, but like with photography no offense to any they're true artists great photographers but like people like take themselves way too seriously think you need a whole lot of equipment you definitely don't want one light you need like five lights and like and you have to have like the lenses and i talked to to a guy an expert of uh shaping the sound in a room okay and because i was thinking i'm gonna do a podcast studio whatever i should probably like treat the do a sound treatment on the room and like when he showed up and analyzed the room he thought everything i was doing was horrible. And that's when I realized like, you know what? I don't need experts in my life. Did he kick him out of the house? No, I didn't kick him. I said, thank you. Thank you very much. Thank you. Great tips. Bye. I just felt like there is, you know, focus on whatever the problems are. Use your own judgment. Use your own instincts. Don't listen to other people. And only consult other people when there's a specific problem. And you consult them not to offload the problem onto them, but to gain wisdom from their perspective. Even if their perspective is ultimately one you don't agree with, you're going to gain wisdom from that. And I ultimately come up with a PHP solution. PHP and jQuery solution to my PHP studio. I got a little suitcase. I use like just the basic sort of consumer type of stuff. One light. It's great. Yeah. And look at you. You're like one of the top podcasts in the world and you get millions of views and it works. And the people that spend so much money on optimizing for the best sound, for the best studio, they get like 300 views you know so what is this about this is about that either you do it really well or also that a lot of these things don't matter like what matters is probably the content of the podcast like you get the interesting guests focus on stuff that matters yeah and i think this is very common they call it gear acquisition syndrome like gas like people in any industry do this they just buy all the stuff there was a meme recently like you buy what's the name for the guy that buys all the stuff before he even started doing the hobby right um marketing you know marketing does that to people they want to buy this stuff yeah but like man you can make a you can make a hollywood movie on an iphone you know? If the content is good enough, and it will probably be original because you would be using an iPhone for it, you know? So that said, so the reason I brought that up with photography, there is wisdom from people. And one of the things I realized, you probably also realized this, but how much power light has to convey emotion convey emotion you just take one light and move it around so you're sitting in the darkness move it around your face the different positions are having a second light potentially you can play with how a person feels just from a generic face it's interesting like you can make people attractive you can make them ugly you can make them scary you can make them lonely all of can make them scary. You can make them lonely, all of this. And so you kind of start to realize this. And I would definitely love AI help in creating great portraits of people. Guest photos, yeah. Guest photos. For example, that's a small use case. But for me, I suppose it's an important use case because I want people to look good, but I also want to capture who they are, maybe my conception of who they are, what makes them beautiful, what makes their appearance powerful in some ways. Sometimes it's the eyes. Oftentimes it's the eyes. But there are certain features of the face can sometimes be really powerful. It's also kind of awkward for me to take photographs. So I'm not collecting enough photographs for myself to do it with just those photographs. If I can load that off onto AI and then start to play with like lighting. You should do this and you should probably do it yourself. Like you can use photo, but it's even more fun if you do it yourself. So you train the models. You can learn about like control net. Control net is where, for example, your photos in your podcast are usually like from the angle, right? So you can create a control net face pose that's always like this. So every model, every photo you generate uses this control net pose, for example. I think it would be very fun for you to try out that stuff. Do you play with lighting at all? Do you play with lighting, with pose? Actually, this week or recently, there's a new model came out that can adjust the light of any photo, but also AI image with stable diffusion. I think it's called ReLight. And it's amazing. You can upload kind of like a light map. So for example, red, purple, blue, and use that light map to change the light on the photo you input. It's amazing. So there's for sure a lot of stuff you can do. What's your advice for people in general on how to learn all the state-of-the-art AI tools available? Like you mentioned, new models coming out all the time. Yeah. How do you pay attention how do you stay on top of everything I think you need to join Twitter X you know X is amazing now and the whole AI industry is on X and they're all like anime avatars so it's funny because my friends ask me this like what who should I follow to stay up to date and I say go to X and follow all the AI anime models that this person is following or follows. And I send them some URL and they all start laughing. Like, what is this? But they're real like people hacking around in AI. They get hired by big companies and they're on X. And most of them are anonymous. It's very funny. They use anime avatars. I don't. But those people hack around and they publish what they're discovering um they talk about papers for example uh so yeah definitely x it's great almost exclusively all the people i follow are ai people yeah it's a good time now well but also just brings happiness to my to my soul because there's so much turmoil on twitter yeah like politics and stuff there's battles going on it's like a war zone and it's nice to just go into this happy place to where people are building stuff yeah 100 i like twitter that for that most like building stuff like yeah seeing other because it inspires you to build and it's it's just fun to see other people share what they are discovering and then you're like okay i'm gonna make something's it's just fun to see other people share what they are discovering and then you're like okay i'm gonna make something too it's just super fun and so if you want to start going x and then i would go to replicate and start trying to play with models and when you have something that kind of you manually enter stuff you set the parameters something that works you can you can make an app out of it or a website can you speak a little bit more to the process of it becoming better and better and you speak a little bit more to the process of it becoming better and better and better? So I had this photo guy and a lot of people using it. There was like a million or more photos a month being generated. And I discovered, I was testing parameters like increase the step count of generating a photo or changing the sampler, like a scheduler. Like you have DPM, two caras, all these things I don't know anything about, but I know that you can choose them when you generate an image, and they have different resulting images. But I didn't know which ones were better. So I would do it myself, test it. But then I was like, why don't I test on these users? Because I have a million photos generated anyway. So on like 10% of the users, I would randomly test parameters. And then I would see if they would, because you can favorite a photo or you can download it. I would see if they would, because you can favorite a photo or you can download it. I would measure if they favorite or like the photo. And then I would A-B test and you test for significance and stuff, which parameters were better and which were worse. So you started to figure out which models are actually working well. Exactly. And then if it's significant enough data, you switch to that for the whole, you know, all the users. And so that was, that was like the breakthrough to make it better. Just use the users to improve themselves. And I tell them when they sign up, we do sampling, we do testing on your photos with random parameters. And that worked really well. I don't do a lot of testing anymore because it's like, I kind of reached like a diminishing point where it's like, it's kind of good. But that's, there was a breakthrough. Yeah. So it's really about the parameters, the models that choose and letting the users help do the search in the space of models and parameters for you. Yeah, yeah. But actually, so like Stable Diffusion, I use 1.5, 2.0 came out, Stable Diffusion XL came out, all these new versions, and they were all worse. And so the core scene of people are still using 1.5 because it's like, it's also not like what you call neutered people are still using 1.5 because it's like, it's also not like what you call neutered, like they neutered like to make it super like with safety features and stuff. Yeah. So most of the people are still on Stable Diffusion 1.5 and meanwhile Stable Diffusion, the company went like, the CEO left, a lot of drama happened because they couldn't make money. And, yeah, so they gave us, it's very interesting, they gave us this open source model that everybody uses. They raised like hundreds of millions of dollars. It all, they didn't make any money with it or not a lot. They did an amazing job and now everybody uses open source model for free and they did, you know, it's amazing. Like it's, it's amazing. You're not even using the latest one. No, and the strange thing is that this company raised hundreds of millions, but the people that are benefiting from it are really small. Like people like me will make these small apps that are using the model. And now they're starting to charge money for the new models, but the new models are not so good for people. They're not so open source, right? Yeah, it's interesting because open source is so impactful in the AI space, but you wonder like, what is the business model behind that? But it's enabling this whole ecosystem of companies that they're using the open source models. It's kind of like those frameworks, but then they didn't, you know, bribe enough influencers to use it and they didn't charge money for the platform, you know? Okay, so back to your book and the ideas. We didn't even get to the first step uh generating ideas so you had notebook and you're filling it up how do you know when an idea is a good one like what you have this just flood of ideas how do you pick the one that you actually try to build man mostly you don't know like mostly i choose the ones that are most viable for me to build. Like I cannot build a space company now, right? Would be quite challenging. But I can build something. Did you actually write down like space company? No, I think asteroid mining would be very cool. Because like you go to an asteroid, you take some stuff from there, you bring it back, you sell it. You know, it's, but then you need to do, and you can hire someone to launch the thing. So all you need is like the robot that goes to the asteroid. You know, and the robotics interesting you can hire someone to launch the thing. So all you need is like the robot that goes to the asteroid, you know, and the robotics interesting. Like I want to also learn robotics. So maybe that could be, I think both the asteroid mining and the robotics. Yeah. Together. I feel like. No, exactly. This is, this is the, we do this not because it's easy, but because we thought it would be easy exactly that's me with that's me with asteroid mining exactly that's why I should do this it's not nomadlist.com it's not it's asteroid mining you have to like build stuff you have to gravity is really hard to overcome yeah but it seems man I sound like idiots probably not but it sounds quite approachable like relatively approachable you don't have to build the rockets. Oh, you use something like SpaceX to get out of space. Yeah, you hire SpaceX to send your, you know, this dog robot or whatever. So, is there actually exist a notebook where you wrote down asteroid mining? No, I used, back then I used Trello. Trello. Yeah, but now I don't really. I use Telegram. I write down like saved messages and I have like idea. You type to yourself on Telegram. You know, like, because you use WhatsApp, right, I think. So, you have like message to yourself right down there. You type to yourself on Telegram. You know like because you use WhatsApp right I think. So you have like message to yourself thing also. Yeah so you talk to yourself on Telegram. Yeah use like a notepad not forget stuff and then I pin it you know. I love how like you're not using super complicated systems or whatever. You know people use Obsidian now there's a lot of these Notion where you have systems for note taking you're not your notepad your notepad. You're not, you're Notepad, you're notepad.exe guy, if you're a Windows user. Man, I saw some YouTubers doing this like, there's a lot of these productivity gurus also, and they do this whole like iPad with a pencil. And then I also had an iPad and I also got the pencil and I got this app where you can like draw on paper, like draw like a calendar, you know, like people, students use this and you can do coloring and stuff. And I'm like, dude, I did this for a week. And I'm like, what am I doing in my life? Like I could just write it as a message to myself and it's good enough, you know? Speaking of ideas, you shared a tweet explaining why the first idea sometimes might be a brilliant idea. The reason for this, you think, is the first idea submerges from your subconscious and was actually boiling in your brain for weeks, months think, is the first idea submerges from your subconscious and was actually boiling in your brain for weeks, months, sometimes years in the background. The eight hours of thinking can never compete with the perpetual subconscious background job. This is the idea that if you think about an idea for eight hours versus the first idea that pops into your mind. And sometimes there is subconscious stuff that you've been thinking about for many years. That's really interesting. I mean like emerges. I wrote it wrong because I don't know. I'm not native English, but it emerges from your subconscious, right? It comes from the, like a water is your subconscious. In here it's boiling. And then when it's ready, it's like ding, a second microwave comes out. And there you have your idea. You think you have ideas like that? Yeah. All the time. 100%. It's just stuff that's been like there. Yes. Yeah. And I also, it comes up and I bring it, I send it back, you know, like send it back to the kitchen to boil more. Yeah. And it's like a soup of ideas that's cooking. It's 100%. This is how my brain works. And I think most people. But it's also about the timing. Sometimes you have to send it back not just because you're not ready but the world is not ready. Yes. So many times like startup founders are too early with their idea. Yeah, 100%. the timing. Sometimes you have to send it back, not just because you're not ready, but the world is not ready. Yes. So many times, like, startup founders are too early with their idea. Yeah, 100%. Robotics is an interesting one for that because, like, there's been a lot of robotics companies that failed. Yeah. Because it's been very difficult to build a robotics company, make money, because there's the manufacturing, like, the cost of everything. The intelligence of the robot is enough. It's not sufficient to create a compelling enough product from which to make money. So there's this long line of robotics companies that have tried, they had big dreams, and they failed. Yeah, like Boston Dynamics. I still don't know what they're doing, but they always upload YouTube videos, and it's amazing. But I feel like a lot of these companies don't have a solution looking for a problem for now. Military, obviously, is useless. But do looking for a problem for now, you know? Military obviously uses, but like, do I need like a robotic dog now for my house? I don't know, like it's fun, but it doesn't really solve anything yet. I feel the same kind of with VR, like it's really cool, like Apple Vision Pro is very cool. It doesn't really solve something for me yet. And that's kind of the tech looking for a solution, right? But one day it will. When the personal computer, when the Mac came along, there was a big switch that happened. It somehow captivated everybody's imagination. The application, the killer apps, became apparent. You can type in a computer. But did they become apparent immediately? Back then, they also had this thing where, like, we don't need these computers. They're like a hype. And it also went in kind of like waves. Yeah, but the hype is the thing that allowed the thing to proliferate sufficiently to where people's minds would start opening up to it a little bit, the possibility of it. Right now, for example, with the robotics, there's very few robots in the homes of people. Exactly, yeah. The robots that are there are Roombas, so the vacuum cleaners, or they're Amazon Alexa. Yeah, or dishwasher. I mean, it's essentially a robot. Yes, but the intelligence is very limited, I guess, is one way we can summarize all of them. Except Alexa, which is pretty intelligent, but is limited with the kind of ways it interacts with you. That's just one example. Yeah. I sometimes think about that as like if some people in this world were kind of born in the whole existence, it's like they were meant to build the thing. Yeah. You know? I think I sometimes wonder like what I was meant to do. Because you have these plans for your life, you have these dreams. I think you're meant to build robots. Okay, me personally. Maybe. That's the sense I've had, but it could be other things. Hilariously enough, be the thing I was meant to be is to talk to people. Yeah. Which is weird, because meant to be is to talk to people yeah which is weird because i always was anxious about talking to people it's like a really yeah i'm scared of this i was scared yeah exactly i'm scared of you it's just anxiety throughout social interaction in general i'm an introvert that hides from the world so yeah it's really strange yeah but that's that's also kind of life. Like life brings you to, it's very hard to super intently kind of choose what you're gonna do with your life. It's more like surfing. You're surfing the waves, you go in the ocean, you see where you end up, you know? Yeah, yeah. And there's universe has a kind of sense of humor. Yeah. I guess you have to just, yeah, allow yourself to be carried away by the waves. Exactly, yeah, yeah. Have you felt that way in your life? Yeah, all the time. Like, yeah. That's like, I think that's the best way to live your life. So allow whatever to happen. Like, do you know what you're doing in the next few years? Is it possible that it'll be completely, like, changed? Possibly. I think relationships, like, you want to hold a relationship, right? You want to hold your girlfriend, you want to become wife and all this stuff. But you should, I think you should stay open to where, like, for example, where you want to live. Like, I don't know, we don't know where we want to live, for example. That's something that will figure itself out. It will crystallize where, you know, you will get sent by the waves to somewhere where you want to live, for example. What are you going going to do i think that's a really good way to live your life it's i i think most stress comes from trying to control like hold things like um it's kind of buddhist you know you need to like lose control let it loose and things will happen like when you do mushrooms when you do drugs like psychedelic drugs the people that starts that are like control freaks get bad trips, right? Because you need to let go. Like, I'm a pretty control freak, actually. And when I did mushrooms when I was 17, it was very good. And then at the end, it wasn't so good because I tried to control it. It was like, ah, now it's going too much. You know, now I need to, let's stop. Bro, you can't stop it. You need to go through with it, you know? And so I think it's a good metaphor for life. I think that's a very tranquil way to lead your life. Yeah, actually, when I took ayahuasca, that lesson is deeply within me already that you can't control anything. I think I probably learned that the most in jiu-jitsu. So just let go and relax. And that's why I had just an incredible experience. There's literally no negative aspect of my ayahuasca experience or any psychedelics I've ever had. Some of that could be with my biology, my genetics, whatever, but some of it was just not trying to control. Yeah. Just surf the wave. For sure. I think most stress in life comes from trying to control. So once you have the idea, step two, build. How do you think about building the thing once you have the idea? I think you should build with the technology that you know. So for example, Nomad List, which is like this website I made to figure out the best cities to live and work as digital nomads. It wasn't a website. It launched as a Google spreadsheet. So it was a public Google spreadsheet anybody could edit. And I was like, I'm collecting like cities where we can live as digital nomads with the internet speed, the cost of living, other stuff. And I tweeted it. And back then, I didn't have a lot of followers. I had a few thousand followers or something. And it went viral for my skill viral back then, which was five retweets. And a lot of people started editing it. And there was hundreds of cities in this list from all over the world with all the data. It was very crowdsourced. And then I made that into a website. So figuring out like what technology you can use that you already know. So if you cannot code, you can use a spreadsheet. If you cannot use a spreadsheet, like whatever, you can always use, for example, a website generator like Wix or something or Squarespace, right? Like you don't need to code to build a startup. All you need is an idea for a product, build something like a landing page or something, put a stripe button on there and then make it. And if you can code, use the language that you already know and start coding with that and see how far you can get. You can always rewrite the code later. Like the tech stack, it is not the most important of a business when you're starting out a business. The important thing is that you validate that there's a market, that there's a product that people want to pay for. So use whatever you can use. If you can't code, use spreadsheets, landing page generators, whatever. Yeah, and the crowdsourcing element is fascinating. It's cool. It's cool. It's cool when a lot of people start using it. You get to learn so fast. Yeah. I've actually did the spreadsheet thing. You share a spreadsheet publicly, and I made it editable. Yeah. It's so cool. It's interesting things start happening. Yeah. I did it for a workout thing because I was doing a large amount of push-ups and pull-ups. Yeah, I remember this, man. And like, and well, also Google Sheets is pretty limited in that everything's allowed. So people could just write anything in any cell and they can create new sheets, new tabs. And it just exploded. And one of the things that I really enjoyed is there's very few trolls because actually other people would delete the trolls. There would be like this weird war. They want to protect the thing. It's an immune system that's inherent to the thing. It becomes a society in the spreadsheet. And then there's the outcasts who go to the bottom of the spreadsheet and they would try to hide messages. And they're like, I don't want to be with the cool kids up at the top of the spreadsheet. So I'm going to go to the bottom. Self-organizing. Yeah. It's fast. I mean, but that kind of crowdsourcing element is really powerful. And if you can create a product that use that to its benefit, that's really nice. Like any kind of voting system, any kind of rating system for A and B testing is really, really, really fascinating. So anyway, so Nomad List is great. I would love for you to talk about that. But one sort of way to talk about it is through you building hood maps. So you did an awesome thing, which is document yourself building the thing and doing so in just a handful of days like three, four, five days. So people should definitely check out the video in the blog post. Can you explain what Hood Maps is and what this whole like this process was? So I was traveling and I was still trying to find like problems, right? And I would go, I would discover that like everybody's experience of a city is different because they stay in different areas. Yeah. So I'm from Amsterdam. And when I grew up in Amsterdam, or didn't grow up, but I lived there, university, I knew that center is like, in Europe, the centers are always tourist areas. So they're super busy. They're not very authentic. They're not really Dutch culture. It's Amsterdam tourist culture, you know? Dutch culture, it's Amsterdam tourist culture, you know? So when people would travel to Amsterdam, I would say, don't go to the center, go to, you know, southeast of the center, the Jordan or the pipe or something, more hipster areas, like a little more authentic culture of Amsterdam. That's where I would live, you know, and where I would go. And I thought this could be like an app where you can have like a Google Maps and you put colors over it. You have like areas that are like color-coded, like red is tourist, green is rich, you rich you know green money yellow is hipster you can figure out where you need to go in the city when you travel because i was traveling a lot i wanted to go to the cool spots so just use color color yeah yeah and i would use a canvas so i thought okay what i need i need to did you know that you would be using a canvas no i didn't know it was possible because i didn't know i This is the cool thing. People should really check it out. Is this how it started? Because you honestly capture so beautifully the humbling aspects, the embarrassing aspects of not knowing what to do. It's like, how do I do this? And you document yourself. Yeah, you're right. Dude, I feel embarrassed about myself. It's called being alive nice um so you're like you don't know anything about so canvas is a way html yes html5 thing that allows you to draw shapes draw images just draw pixels essentially so yeah and that's that was special back then because before you could only have like elements right so you So you want to draw a pixel, use a convos. And I knew I needed to draw pixels because I need to draw these colors. And I thought like, okay, I'll get like a Google maps, iframe embeds, and then I'll put a div on top of it with the colors and I'll do like opacity 50, you know, so it kind of shows. So I did that with convos and then I started drawing. And then I felt like, obviously other people need to edit this because I cannot draw all these things myself. So I crowdsourced it again and I, you would draw on the map and then it would send the pixel data to the server. It would put it in a database. And then I would have a robot running like a cron job, which every week would calculate or every day would calculate like, okay, so Amsterdam center, there's like six people say it's tourist, this part of the center, but two people say it's like hipster. Okay, so the tourist part wins, right? It's just an array. So find the most common value in a little pixel area on a map. So if most people say it's tourist, it's tourist and it becomes red. And I would do that for, you know, all the GPS corners in the world. Can I just clarify, do you have to be, as a human that's contributing to this, do you have to be in that location to make the label? No, people just type in cities and go, like, go berserk and start drawing everywhere. Would they draw shapes or would they draw pixels? Man, they drew, like, crazy stuff. Like, offensive symbols, I cannot mention. They would draw penises. I mean, that's obviously a guy thing. I would do the same thing, draw penises. That's the first thing. When I show up to Mars and there's no cameras, I'm drawing a penis on the same thing. Man, I did it in the snow, you know? But the penises did not become a problem because I knew that not everybody would draw a penis and not in the same place. So most people would use it fairly. So just if I had enough crowdsourced data, so you have all these pixels on top of each, it's like a layer of pixels and then you choose the most common pixel so yeah it's just like a poll but in visual format and it works and within a week I had enough data and and it was like cities that did really well like Los Angeles a lot of people started using it like most data is in Los Angeles because Los Angeles has defined neighborhoods yeah and not just in terms of the the official labels but like what they're known for yeah what are the do you did you provide the categories that they were allowed to use as labels the colors yeah as colors so it's just like I think you can see there's like hipster tourist rich business there's always a business area right and then there's like hipster, tourist, rich business. So there's always a business area, right? And then there's a residential. Residents is gray. So I thought those were the most common things in the city, kind of. And a little bit meme-y, like it's almost fun to label it. Yeah. I mean, obviously it's simplified, but you need to simplify this stuff. You know, you don't want to have too many categories. And it's essentially just like using a, you know, paintbrush where you select a color in the bottom, you select the category, and you start drawing. There's no instruction. There's no manual. And then I also added tagging so people could write something on a specific location. So don't go here or here's nice cafes and stuff. And man, the memes that came from that. And I also added uploading so that the tags could be uploaded. So the memes that came from that, and I also added uploading so that the tags could be uploaded. So the memes that came from that is like amazing. Like people in Los Angeles would write crazy stuff. It would go viral in all these cities. You can allow, allow your location and it will probably send you to Austin. Okay. So we're looking, uh, oh boy. Uh, drunk hipsters. Airbro and bros. Airbro and bros. Hipster girls who do cocaine. I saw a guy in a fish costume get beaten up here. Yep. That seems also accurate. Overpriced and underwhelming. Let me see. Let me make sure this is accurate uh let's see dirty sixth for people who know austin know that that's important to label sixth street is uh famous in austin dirty six drunk fat boys accurate drunk fat bros continued on six very well on the rose west sixth drunk douche bros from fret to douche douche i mean it's very Drunk Fat Bros Continued on 6th Very well known West 6th Drunk Doucha Bros They go from frat to douche Douche I mean it's very accurate So far They only let Hot people live here That's I think that might be accurate It's like the district Exercise freaks on the river Yeah that's true Dog runners Accurate Saw a guy in the fish costume get beat up here i want to know this story like so that's that's all user contributed yeah and that's like stuff i couldn't come up with because i don't know austin i don't know the memes here in the subcultures and then me as a user can upvote or downvote this so this is completely that's like because i read it you know upvote downvote to get from there and that's really really really powerful single people with dogs accurate at which point did it go from colors to the actually showing the text I think I added the text like a week a week after and so here's like the pixels so that's really cool the pixels how do you go from there that's a huge amount of data so there's we're now looking at an image where it's just a sea of pixels that call it different colors in a city so how do you combine that to be a thing that actually makes it some sense i think here the problem was that you have this data but it's like it's not locked to one location yeah so i had to normalize it so when you click when you draw on the map it will show you the specific pixel location and you can convert the pixel, it will show you the specific pixel location and you can convert the pixel location to a GPS coordinate, right? Like a lattice of longitudes. But the number will have a lot of commas or a lot of decimals, right? Because it's very specific. Like it's like this specific part of the table. So what you want to do is you want to take that pixel and you want to normalize it by removing like decimals, which I discovered. So that you're talking about this neighborhood or this street, right? So that's what I did. I just took the decimals off and then I saved it like this. And then it starts going to like a grid. And then you have like a grid of data. You get like a pixel map kind of. And you said it looks kind of ugly. So then you smooth it. Yeah, I started adding blurring and stuff. I think now it's not smooth again because I liked it better. People like the pixel look kind of. Yeah, a lot of people use it and it keeps going viral. And every time my Maps bill, like Mapbox, I had to stop using, I first used Google Maps, it went viral. And Google Maps, it was out of credits. So I, and I had to, so funny, during when I launched it, it went viral. Google Maps, the map didn't load anymore. It says over limit you need to contact enterprise sales and i'm like i but i need now like a map so and i don't want to contact enterprise sales i don't want to go on a call schedule with some calendar so i switched to mapbox and then had mapbox for years and then it went viral and i had a bill of twenty thousand dollars000. It was like last year. So they helped me with the bill. They said, you know, you can pay less. And then I now switched to like an open source kind of map platform. So it's a very expensive project and never made any dollar money. But it's very fun, but it's very expensive. What do you learn from that? So like from that experience, because when you leverage somebody else's through the API. Yeah. I mean, I don't think a map hosting service should cost this much, you know, but I could host it myself, but that would be, I don't know how to do that, you know, but I could do that. Yeah, it's super complicated. I think that the thing is more about like, you can't make money with this project. I tried to do many things to make money with it and it hasn't worked. You talked about like possibly can't make money with this project. I tried to do many things to make money with it and it hasn't worked. You talked about like possibly doing advertisements on it or some, or people sponsoring it, yeah. Well, it's really surprising to me that people don't want to advertise on it. I think map apps are very hard to like monetize. Like Google maps also doesn't really make money. Like sometimes you see these ads, but I don't think there's a lot of money there. You could put a banner ad, but it's kind of ugly. And the project is kind of cool. It's kind of fun to subsidize it. It's kind of a little bit part of Nomad List. I put it on Nomad List in the cities as well. But I also realized you don't need to monetize everything. Some projects are just cool. It's cool to have hood maps exist. I want this to exist, it's like, it's cool to have hood maps exist. I want this to exist, right? Yeah. There's a bunch of stuff you've created that I'm just glad exists in this world. That's true. And it's a whole nother puzzle. And I'm surprised to figure out how to make money off of it. I'm surprised maps don't make money, but you're right. It's hard. It's hard to make money. Because there's a lot of compute required to actually bring it to life. And also, where do you put the ad, right? Like, if you have a website, you can put like an ad box, or you can do like a product placement or something. But you're talking about a map app where 90% of the interface is a map. So what are you going to do? You're going to like, like, it's hard to figure out where is this. Yeah. And people don't want to pay for it. No, exactly. Because if you make people pay for it, you lose 99% of the user base and you lose the crowdsourced data. So it's not fun anymore. It stops being accurate. Right? So you kind of, they pay for it by crowdsourcing the data, but then, yeah, it's fine. You know, it doesn't make money, but it's cool. But that said, Nomad List makes money. Yeah. So what was the story behind Nomad List? So Nomad List started because I was in Chiang Mai in Thailand, which is now like the second city here. And I was, you know, working on my laptop. I met like other nomads there and I was like, okay, this seems like a cool thing to do, like working on your laptop in a different country, kind of travel around. But back then the internet everywhere was very slow. So the internet was fast in, for example, kind of travel around. But back then, the internet everywhere was very slow. So the internet was fast in, for example, Holland or United States. But in a lot of parts in South America or Asia, it was very slow, like 0.5 megabits. So you couldn't watch a YouTube video. Thailand weirdly had quite fast internet. But I wanted to find other cities where I could go to work on my laptop and travel. But we needed fast internet. So I was like, let's crowdsource this information with a spreadsheet. And I also needed to know the cost of living because I didn't have a lot of money. I had $500 a month. So I had to find a place where the rent was like $200 per month or something, where I had some money that I could actually rent something. And there was Nomad List and it still runs. I think it's now almost 10 years. So just to describe how it works, I'm looking at Chiang Mai here. There's a total score. It's ranked number two. Yeah, that's like a Nomad score. 4.82. Like by members, but it's looking at the internet. In this case, it's fast, fun, temperature, humidity, air quality, safety, food safety, crime, racism, or lack of crime, lack of racism, educational level, power grid, vulnerability, to climate change, income level. It's a little much, you know? English, it's awesome. It's awesome, walkability. Keep adding stuff because for certain groups of people certain things really matter and this is really cool yeah happiness I'd love to ask about that nightlife free wifi AC female friendly freedom of speech yeah not so good in Thailand you know values derived from national statistics I like how that one I need to do that because the data sets are usually national. They're not on city level, right? So I don't know about the freedom of speech between Bangkok or Chiang Mai. I know them in Thailand. I mean, this is really fascinating. So this is for city. Yeah. It's basically rating all the different things that matter to you in internet. And this is all crowdsourced. Well, so it started crowdsourced, but then I realized that you can download more accurate data sets from like public sources, like World Bank. They have a lot of public data sets, United Nations, and you can download a lot of data there, which you can, you know, freely use. Like I started getting problems with crowdsourced data where, for example, people from India, they really love India and they would submit the best scores for everything in India. And not just like one person, but like a lot of people, they would love to pump India. And I'm like, I love India too, you know, but that's not valid data. So you started getting discrepancies in the data between where people were from and stuff. So I started switching to datasets and that was mostly datasets. But one thing that's still crowdsourced is so people add where they are, they add their travels to their profile. And I use that data to see which places are upcoming and which places are popular now. So about half of the ranking you see here is based on actual digital nomads who are there. You can click on a city, you can click on people, you can see the people, the users that are actually there. And it's like 30,000 or 40,000 members. So these people are in Austin now. 1,800 remote workers in Austin now, of which eight plus members checked in. Members who will be here soon. Yeah, so we have meetups. So people organize their own meetups. And we have about, I think like 30 per month. So it's like one meetup a day. And I don't do anything. They organize themselves. So I just, it think like 30 per month. So it's like one meetup a day. And I don't do anything. They organize themselves. So I just, it's a whole black box. It just runs and I don't do a lot on it. It pulls data from everywhere and it just works. Cons of Austin is too expensive, very sweaty and humid. Now difficult to make friends. Difficult to make friends. Interesting, right? I didn't know that. Difficult to make friends. But it's all crowds, but mostly it's pros. Yeah. Pretty safe, fast internet. I don't understand why it says not safe for women to check the data set. It feels safe. The problem with a lot of places like United States is that it depends per area, right? So if you get like city level data or nation level data, it's like Brazil is the worst because the range in like, safe and wealthy and not safe is like huge. So you can't say many things about Brazil. So once you actually show up to a city, how do you figure out what area, like where to get fast internet? For example, like for me, it's consistently a struggle to figure out. Still. Hotels with fast Wi-Fi, for example. Like a place, okay, okay. I show up to Wi-Fi, for example. Like a place. Okay, okay. I show up to a city. There's a lot of fascinating puzzles. I haven't figured out a way to actually solve this puzzle. When I show up to a city, figuring out where I can get fast internet connection, and for podcasting purposes, where I can find a place with a table that's quiet. Right. That's not easy. Construction structure sounds all kinds of sounds you have to learn about all the sources of sounds in the world and also like the the quality of the room because the more um the emptier the room and like if it's just walls without any curtains or any of this kind of stuff then there's uh echoes in the room anyway but you you figure out that a lot of hotels don't have tables. They don't have like normal. They have this weird desk, right? Yeah, they have this desk. But it's not a center table. Yep. And if you want to get a nicer hotel where it's more spacious and so on, they usually have these boutique, fancy-looking, modernist tables. Yeah, it's too design-y. It's too design-y. They're not really real tables. They don't- It's too designy. It's too designy. They're not really real tables. What if you get an IKEA? Buy IKEA. Yeah, before you arrive, you order an IKEA. Like no mess do this, they get desks. I feel like you should be able to show up to a place and have the desk. Like it's not, unless you stay in there for a long time. Just the entire assembly, all that. Airbnb is so unreliable. It's the range in quality that. Airbnb is so unreliable. The range in quality that you get is huge. Hotels have a lot of problems, pros and cons. Hotels have the problem that the pictures somehow never have good representative pictures of what's actually going to be in the room. That's a problem. Fake photos, man. If I could have the kind of data you have on nomad list for hotels yeah man and i feel like you can make a lot of money on that too yeah the booking fees affiliate right i thought about this idea because we have the same problem like i go to hotels and there's specific ones that are very good and i know now the chains and stuff and but even if even if you go some chains are very bad in a specific city and very good in other cities. And each individual hotel has a lot of kinds of rooms. Yeah. Like some are more expensive, some are cheaper and so on. But you can get the details of what's in the room, like what's the actual layout of the room, what is the view of the room. 3D scan it. I feel like as a hotel, you can win a lot. So first you create a service that allows you to have like high resolution data about a hotel. Then one hotel signs up for that. I would 100% use that website to look for a hotel instead of the crappy alternatives that don't give any information. And I feel like there'll be this pressure for all the hotels to join that site. And you can make a shit ton of money because hotels make a lot of money. I think it's true but the problem is with these hotels, it's the same with the airline industry. Why does every airline website suck when you try to book a flight? It's very strange. Why does it have to suck? Obviously, there's competition here. Why doesn't the best website win? What's the explanation for that? Man, I thought about this for years. So I think it's like, I have to book the flight anyway. I know there's a route that they take and I need to book, for example, Qatar Airlines. And I know there's a route that they take and I need, like I need to book for example Qatar Airlines and I, and I need to get through this process. So, the, and with hotels, similar, you need a hotel anyway. So, do you have time to like figure out the best one? Not really, you kind of just need to get the place booked and you know, you need to get the flight and you'll go through the pain of this process and that's why this process always sucks so much with hotels and airline websites and stuff because they don't have an incentive to improve it because generally only for like a super upper segment of the market i think like super high luxury it affects the actual booking right i don't know i think that that's an interesting theory i think that must be a different theory. My theory would be that great engineers, like great software engineers, are not allowed to make changes. Basically, like, there's some kind of bureaucracy. There's way too many managers. There's a lot of bureaucracy. And great engineers show up. They try to work there, and they're not allowed to really make any contributions, and then they leave. And so you have a lot of mediocre software engineers that are not really interested in improving any other thing. And like literally they would like to improve the stuff, but the bureaucracy of the place, plus all the bosses, all the high up people are not technical people probably. They don't know much about web dev. They don't know much about programming. So they just don't give any respect. Like you have to give the freedom and the respect to great engineers as they try to do great things. That feels like an explanation. Like if you were a great programmer, would you want to work at America Airlines or? No, no. I'm torn on that because I actually, as somebody who lost program, would love to work at America Airlines so I can make the thing better. Yeah, but I would work there just to fix it for myself, you know? Yeah, for yourself. And then you just know how much suffering you alleviated. Yeah, for the world society. Imagine all the thousands, maybe millions of people that go to that website and have to click like a million times. It often doesn't work. It's clunky, all that kind of stuff. You're making their life just so much better. Yeah. But there must be an explanation that has to do with managers and bureaucracies. I think it's money. Do you know booking.com? Sure. It's the biggest booking website in the world. It's Dutch, actually. And they have teams, because my friend worked there, they have teams for a specific part of the website, like a 10 by 10 pixels area, where they run tests on this. So they run tests like and they're famous for this stuff like, oh, there's only one room left, right? With this red letters, like one room left, book now. And they got a fine from the European Union about this. Kind of interesting. So they have all these teams and they run the test for 24 hours. They go to sleep, they wake up next day, they come to the office and they see, okay, this performed better. This website has become a monster, but it's the most revenue generating hotel booking website in the world. It's number one. So that shows that it's not about like user experience. It's about like, I don't know, about making more money and, you know, not every company, but, you know, if they they're optimizing it's a public company if they're optimizing for money but you can optimize for money by disrupting like making it way better yeah but it's always startups they start with disrupting like booking all started as a startup in 1997 and then they become like the old shit again like you know Uber now starts to become like a taxi again right it was very good in the beginning now it's kind of like taxis now in many places are better they're nicer than Ubers right so it's like the circle i think some of it is also just it's hard to have ultra competent engineers yeah like stripe seems like a trivial thing but it it's hard to pull off like why was it so hard for Amazon to have buy with one click? Which I think is a genius idea. Make buying easier. Like, make it as frictionless as possible. Just click a button once and you bought the thing. Yeah. As opposed to most of the web was a lot of clicking and it often doesn't work. Like with the airlines. Remember the forms would delete? You could click next, submit, and it would 404 or something or your internet would go down, your modem. Yeah, man. And I would have an existential crisis. Like the frustration would take over my whole body. And I would just want it to quit life for a brief moment there. Yeah. I'm so happy the form stays in Google Chrome now when something goes wrong. But that's, so Google, somebody at Google improves society with that, right? Yeah. And one of the challenges of Google is to have the freedom to do that. They don't anymore. There's a bunch of bureaucracy. Yeah, at Google. There's so many brilliant, brilliant people there. But it just moves slowly. Yeah. I wonder why that is. Maybe that's the natural way of a company. But you have people like Elon who rolls in and just fires most most of the folks and always operate they push the company to operate as a startup even when it's already big yeah but I mean Apple does this like I studied in business school Apple does competing product teams that operate as startups so it's three to five people they make something they have multiple teams who make the same thing the best team wins so you need to I think you need to emulate like a free market inside a company to make it entrepreneurial you know and you need entrepreneurial mentality in a company to to come up with new ideas and do it better so one of the things you do really really well is learn a new thing like you're trying to you have an idea you try to build it and then you learn everything you need to in order to build it you have your current skills but you learn just the minimal amount of stuff so you're a good person to ask like what how do you learn how do you learn quickly and effectively in just the stuff you need you did uh just by way of example you did a 30 days learning session on 3d yeah where you documented yourself giving yourself only 30 days to learn everything you can about 3D. Yeah, I tried to learn virtual reality, because I was like, this was like, same as AI. It came up suddenly, like 2016, 2017, with, I think, HTC Vive, these big VR glasses before Apple Vision Pro. And I was like, oh, this is going to be big, so I need to learn this. So I know nothing about 3D. I installed, like, I think Unity and, like, Blender and stuff. And I started learning all this stuff because I thought this was like a new, you know, nascent technology that was going to be big. And if I had the skills for it, I could use this to build stuff. And so I think with learning for me, it's like, I think learning is so funny because people always ask me like, how do you learn to code? Like, should I learn to code? And I'm like, I don't know. Like every day I'm learning, it's kind of of cliche but every day i'm learning new stuff so every day i'm searching on google or asking out chat gpt how to do this thing how to do this thing every day i'm getting better at my skill so you never stop learning so the whole concept of like how do you learn well you never end so where do you want to be do you want to know a little bit do you want to know a lot do you want to do it for your whole life or so i think taking action is the best step to learn so making things like you know nothing just start making things okay so like how to make a website search how to make a website or nowadays you ask ggpt how do i make a website where do i start it generates code for you right copy the code put it in a file save it open it in google chrome or whatever you have a website and then you start tweaking with it and you start, okay, how do I add a button? How do I add AI features, right? Like nowadays. So it's like by taking action, you can learn stuff much faster than reading books or tutorials. Actually, I'm always curious. Let me ask perplexity. How do I make a website? I'm just curious what he would say. I hope it goes with like really basic vanilla solutions. How do I make a website? I'm just curious what he would say. I hope it goes with like really basic vanilla solutions. Define your website's purpose. Choose a domain name. Select a web hosting provider. Choose a website, a builder, a CMS. Website builder platform. It feels like Wix or Squarespace is what I said. Yeah. Landing page. What do I, how do I say if I want to program it myself? Design your website. Create essential pages. Yeah. Landing page. What do I, how do I say if I want to program it myself? Design your website, create essential pages. Yeah. Even tells you to launch it, right? Launch your website. Cool. Well, I mean, you could do that. Yeah, but this is literally it. Like it's, this is, if you want to make a website. This is the basis, like Google Analytics. But you can't make nomad lists with this way. You can. With Wix. Like with. Ah, no, you can, you can get pretty far, I think. You can get pretty far. think you can get pretty far these website builders are pretty advanced like all you need is a grid of images right yeah that are clickable that open like another page yeah you can get quite far how do i learn to program choose a programming language to start with. Free code camps is good. Work through resources systematically. Practice coding regularly for 30, 60 minutes a day. Consistency is key. Join programming communities like Reddit. Yeah, it's pretty good. Yeah. It's pretty good. So I think it's a very good starting ground because imagine you know nothing and you want to make a website, you want to make a startup. This is like, that's why, man, the power of AI for education is going to be insane. Like people anywhere can ask this question and start building stuff. Yeah, it clarifies it for sure. And just start building. Like keep, build, build. Like actually apply the thing. Whether it's AI or any of the programming for web development. Yeah. like keep build build like actually apply the thing whether it's AI or any of the programming for web development yeah just have a project in mind which I love the idea of like 12 startups in 12 months or like build a project almost every day just build the thing yeah and get it to work and finish it every single day that's a cool experiment I think that was the inspiration it was a girl who did 160 websites and get it to work and finish it every single day. That's a cool experiment. I think that was the inspiration. It was a girl who did 160 websites in 160 days or something, literally mini websites. And she learned to code that way. So I think it's good to set yourself challenges, you know? You can go to some coding bootcamp, but I don't think they actually work. I think it's better to do, like, for me, out of the dark, like self-learning and setting yourself like challenges and just getting in but you need discipline you know you need discipline to keep to keep doing it and coding you know coding is very it's a steep learning curve to get in it's very annoying working with computers is very annoying uh so it can be hard for people to keep doing it, you know? Yeah, that thing of just keep doing it and don't quit, that urgency that's required to finish a thing. That's why it's really powerful when you documented this, the creation of hood maps, or like a working prototype, that there's just a constant frustration, I guess. It's like, oh, okay, how do I do this? And then you look it up and you're like, okay, you have to interpret the different options you have. Yeah, man, I understand. And then just try it. And then there's a dopamine rush of like, ooh, it works. Cool. Man, it's amazing. And I live streamed it. It's on YouTube and stuff. People can watch it. And it's amazing when things work. Look, it's just like amazing that you, I look very not, I don't look far ahead. So I only look, okay, what's the next problem to solve? And then the next problem. And at the end, you have a whole app or a website or a thing, you know? But I think most people look way too far ahead. You know, they look, it's like this poster again. Like you shouldn't, you don't know how hard it's going to be. So you should only look like for the next thing, the next little challenge, the next step, and then see where you end up. And assume it's going to be easy. Yeah, exactly. Like be naive about it because you're going to have very difficult problems. A lot of the big problems won't be even tech, will be like public, right? Like maybe people don't like your website. Like you will get canceled for a website, for example. Like a lot of things can happen. What's it like building in public like you do? Like openly, where you're just iterating quickly and you're getting people's feedback. So there's the power of the crowdsourcing, but there's also the negative aspects of people being able to criticize. So man, I think haters are actually good because I think a lot of haters have good points. And it takes like stepping away from the emotion of like, your website sucks because blah, blah, blah. And you're like, okay, just remove this. Your website sucks because it's personal. What did he say? Why did he not like it? And you figure out, okay, he didn't like it because the signup was difficult or something. Or it wasn't the data. They say, no, this data is not accurate or something. Okay, I need to improve the quality of the data. This hater has a point. It's dumb to completely ignore your haters, you know? And also, man, I think I've been there when I was like 10 years old or something. You're on the internet just shouting crazy stuff. That's like most of Twitter, you know, or half of Twitter. So you have to take it with a grain of salt. Yeah, man, you need to grow a very thick skin like on Twitter, on X. Like people say, but I mute a lot of people. Like I found out I muted already 15,000 people recently. I checked. So in 10 years, I muted 15,000 people. So that's like- Like that's one by one manual? Yeah. Oh, wow. So 1,500 people per year. And I don't like to block because then they get angry. They make a screenshot and they say, ah, you blocked me. So I just mute and they disappear and it's amazing so you mentioned reddit so hood maps that make it to the front page of reddit yeah yeah it did yeah yeah yeah it did it was amazing and my server almost went down and i was checking like google analytics was like 5 000 people on the website or something crazy. And it was at night, it was amazing. Man, I think nowadays, honestly, TikTok, YouTube Reels, Instagram Reels, a lot of apps get very big from people making TikTok videos about it. So let's say you make your own app, you can make a video for yourself. Like, oh, I made this app. This is how it works, blah, blah, blah. And this is why I made it works blah blah blah and this is why I made it for example and this is why you should use it and if it's a good video it will take off and you will get man I got like $20,000 extra per month or something from a TikTok from one TikTok video like it made a photo AI by you or somebody else by some random guy so there's all these AI influencers that they write about they show AI apps and they then they ask money later like when a video goes viral all I do is do it again and send me $4,000 or something. I'm like, okay, I did that, for example. But it works. Like TikTok is a very big platform for user acquisition. Yeah. And organic, like the best user acquisition I think is organic. You don't need to buy ads. You probably don't have money when you start to buy ads. So use organic or write a banger tweet right that's can make an app take off as well well i mean yeah fundamentally create cool stuff and have just a little bit of a following enough to like for for the cool thing to be noticed and then it becomes viral if it's cool enough yeah and you don't need a lot of followers anymore because that on x and a lot of platforms because tiktok x I think Instant Reels also, they have the same algorithm now. It's not about followers anymore. It's about they test your content on a small subset, like 300 people. If they like it, it gets tested to 1,000 people and on and on. So if the thing is good, it will rise anyway. It doesn't matter if you have half a million followers or 1,000 followers or 100. What's your philosophy of monetizing? How to make money from the thing you build? Yeah, so a lot of startups, they do like free users. So you could sign up, you could use an app for free, which is, it never worked for me well, because I think free users generally don't convert. And I think if you have VC funding, it makes sense to get free users because you can spend your funding on ads and you can get like millions of people come in predictably how much they convert and give them like a free trial whatever and then they sign up but you need to have that flow worked out so well for you to make it work that you need like it's very difficult i think it's best to start and just start asking people for money in the beginning so show your app like what are you doing in your landing page like make a demo or whatever, a video. And then if you want to use it, pay me money, pay $10, $20, $30. I would ask more than $10 per month, like Netflix, like $10 per month. But Netflix is a giant company. They can afford to make it so cheap, relatively cheap. If you're an individual, like an indie hacker, like you are making your own app, you need to make like at least $30 or more on a user to make it uh worthy for you you need to make money you know and it builds a community of people that actually really care about the product also yeah making a community like making a discord is very normal now every ai app has a discord and you have the developers and the users together in like a discord they And they talk about, they ask for features, they build together. It's very normal now. And you need to imagine like if you're starting out, getting a thousand users is quite difficult. Getting a thousand pages is quite difficult. And if you charge them like $30, you have 30K a month. That's a lot of money. That's enough to like- Live a good life. Yeah, live a pretty good life. I mean, there could be a lot of costs associated with hosting. Yeah, so that's another thing. I make sure my profit margins are very high. So I try to keep the cost very low. I don't hire people. I try to negotiate with like AI vendors now. Like, can you make it cheaper, you know? Which is, I discovered this. You can just email companies and say, can you give me a discount? Because it's too expensive. And they say, sure, 50%. I'm like, wow, very good. And I didn't know this. You can just ask. And especially in like now it's kind of recession, you can ask companies like, I need a discount or I kind of need to like, you don't need to be an asshole about it. Say, you know, I kind of need a discount or I need to go maybe to another company. So maybe like a discount like here and there. And they say, sure. A lot of them will say yes. Like 25% discount, 50% discount. Because you think the price on the website is the price of the API or something. It's not. Like, you know. And also you're a public-facing person. Oh, that helps also. And there's love and good vibes that you put out into the world. Like you're actually legitimately trying to build cool stuff. So a lot of companies probably want to associate with you because you're trying to do. Yeah, it's like a secret hack. But I think even without... Secret hack. Be a good person. It depends how much discount they will give. They'll maybe give more. But that's why you should shitpost on Twitter so you get discounts maybe. Yeah, yeah. And also when it's crowdsourced, I mean, paying does prevent spam or help prevent spam. Also, and also the, when it's crowdsourced, I mean, paying does prevent spam or help prevent spam. Also, yeah. It gives you high quality users. High quality users. Free users are, sure, but they're horrible. Like, it's just like millions of people, especially with AI startups, you get a lot of abuse. So you get millions of people from anywhere just abusing your app, just, just hacking it and whatever. There's something on the internet you mentioned like 4chan discovered hood maps yeah but i love fortune i don't love fortune but you know what i mean like they're so crazy especially back then like that's it's kind of funny what they do you know i actually uh what is it there's a new documentary on netflix anti-social network or something like that. That was really, was fascinating. Just 4chan, just the, you know, the spirit of the thing, 4chan and HN. People misunderstand 4chan. It's so much about freedom and also like the humor involved in fucking with the system and fucking with the man. It's just anti-system for fun. The dark aspect of it is you're having fun, you're doing anti-system stuff but like the nazis always show up and it's somehow that shit started happening it's drifting somehow yeah it's school shootings and stuff so it's a very difficult topic but i do know it's especially early on i think 2010 i would go to fortune for fun and they would post like crazy offensive stuff and this was just to scare off people. So we showed to other people, say, hey, do you know this internet website 4chan? Just check it out. And they'd be, but dude, what the fuck is that? I'm like, no, no, you don't understand. That's to scare you away. But actually when you go through a scroll, there's like deep conversations. And they would already be, this was like a normie filter, like to stop. So kind of cool, but yeah. It goes dark. It goes dark. And if you have those people show up they'll for the fun of it do a bunch of racist things and all that kind of stuff you were saying but everything is I think it was never man I'm not a fortune but like it was always about provoking it's just provocateurs you know but the the provoking in the case of hood maps or something like this can damage the a good thing like you know a little poison in a town is always good it's like the tom waits thing but you don't want too much otherwise it destroys the town it destroys the thing they're kind of like pen testers you know like penetration testers hackers yeah they just test your app for you and then you add some stuff like i add like uh i add like a nsfw word list they would say like bad words so a, I add like a NSFW word list. They would say like bad words. So when they would write like a bad words, they would get forwarded to YouTube, which was like a video. It was like a very relaxing video that's like kind of ASMR with like glowing jelly streaming like this to relax them, you know, or cheese melting on the toast. Cheese melting, nice. Chill them out. I like it. But actually a lot of stuff, I didn't realize how much originated in 4chan in terms of memes. I didn't, Rickroll, I didn't understand, I didn't know that Rickroll originated in 4chan. There's just so many memes, like most of the memes that you think. The word roll I think comes from 4chan, like not the word roll, but like in this case, in the meme use, like you would get like roll doubles because every, it was like post IDs on 4chan. So they were kind of like random. So if I get doubles, like this happens or something. So you'd get like two, two. Anyway, it's like a betting market kind of on these doubles and these post IDs. It's so much funny stuff. Yeah. I mean, that's the internet, that's purist. But yeah, again, the dark stuff kind of seeps in. Yeah. And it's nice to keep the dark stuff to like some low amount. It's nice to have a bit of noise in the darkness, but not too much. Yeah. But again, like you have to pay attention to that with, I mean, I guess spam in general. You have to fight that with Nomad List. How do you fight spam? Man, I use GPT-4 now. It's amazing. So I have like user input. I have reviews. People can review cities and I don't need to actually sign up. It's anonymous reviews. And they write like whole books about like cities and what's good and bad. So I run it through GPT-4 now and I ask like, is this a good review? Like, is it offensive? Is it racist or some stuff? And then it sends me a message on Telegram when it rejects reviews and I check it and it's, man, it's so on point. Automated. Yes, and it's so accurate. It understands double meanings. I have GPT-4 running on the chat community. It's a chat community of 10,000 people and they're chatting and they start fighting with each other. And I used to have, human moderators was very good, but they would start fighting the human moderator. Like this guy's biased or something. I have GPT-4 and it's, it's, it's really, really, really, really good. It understands humor and understands like, like you could say something bad, but it's kind of like a joke and it's kind of not like offensive so much. So it shouldn't be deleted. Right. It understands that, you know, so it's, i would love to have a gpt4 based filter of like of different kinds of for like x yeah i thought this week like i tweeted like a fact check like you can click fact check and then gpt4 look gpt4 is not always right about stuff right but it can give you a general fact check on a tweet. Usually what I do now, when I write something difficult about economics or something or AI, I put in GPT-4 and say, can you fact check it? Because I might have said something stupid. And the stupid stuff always gets taken out by the replies. Like, oh, you said this wrong. And then the whole tweet kind of doesn't make sense anymore. So I ask GPT-4 to fact check a lot of stuff. So fact check is a tough one. Yeah. But it would be interesting to sort of rate a thing based on how well thought out it is and how well argued it is. Yeah. That seems more doable. That seems like more doable. Like it seems like a GPT thing because that's less about the truth and it's more about the rigor of the thing. Exactly. And you can ask that. You can ask in the prompt, like, I don't know, like, for example, do you think, create like a ranking score of X Twitter replies where I should this post be if we rank on like, I don't know, integrity, reality, like fundamental deepness or something, interestness. And it will give you that pretty good score probably. I mean, Elon can do this with Croc, right? He can start doing, using that to to check replies. Because the reply section is like chaos. Yeah. And actually the ranking of the replies doesn't make any sense. Doesn't make sense. And I like to sort in different kinds of ways. Yeah. And you get too many replies now. If you have a lot of followers, I get too many replies. I don't see everything. And I a lot of stuff I just miss. And I don't, I want to too many replies now. If you have a lot of followers, I get too many replies. I don't see everything. And a lot of stuff I just miss. And I want to see the good stuff. And also the notifications or whatever. It's just complete chaos. It'd be nice to be able to filter that in interesting ways, sort it in interesting ways. Because I feel like I miss a lot. And what surfaced for me, I just took a random comment by a person with no followers that's positive or negative it's like okay if it's a very good comment it should happen but it should probably look a little bit more like do these people have followers because they're probably more engaged in the platform right oh no if it's i don't even care about how many followers if you're ranking by the quality of the comment, great. Yeah. But not just like randomly, like chronological, just a sea of comments. Yeah, yeah, yeah. It does make sense. Yeah. Yeah. X could be very improved with that, I think. One thing you espouse a lot, which I love, is the automation step. So like once you have a thing, once you have an idea and you build it, and it actually starts making money and it's making people happy there's a community of people using it you want to take the automation step of automating the things you have to do as little work as possible for it to keep running indefinitely can you like explain your philosophy there what you mean by automate yeah so the general theory of starters would be that when when it starts like you start making money you start hiring people to do stuff right do stuff that you like marketing for example do stuff that you would do in the beginning yourself um and whatever community management and organizing meetups for nomadless for example there would be a job for example and i thought like uh i don't have the money for that and i don't really want to run like a big company with a lot of people because it's a lot of work managing these people. So I've always tried to like automate these things as much as possible. And this can literally be like for Nomadless, it's literally like a, it's not a different other service, but it's like a webpage where you can organize your own meetup, set a schedule, a date, whatever. You can see how many Nomads will be there at that date so you know there will be actually enough nomads to meet up, right? And then when it's done, it sends a tweet out on the Nomadist account. There's a meetup here. It sends a direct message to everybody in the city who are there, who are going to be there. And then people show up on a bar, and there's a meetup, and that's fully automated. And for me, it's so obvious to make this automatic. Why would you, why would you have somebody organize this? Like, um, it makes more sense to automate it. And this with most of my things, like I figure out like how to do it with code. And I think, especially now with AI, like you can automate so much more stuff than before because AI understands things so well. Like before I would use if statements, right? Now you ask GPT, you put something in GPT for and in API and it sends back like, this is good, this is bad. Yeah. So you basically can now even automate sort of subjective type of things. This is the difference now. Yeah. And that's very recent, right? But it's still difficult. I mean, that step of automation is difficult to figure out how to, is you're basically delegating everything to code. And it's not trivial to take that step of automation is difficult to figure out how to, is you're basically delegating everything to code. And it's not trivial to take that step for a lot of people. So when you say automate, are you, are you talking about like cron jobs? Yes, man, a lot of cron jobs. A lot of cron jobs. It's like, I literally, I log into the server and I do like pseudo crontab dash E. And then I go into the editor and I do like sudo crontab dash E and then I go into editor and I write like hourly and then I write PHP, you know, do this thing dot PHP and that's a script and that script does a thing and it does it then hourly. That's it. And that's how all my websites work. Do you have a thing where it like emails you or something like this or email somebody managing the thing if something goes wrong? I have these web pages I make. They're called like health checks. It's like healthcheck.php. And then it has like emojis, like a green check mark if it's good and a red one if it's bad. And then it does like database queries. For example, like what's the internet speed in, for example, Amsterdam? Okay, it's a number. It's like 27 point megabits. So it's an accurate number. Okay, check, good. And then it goes to the next It's like 27 point megabits, so it's an accurate number. Okay, check. Good. Then it goes to the next, and it goes on all the data points. Did people sign up in the last 24 hours? It's important, because maybe the signup broke. Okay, check. Somebody signed up. Then I have uptimerobot.com, which is like for uptime, but it can also check keywords. It checks for an emoji, which is like the red X, which is if something is bad. And so it opens that health check page every minute to check if something is bad. Then if it's bad, it sends a message to me on Telegram saying, hey, what's up? It doesn't say, hey, what's up? It sends me like an alert. Like this thing is down. And then I check. So within a minute of something breaking, I know it. And then I can open my laptop and fix it. But the good thing is like the last few years, things don't break anymore. And like definitely 10 years ago when I started, everything was breaking all the time. And now it's like almost, last week was like 100.000% uptime. And these health checks are part of the uptime percentage. So it's like everything works. You're actually making me realize I should have a page for myself, like one page that has all the health checks, just so I can go to and see all the green check marks. Just feels good to look at, you know? It just be like, okay. Yeah, all right. We're okay. Everything's okay. Yeah. And like, you can see like, when was the last time something wasn't okay and it'll say like never, or like meaning like you've you've you've checked since you've last cared to check it's all been okay for sure it used to send me the the good health checks like yeah you know this it all works it all works but it's been so often and i like this feels so good but then i'm like okay obviously it's not gonna need to hide the good ones and show only the bad ones and And now that's the case. I need to integrate everything into one place. Automate like everything. Yeah. Also just a large set of cron jobs. A lot of the publication of this podcast is done all, everything is just automatically, it's all clipped up, all this kind of stuff. Yeah. But it would be nice to automate even more. Like translation, all this kind of stuff would be nice to automate. Every JavaScript, every PHP error gets sent to my Telegram as well. So every user, whatever user it is, doesn't have to be a page user. If they run into an error, the JavaScript sends the JavaScript error to the server, and then it sends it to my Telegram. From all my websites. So you get like a message so i get like a uncalled variable error whatever blah blah and then i'm like okay interesting and then i go check it out and that's like a way to get to zero errors because you get flooded with errors in the beginning and now it's like nothing almost so that's really cool but matt that's really cool but this is the same stuff people they they they, they pay like very big SaaS companies, like New Relic for, right? Like to manage the stuff. So you can do that too. You can use off the shelf. I like to build myself. It's easier. Yeah, it's nice. It's nice to do that kind of automation. I'm starting to think of like, what are the things in my life I'm doing myself that could be automated? Ask your GPT for, you know, like give your daily, your day and then ask what parts you'd automate. Well, one of the things I would love to automate more is my consumption on social media. Yeah. Both the output and the input. Man, that's very interesting. I think there's some startups that do that. Like they summarize the cool shit happening on Twitter, you know, like with AI. I think the guy called SWYX or something, he does like a newsletter that's completely AI generated with the cool new stuff on AI. Yeah, I mean, I would love to do that. But also like across Instagram, Facebook, LinkedIn, all this kind of stuff. Just like, okay, can you summarize the internet for me for today? Summarize internet.com. Yeah,.com. Because I feel like it pulls in way too much time. But also, I don't like the effect it has some days on my psyche. Because of haters or just general content? Just general. No, no, just general. For example, TikTok is a good example of that for me. I sometimes just feel dumber after I use TikTok. I just feel like i don't use it empty somehow and i'm like uninspired yeah it's funny in the moment i'm like ha look at that cat doing a funny thing and then you're like oh look at that person dancing in a funny way to that music and then you're like 10 minutes later you're like i feel way dumber and i don't really want to do much for the rest of the day. My girlfriend said, she saw me like watching some dumb video. She's like, dude, your face looks so dumb as well. Your whole face starts going like, oh, interesting. You know, so. I mean, with social media, with X, sometimes for me too, it's, I think I'm probably naturally gravitating towards the drama. Yeah. Art drama. Yeah. Hard wheel. Yeah. And so there's following ad people, especially ad people that only post technical content has been really good. Cause then I just look at them and I, and then I go down rabbit holes of like learning new papers have been published or good repos or, or just any kind of cool demonstration of stuff. And the thing, the kind of things that they retweet and that's the rabbit hole I go and I'm learning and I'm inspired, all that kind of cool demonstration of stuff and the kind of things that they retweet. And that's the rabbit hole I go and I'm learning and I'm inspired, all that kind of stuff. It's been tough, it's been tough to control. It's difficult, you need to like manage your platforms. I have a mute board list as well. So I mute like politics stuff because I don't really want it on my feed. And I think I've muted so much that now my feed is good. You know, I see like interesting stuff. But the fact that you need to modify, you need to like mod your app, your social media platform just to function and not be toxic for you, for your mental health, right? That's like a problem. Like it should be doing that for you. It's some level of automation. That would be interesting. I wish I could access X and Instagram through api easier you need to spend 42 000 a month which my friends do yeah you could no but still even if you do that that you're not getting i mean there's limitations that don't make it easy to do like yeah automate because they the thing that they're trying to limit like abuse or for you to steal all the data from the app to then train an LLM or something like this. Yeah. But if I just want to like figure out ways to automate my interaction with the X system or with Instagram, they don't make that easy. But I would love to sort of automate that and explore different ways to how to leverage LLMs to control the content I consume. And maybe publish that. Maybe they themselves can see how that could be used to improve their system. So, but there's not enough access. That we get to the API. Yes, you could screen cap your phone, right? Can be an app that watches your screen with you. You could, yeah. But I don't really know like what it would do. Like maybe it's gonna hide stuff before you see it. You know, like you scroll down and- I have Chrome extensions. I write a lot of Chrome extensions that hide parts of different pages and so on. For example, for my own, on my main computer, I hide all views and likes and all that on YouTube content that I create so that I don't... Smart doesn't affect you. It doesn't, yeah, so you don't pay attention to it. I also hide parts. I have a mode for X where i hide most of everything so like there's no it's same with youtube i have the succession like uh well i wrote my own because it's easier because it keeps changing it's like it's it's not easy to keep it dynamically changing but they're really good at like getting you to be distracted and like starting related accounts related stuff i'm like i don't want to relate it and like 10 minutes- Related accounts, related posts. I'm like, I don't want related. And like 10 minutes later, you're like, or something that's trending. I have a weird amount of friends addicted to YouTube and I'm not addicted. I think because my attention span is too short for YouTube, but I have this extension to like YouTube on hook, which like, it hides all the related stuff. I can just see the video and it's amazing. And, but sometimes I need to like, like I need to search a video how to how to do something and then i go to youtube and i had these youtube shorts these youtube shorts are like they're like algorithmically designed to just make you tap them and like i tap and then i'm like five minutes later with this face like and you're you're just talking and it's like what happened i was gonna open i was gonna play like the coffee mix, you know, like the music mix for drinking coffee together, like in the morning, like jazz. I didn't want to go to shorts. So it's very, it's very difficult. I love how we're actually highlighting all kinds of interesting problems that all could be solved at a startup. Okay. So what about the exit? When and how to exit? Man, you shouldn't ask me because I never sold my company. You've never, all the successful stuff you've done, you've never sold it. Yeah, it's kind of sad, right? Like I've been in, so I've been in a lot of acquisition, like deals and stuff. And I learned a lot about finance people as well there, like manipulation and due diligence and then changing the valuation. Like people change the valuation after. So a lot of people string you on to acquire you and then it takes like six months. It's a classic. It takes six to 12 months. They want to see everything. They want to see your stripe and your code and whatever. And then in the end, they'll change the price to lower because you're already so invested. So it's like a negotiation tactic, right? I'm like, no, I don't want to sell, right? And the problem with my companies is like they make 90% profit margin. So the multiple, the companies get sold with multiples, kind of multiples of profit or revenue. And often the multiple is like three times, three times or four times or five times revenue or profit. So in my case, they're all automated. So I might as well wait three years and I get the same money as when I sell. And then I can still sell the same company. You know what I mean? I can still sell for three, five times. So financially, it doesn't really make sense to sell. Yeah. Unless the price is high enough. Like if the price gets to like six or seven or eight, I don't want to wait six years for the money, you know? But if you give me three, like three years, nothing. Like I can wait. So I mean, the really valuable stuff about the companies you create is not just the interface and the crowdsource content, but the people themselves, like the user base. Yeah. Well, Nomadlist, it's a community. Yeah. So I could see that being extremely valuable. I'm surprised it hasn't. Nomadlist is like, it's like my baby. It's like my first product being extremely valuable. I'm surprised it hasn't. Nomenlist is like my baby. It's like my first product I took off, and I don't really know if I want to sell it. It's like something you would be nice when you're old that you're still working on this. It has like a mission, which is like people should travel anywhere, and they can work from anywhere, and they can meet different cultures, and that's a good way to make the world get better. If you go to China and live in China, you'll learn that there are nice people and a lot of stuff you hear about China is propaganda. A lot of stuff is true as well, but it's more, you know, you learn a lot from traveling. And I think that's why it's like a cool product to like not sell. AI products, I have less emotional feeling with AI products like Photoguy, which I could sell. Yeah, the thing you also mentioned is you have to price in the fact that you're going to miss the company you created. And the meaning it gives you, right? There's a very famous depression after a startup found a seller company. They're like, this was me. Who am I? And they immediately start building another one. They never can stop. So I think it's good to keep working you know until you die just keep working on cool stuff and you shouldn't retire you know i think retirement's bad probably so you usually build the stuff solo and mostly work solo what's the thinking behind that i think i'm not so good working with other people not like i'm crazy but like i i don't trust other people to clarify you don't trust other people to not like I'm crazy, but like I don't trust other people. To clarify, you don't trust other people to do a great job. Yeah, and I don't want to have like this consensus meeting where we all like, you know, you have like a meeting of three people and then you kind of get this compromise results, which is very European. Like it's very, in Holland we call it poldermodel, where you put people in a room and you only let them out when they agree on the compromise, right, in politics. And I don don't think i think it breeds like averageness you know you get an average idea average company average culture um you need to have like a leader or you need to be solo and just do it you know do yourself i think and i trust some people like now i like with my best friend andre i'm making a new AI startup. But it's because we, we know each other very long and he's one of the few people I would build something with. And, but almost never. So what does it take to be successful when you have more than one? Like, how do you build together with Andre? How do you build together with other people? So he codes, I shitpost on Twitter, literally like I promote it on Twitter. I, I, we said like product strategy. Like I said, this should be better. This should be better. But I think you need to have one person coding it. He codes in Ruby. So I was like, I cannot do Ruby. I'm in PHP. So you literally, so you've, have you ever coded with another person for prolonged periods of time? Never in my life. What do you think is behind that? I know it was always just me sitting on my laptop like i said like just coding no like you've never had another developer who like rolls in and like i've had once where the photo i like there's an ai developer philip i hired him to do the because i can't write python yeah and i stuff is python and i needed to get models to work and replicate and stuff and i needed to improve OTI. And he helped me a lot for like 10 months. He worked and, uh, man, I was trying Python working with NumPy and package manager, and it was too difficult for me to figure this shit out. And I didn't have time. Like I think 10 years ago, I would have time to like sit, you know, go do all nighters to figure this stuff out with Python. I don't have the, and I don't have the, it's not my thing. It's not your thing. it's another programming language. I get it, AI, new thing, got it. But like, you never had a developer roll in, look at your PHP, jQuery code and be, and yes, like, you know, like in conversation or improv, they talk about yes and, like basically, all right. I had for one week. Understand. And then it ended. Because he wanted to rewrite everything in the- No, that's the wrong guy. it ended because he wanted to rewrite everything in the no that's the wrong guy I know he wanted to rewrite in what he wanted to rewrite he said it's jQuery we can't do this I'm like okay he's like we need to rewrite everything in Vue Vue.js I'm like are you sure can we just like you know like keep jQuery he's like no man like and we need to change a lot of stuff and I'm like okay and I was kind of like feeling it like this you know we're to clean up shit. But then after a week, it's not going to, it's going to take way too much time. I think I like working with people where like when I approach them, I pretend in my head that they're the smartest person who has ever existed. Wow. So I look at their code or I look at the stuff they've created and try to see the genius of their way. Like you really have to understand people like really notice them like and then from that place have a conversation about what is the better approach yeah but those are the top tier developers yeah and they those are the ones that are tech ambiguous so they can work with they can learn any tech stack and they can and that's like really few like it's like top 5% because if you try higher devs like no offense to devs but most devs are not man most people in general jobs are not so good at their job like even doctors and stuff when you realize this people are very average at their job especially with dev with coding I think so sorry I think that's a really important skill for a developer to roll in and like understand the musicality the think that's a really important skill for a developer to roll in and like understand the musicality, the style. That's it, man. Empathy. It's like code empathy, right? It's code empathy. Yeah, it's a new word, but that's it. You need to understand like go over the code, get a holistic view of it and man, you can suggest we change stuff for sure, but look, jQuery is crazy. It's crazy I'm using jQuery. We can change that. It's not crazy at all. jQuery is crazy. It's crazy I'm using jQuery. We can change that. It's not crazy at all. jQuery is also beautiful and powerful. And PHP is beautiful and powerful, especially as you said recently in the as the versions evolved, it's much more serious programming language now. It's super fast. Like PHP is really fast now. Yeah, yeah, yeah. It's crazy. JavaScript is really fast now. So if speed is something you care about, it's fast now. Yeah, yeah, yeah. It's crazy. JavaScript is really fast now. So if speed is something you care about, it's super fast. Yeah. And like there's gigantic communities of people using those programming languages. And there's frameworks if you like the framework. So whatever, it doesn't really matter what you use. But like also you, if I was like a developer working with you, like you are extremely successful. You've shipped a lot. Yeah. So, like, if I roll in, I'm going to be, like, I don't assume you know nothing. Assume Peter's a genius, like, the smartest developer ever. And, like, learn. Learn from it. And yes, and. Like, notice parts in the code where, like, okay, okay, I got it. Yeah. Here's how he's thinking. Notice parts in the code where like, okay, okay, I got it. Here's how he's thinking. And now if I want to add another like little feature, definitely needs to have emoji in front of it. And then like just follow the same style and add it. And my goal is to make you happy, to make you smile, like to make you like, ha ha, fuck, I get it. And now you're going to start respecting me and like trusting me and like you start working together in this way i don't know i i don't know how hard it is to find developers no i think they exist i think you need to i need to hire more people need to try more people but that costs a lot of my energy and time but it's 100 possible yeah but do i want it i don't know things kind of run fine for now and I mean like okay you could say like okay normally this looks kind of clunky like people say the design is kind of clunky okay I'll improve the design it's like next to my to-do list for example you know like I'll get there eventually but it's true I mean you're also extremely good at what you do like I'm just looking at the interfaces of like photo AI like you would Jake like how amazing is Jake Ray but like you can these cowboys are getting these you can, these cowboys are getting, these are, there's these cowboys. This is a lot. It's a lot, but I'm glad they're all wearing shirts. Anyway, the interface here is just really, really nice. Like I could tell you know what you're doing. And with Nomad List, extremely nice, the interface. Thank you, man. And that'sad List, extremely nice, the interface. Thank you, man. And that's all you. Yeah, everything's me. So all of this and every little feature, all of this. People say it looks kind of ADHD or ADD, you know, like it's so much because it has so many things. And design these days is minimalist, right? Right, right. I hear you. But this is a lot of information and it's useful information and it's delivered in a clean way while still stylish and fun to look at so like minimalist design is about like when you want to convey no information whatsoever and look cool yeah it's very cool it's pretentious right pretentious or not the function is like is useless this is about a lot of information delivered to you in a clean, and when it's clean, you can't be too sexy, so it's sexy enough. Yeah. This is, I think, how my brain looks. There's a lot of shit going on. It's like drawing bass music. It's very... Yeah, but the spacing of everything is nice. The fonts are really nice. Very readable. I like it, but I made it so i don't trust my own judgment no this is really nice thank you the emojis are somehow like it's a style it's a thing i need to pick the emoji it takes a while to pick them you know like there's some something about the emoji is a really nice memorable like placeholder for the idea yeah like if it was just text it would actually be overwhelming if it was just text. The emoji really helps. It's a brilliant addition. Like some people might look at it, why do you have emojis everywhere? It's actually really, for me it's really nice. People tell me to remove the emoji. Yeah, well people don't know what they're talking about. And then the, I'm sure people will tell you a lot of things. This is really nice. And using color is nice. Small font, but not too small. And obviously you have to show maps, which is really nice. And using color is nice. Small font, but not too small. And obviously you have to show maps, which is really tricky. Yeah, this is really, really, really nice. And all of, I mean, like, okay, like how this looks when you hover over it. It's easy as transitions. No, I understand that, but like, I'm sure there's, like, how long does it take you to figure out how you want it to look? Do you ever go down a rabbit hole where you spent like two weeks? No, it's all iterative. It's like 10 years of, you know, add a CSS transition here or do this or. Well, say like, see, these are all, these are rounded now. Yeah. If you wanted to like, round is probably the better way. But if you want it to be rectangular, like sharp corners, what would you do? So I go to the index.css, and I do command F, and I search border-radius 12px, and then I replace with border-radius 0. And then I do command enter, and it's git deploys. It pushes to the git hub, and then sends a webhook, and then deploys to my server, and it's live in five seconds. Oh, you often deploy to production you you don't have like a testing ground no so i so i i'm like famous for this because uh i'm too lazy to set up like a staging server on my laptop every time so i nowadays i just deploy to production yeah and um it's man i'm gonna get canceled for this you know but it works very well for me because i have a lot of, I have like PHP lint and JS lint. So it tells me when there's error, so I don't deploy. But my, literally, I have like 37,000 Git commits in the last 12 months or something. So I make like small fix and then command enter and sends to GitHub. GitHub sends a web to my server, web server pulls it, deploys the production and is there. What's the latency of that from you pressing command? One second. It can be one to two seconds. So you just make a change, and then you're getting really good at not making mistakes, basically. Man, you're 100% right. People are like, how can you do this where you get good at not taking the server down? Because you need to code more carefully. But it's, look, it's idiotic in any big company, but for me it works because it makes me so fast. Like somebody will report a bug on Twitter and I kind of did do like a stopwatch, like how fast can I fix this bug? And then two minutes later, for example, it's fixed. Yeah. And it's fun because it's annoying for me to work with companies where you report a bug and it takes like six months. Yeah. It's like horrible. And it makes people really happy when you can really quickly solve their problems. So, but it's crazy. I don't think it's crazy. I mean, there's, I'm sure there's a middle ground, but I think that whole thing where there's a phase of like testing and there's the staging and there's a development and then there's like multiple tables and databases that you use for the state. Like it's- Filing. It's a mess. Yeah. And there's different teams tables and databases that you use for the state. Like it's. Filing. It's a mess. Yeah. And there's different teams involved. It's not good. I'm like a good, funny extreme on the other side, you know. But just a little bit safer, but not too much. It would be great. Yeah. Yeah. And I'm sure that's actually like how X, now how they're doing rapid improvement. No, they do because there's more bugs. Yeah. People complain about like, oh, look, he bought this Twitter. Now it's full of bugs. Dude, these shipping stuff there's more bugs. And people complain about like, oh, look, he bought this Twitter and now it's full of bugs. Dude, these shipping stuff, like things are happening now and it's a dynamic app now. Yeah, the bugs is actually a sign of a good thing happening. Yes. Bugs are the feature because it shows that the team is actually building shit. 100%. One of the problems is like I see with YouTube, there's so much potential to build features, but I just see how long it takes. So I've gotten a chance to interact with many other teams, but one of the teams is MLA, multi-language audio. I don't know if you know this, but in YouTube you can have audio tracks in different languages for overdubbing. And there's a team, and not many people are using it, but every single feature they have to meet and agree, and there's a team and not many people are using it but like every single feature they have to meet and agree and like there's allocate resources like engineers have to work on it but i'm sure it's a pain in the ass for the engineers to get approval to like because it has to not break the rest of the site whatever they do but like if you don't have enough dictatorial like top down like we need this now it's going to take forever to do anything multi-language audio but multi-language audio is a good example of a thing that seems niche right now but it quite possibly could change the entire world when you have when i upload this this conversation right here if instantaneously it dubs it into 40 languages. Yeah, man. And everybody consume, every single video can be watched and listened to in those different, it changes everything. And YouTube is extremely well positioned to be the leader in this. They got the compute, they got the user base, they have the experience of how to do this. So multi-language audio should be high priority feature right yeah that's high priority like that's and it's a way you know google is obsessed with ai right now they want to show off that they could be dominant in ai that's a way for google to say like we use the ai like this is a way to to to break down the walls that language creates the preferred outcome for them for them is probably their career not the the overall result of the cool product, you know? I think they're not selfish or whatever. They want to do good. There's something about the machine, the organizational stuff. I have this when I report bugs on big companies I work with. I talk to a lot of different people on DM and they're all really trying hard to do something. They're all really nice. And I'm the one being kind of asshole because I'm like, guys, I'm talking to 20 people about this for six months. Nothing's happening. You say, man, I know, but I'm trying my best. And yeah, so it's systemic. Yeah. It requires, again, I don't know if there must be a nicer word, but like a dictatorial type of top down, the CEO rolls in and just says like, for you two, it's like MLA, get this done now. This is the highest priority. I think big companies, especially in America, a lot of it is legal, right? You need to pass everything through legal. Yeah. And you can't like, man, the things I do, I could never do it in a big corporation because everything has to be, probably get deployed has to go through legal. Well, again, dictatorial. You basically say Steve Jobs did this quite a lot. I've seen a lot of leaders do this. Ignore the lawyers. Ignore comms. Exactly, yeah. Ignore PR. Ignore everybody. Give power to the engineers. Like, listen to the people on the ground. Get this shit done and get it done by Friday. Yeah. That's it. And the law can change. Like, for example, let's say you launch this AI dubbing and there's some legal problems with lawsuits. Okay, so the law changes. There will be appeals. There will be some Supreme Court thing, whatever. And the law changes. So just by shipping it, you change society. You change the legal framework. And by not shipping, being scared of the legal framework all the time, like you're not changing things. Just out of curiosity, what IDE do you use? Let's talk about like your whole setup. Given how ultra productive you are and that you often program in your underwear slouching on the couch, is there, does it matter to you in general? Is there like a specific IDE you use VS Code? Yeah, VS Code. Before I used Sublime Text, I don't think it matters a lot. I think I'm very skeptical of like tools when people think it, they say it matters, right? I don't think it matters a lot. I think I'm very skeptical of tools when people think it, they say it matters, right? I don't think it matters. I think whatever tool you know very well, you can go very fast in. Like the shortcuts, for example, IDE. I love Sublime Text because I could use multi-cursor. You search something and I could make mass replaces in a file with the cursor thing. And VS Code doesn't really have that as well. It's actually interesting. Sublime is the first editor where I've learned that. And I think they just make that super easy. So like, what would that be called? Multi-edit, multi-cursor edit thing, whatever. I'm sure like almost every editor can do that. It's just probably hard to set up. Yeah. VS Code is not so good, I think. Or at least I tried. But I would use that to process data. Like data sets, for example, from World Bank. I would just multi-cursor, mass, change everything. But yeah, VS Code. Man, I was bullied into using VS Code because Twitter would always see my screenshots of Sublime Text and say, why are you still using Sublime Text? Like, boomer, you need to use VS Code. And I'm like, well, I'll try it. I got a new MacBook and then I never installed, like, I never copied the old MacBook, I just make it fresh, you know, like a clean, like format C, you know, Windows, like clean start and I'm like, okay, I'll try VS Code and it's stuck, you know, but I don't really care. Like, it's not so important for me. Well, you know, the format C reference, huh? Dude, it was so good. You would install windows and then after three or six months it would start breaking and everything was like, it gets slow. Then you would restart, go to DOS, format C, you would delete your hard drive and then install the windows 95 again. It was so good times. And you would design everything like now I'm going to install it properly. Now I'm going to design my desktop properly. Yeah, I don't know if it's peer pressure, but I used Emacs for many, many years. And I love Lisp, so a lot of the customization is done in Lisp. It's a programming language. Partially it was peer pressure, but part of it was realizing you need to keep learning stuff. The same issue with jQuery. I still think I need to learn Node.js, same issue with jquery like i still think i need to learn no js for example yeah even though that's not my main thing or even close to the main thing but i feel like you need to keep learning this stuff and even if you don't choose to use it long term you need to give it a chance so you your understanding of the world expands yeah you want to understand the new technological concepts and see if they can benefit you you know it would be stupid not to even try it's more about the concepts i would say than the actual tools like expanding and that can be a challenging thing so going to vs code and like really learning it like all the shortcuts all the extensions and actually installing different stuff and playing with it that was a interesting challenge it was uncomfortable at first. Yeah, for me too, yeah. Yeah, but you just dive in. It's like NeuroFlex. You keep your brain fresh, you know, like this kind of stuff. I got to do that more. Have you given React a chance? No, but I want to learn. I understand the basics, right? I don't really know where to start. But would you like, I guess you got to use your own model, which is like build the thing using it. No, man. So I kind of did that. Like the stuff I do in jQuery is essentially, a lot of it is like, I start rebuilding whatever tech is already out there, not based on that, but just an accident. Like I keep going long enough that I build the same. I start getting the same problems everybody else had and you start building the same frameworks kind of. So essentially I use my own kind of framework. So you basically build a framework from scratch. That's your own, you understand it. Kind of, yeah, with AJAX calls. But essentially it's the same thing. Look, I don't have the time. And this is, I think saying you don't have the time is like always a lie because you just don't prioritize it enough. My priority is still like running the businesses and improving that and AI. I think learning AI is much more valuable now than learning a front-end framework. Yeah. Like it's just more impact. I guess you should be just learning every single day a thing. Yeah. You can learn a little bit every day, like a little bit of React or I think now like Next is very big. So learn a little bit of Next, you know, but I call them the military-indust very big. So learn a little bit of Next, you know. But I call them the military industrial complex. So if I, but you need to know, you need to know it anyway. So you got to learn how to use the weapons of war and then you can be a peacenegger. Yeah. Yeah. I mean, but you got to learn it in the same exact way as we were talking about, which is learn it by trying to build something with it and actually deploy it. The frameworks are so complicated and it changes so fast. So it's like, where do I start? You know? And I guess it's the same thing when you're starting out making websites, like where do you start? Yeah, it's GPT-4, I guess. But yeah, it's just so dynamic. It changes so fast that I don't know if it would be a good idea for me to learn it, you know? Maybe some combination of like Vue, Next with PHP, Laravel. Laravel is like a framework for PHP. I think that would be, it could benefit me, you know? Maybe Tailwind for CSS, like a styling engine. That stuff could probably save me time. Yeah, but like you won't know until you really give it a try. And it feels like you have to build, like if maybe i'm talking to myself but like i should probably recode like my personal one page in larval or yeah and even though it might not have almost any dynamic elements maybe have one dynamic element but it has to go end to end in that framework yeah or like Or like end-to-end build in Node.js. Some of it is, I don't, figuring out how to even deploy the thing. I have no idea. All I know is right now I would send it to GitHub and it sends it to my server. I don't know how to get JavaScript running. I have no clue. Yeah. So I guess I need like a, a pass, like a, like Verso, right? Or, you know, Heroku, kind of those kind of platforms. I actually kind of just gave myself the idea of I kind of just want to build a single web page, like one web page that has one dynamic element and just do it in every single, like in a lot of frameworks. Ah, on the same page. Oh, the same page? Kind of page. That's a cool product. You can learn all these frameworks. Yeah. And you can see the differences. Yeah. That's the same page. Kind of page. That's a cool product. All these frameworks. Yeah. You can see the differences. Yeah. That's interesting. All it takes to do it. Yeah. Stopwatch. I have to figure out actually something sufficiently complicated because it should probably do some kind of thing where it accesses the database and dynamically is changing stuff. Some AI stuff. Some LLM stuff. Yeah, maybe some, it doesn't have to be AI or LLM, but maybe API call to something. To replicate, for example, and then you have, yeah, that would be a very cool project. Yeah, and like time it and also report on my happiness. Yeah. I'm going to totally do this. Because nobody benchmarks this. Nobody's benchmarking developer happiness with frameworks. Yeah. Nobody's benchmarking developer happiness with frameworks. Yeah. Nobody's benchmarking the shipping time. Just take like a month and do this. How many frameworks are there? There's how many? There's like five main ways of doing it. So there's like, there's backend, frontend. And this stuff confused me too. Like React now apparently has become backend. Yeah. Or something that used to be only frontend and you're forced to do now backend also. I don't know. But you're not really forced to do anything. So, like, according to the internet. So, like, there's no – it's actually not trivial to find the canonical way of doing things. So, like, the standard vanilla – like, you go to the ice cream shop, there's, like, a million flavors. I want vanilla. If I've never had ice cream in my life, can we just like learn about ice cream? Yeah. I want vanilla. Nobody actually, sometimes they'll literally name it vanilla. But like, I want to know what's the basic way. But not like dumb, but like the standard canonical. Yeah, I want to know the dominant way. Like 60% of developers do it like this. Yeah. It's hard to figure that out. You know, that's the problem. Yeah, maybe LLMs can help. Maybe you should explicitly ask, what is the dominant? They usually know like the dominant. You know, they give answers that are like the most probable kind of. Yeah. So that makes sense to ask LLM. And I think honestly, maybe what would help is if you want to learn or I would want to learn like a framework, hire somebody that already does it and just sit with them and make something together. Like I've never done that, but I thought about it. So it would be a very fast way to, you know, take their knowledge out of my brain. I've tried these kinds of things. What happens is, it depends what kind of, if they're like a world-class developer, yes. Oftentimes they themselves are used to that thing and they have not themselves explored in other options. So they have this dogmatic, like talking down to you, like this is the right way to do it. It's like, no, no, no, we're just like exploring together. Okay, show me the cool thing you've tried, which is like, it has to have open-mindedness to like, you know, Node.js is not the right way to do web development. It's like one way. And there's nothing wrong with the old LAMP, PHP, jQuery, vanilla JavaScript way. It just has its pros and cons. And like, you need to know what the pros and cons are. Yeah, but those people exist. You could find those people probably. Yeah. Like, if you want to learn AI, imagine you have Karpathy sitting next to you. Yeah. Like, he does his YouTube videos. It's amazing. He can teach it to like a five-year-old about how to make LLM. It's amazing. Like, imagine this guy sitting next to you and just teaching you like, let's make LLM together. Like, holy shit, it would be amazing. Yeah. I mean, well, Carpatho has its own style and it's like, I'm not sure he's for everybody. But for example, five-year-old, it depends on the five-year-old. Yeah. He's like super technical. But he's amazing because he's super technical and he's the only one who can explain stuff in a simple way, which shows his complete genius. Yes. Because if you can explain without jargon, you're like, wow. And build it from scratch. Yeah. It's like top tier, you know, like what a guy. But he might be anti-framework because he built from scratch. Exactly. Yeah, actually probably is. Yeah. Uh, he's like Yuba for AI. Yeah. So maybe learning framework is a, is a very bad idea for us. You know, maybe we should stay in PHP and like ScriptKitty. But you have to, maybe by learning the framework, you learn what you want to yourself build from scratch. Yeah, maybe you learn concepts, but you don't actually have to start using it for your life, right? Yeah. And you're still a Mac guy. Always a Mac guy. Yeah, yeah. I switched to Mac in 2014 because it was because when i wanted to start traveling and my brother was like dude get a macbook it's like the standard now i'm like wow i need to switch from windows and i had like three screens you know like windows had this whole setup for music production had to sell everything um and then i had a macbook and i remember opening up this macbook box like uh and it was so beautiful. It was like this aluminum. And then I opened it, I removed the, you know, the screen protector thing. It's so beautiful. And I didn't touch it for three days. I was just like looking at it really. And I was still on the Windows computer. And then I went traveling with that. So I, and all my great things started when I switched to Mac, which sounds very dogmatic, right? But what great things are you talking about? All the business started working out. Like I started traveling. I started started building startups started making money it all started when i switched to mac listen i i kind of you're making me want to switch to mac so i use either use linux inside windows with wsl or just ubuntu linux but windows for most stuff like editing or any like Adobe products. Yeah, yeah. Well, you could use, I guess you could do Mac stuff there. I wonder if I should switch. What do you miss about Windows? What was the pros and cons? I think the Finder is horrible, Mac. Like it's like, it's not. The what is horrible? The Finder. Oh, you don't know the Finder? So there's the Windows Explorer? Yeah. Windows Explorer is amazing. Thank you for talking about it. Finder is strange, man. There's like Windows Explorer. Yeah. Windows Explorer is amazing. Thank you for talking to us. Finder is strange, man. There's like strange things. There's this bug where if you send, like attach a photo on WhatsApp or Telegram, it just selects the whole folder and you almost accidentally can click enter and you send all your photos, all your files to this chat group, happen to my girlfriend. She starts sending me photo, photo, photo, photo. So Finder is very unusual, but it has Linux. Like the whole thing is like, it's Unix based, right? So you use the very unusual, but it has Linux. Like the whole thing is like, it's Unix based, right? So you use the command? Yeah, all the time. Like all the time. And the cool thing is you can run, I think it's like Unix, like Debian or whatever. You can run most Linux stuff on macOS, which makes it very good for development. Like I have my Nginx server. You know, if I'm not lazy and set up my staging on my laptop, it's just the Nginx server, the same as I have on myGINX server. If I'm not lazy and set up my staging on my laptop, it's just the NGINX server, the same as I have on my cloud server, right? The same way the websites run. And I can use almost everything, the same config files, configuration files. And it just works. And that makes Mac a very good platform for Linux stuff, I think. Yeah, yeah. Real Ubuntu is better, of course. Yeah. I'm in this weird situation where i'm somewhat of a power user in windows and let's say android and all the much smarter friends i have all using mac and iphone and it's like but you don't want to go through the peer pressure, you know? It's not peer pressure. It's like one of the reasons I want to have kids is that there's a lot of... Like I would love to have kids as a baseline. But, you know, there's like a concern maybe there's going to be a trade-off or all this kind of stuff. But you see like these extremely successful, smart people who are friends of mine who have kids and are really happy they have kids. It's not peer pressure. It's just a strong signal. It works for people. The same thing with Mac. I don't see fundamentally, I don't like closed systems. Fundamentally, I like Windows more because there's much more freedom. Same with Android. There's much more freedom. It's much more freedom. Same with Android. There's much more freedom. It's much more customizable. But all the cool kids, the smart kids are using Mac and iPhones. It's like, all right, I need to give it a real chance, especially for development, since more and more stuff is done in the cloud anyway. Yeah. Anyway, but it's funny to hear you say all the good stuff started happening. Maybe I'll be like that guy too. When I switched to Mac, all the good stuff started happening. I think it's just about the hardware. It's not so much about the software. The hardware is so well built, right? The keyboard and... Yeah, but look at the keyboard I use. That is pretty cool. That's one word for it. What's your favorite place to work? On the couch. Does the couch matter? Is the couch your home or is it any couch? No, any, like hotel couch also, like in the room, right? Yeah. But I used to work like very ergonomically with like a standing desk. Yeah. And everything like perfect, like eye height, screen, blah, blah, blah. And I felt like, man, this has to do with lifting too. I started getting RSI, like repetitive strain injury, like tingling stuff. And it would go all the way on my back and i was sitting in a co-working space like 6 a.m sun comes up and i'm working and i'm coding and i hear like a sound or something so i do like i look left and my neck gets stuck like and i'm like wow fuck and um i'm like what am i dying you And I thought, I'm probably dying. Yeah, probably dying. So I don't want to die in a co-working space. I'm going to go home and die in like, you know, peace and honor. Yeah. So I closed my laptop and I put it in my backpack. Yeah. And I walked to the street, got on my motorbike, went home. Yeah. And I lied down on like a pillow, like with my legs up and stuff to get rid of this, like, because it was my whole back. And it was because I was working like this all the time. Yeah. So I started getting like a laptop stand, everything ergonomically correct. But then I started lifting. And since then, like it seems like everything gets straightened out. Your posture kind of, you're more straight. And I never have RSI anymore, reproductive injury. I never have tingling anymore uh no pains and stuff so then i started working on the sofa and it's great like it feels um you're close to the i sit like i sit like this yeah legs together and then a pillow and then a laptop and then i work are you Are you like leaning back? I'm kind of like together, like legs. And then... Where's the mouse? You're using... No, so everything's trackpad on the macOS, on the MacBook. I used to have the Logitech MX mouse, the perfect economic mouse. And you're just doing like this little thing with the thing. Yes. One screen. One screen. And I used to have three screens. So I come from the... I know where people come from. I had all this stuff. But then I realized that having it all condensed in one laptop, it's a 16-inch MacBook, so it's quite big. But having it all in there is amazing because you're so close to the tools. You're so close to what's happening, you know? It's like working on a car or something. It's like so, like, man, if you have three screens, you to look here look there you get also neck injury actually so it's I don't know this sounds like you're part of a cult and you're just trying to convince me but I mean but it's good to hear that you can be ultra productive on a single screen that's I mean that's crazy command tap you alt up like Windows alt up macOS command tap you can switch very fast so you have like one the entire screen is taken out by VS Code say you're looking at the code and then very fast. So you have like one, the entire screen is taken out by VS Code, say you're looking at the code and then, and then like if you deploy like a website, you what, switch screens? Command tab to Chrome. I used to have this swipe screen, you know, you could do like different screen spaces. Yeah. I was like, ah, it's too difficult. Let's just put it on one screen on the MacBook and then. And you'd be productive that way. Yeah. Very productive, yeah. More productive that way. Yeah. Very productive, yeah. More productive than before. Interesting. Because I have three screens and two of them are vertical. Yeah. Like on the sides. Code, right, yeah. For code, you can see a lot. No, man, I love it. Like I love seeing it with friends. Like they have amazing like battle stations, right? It's called. It's amazing. I want it, but I don't want it, right? You like the constraints. There's... That's it. There's some aspects of the constraints which once you get good at it, but I don't want it, right? You like the constraints. That's it. There's some aspect of the constraints, which once you get good at it, you can focus your mind and you can... Man, I'm suspicious of more, you know? Yeah. Do you really need all this stuff? It might slow me down, actually. It's a good way to put it. I'm suspicious of more. Me too. I'm suspicious of more in all ways. Because you can defend more, right? You can defend, yeah, I'm a developer. I make money. I need to get more screens, right? I need to be more efficient. And then Because you can defend more, right? You can defend, yeah, I'm a developer. I make money. I need to get more screens, right? I need to be more efficient. And then you read stuff about like Mythical Man Month where like hiring more people slows down a software project that's famous. I think you can use that metaphor maybe for, you know, tools as well. And I see friends just with gear acquisition syndrome that buying so much stuff, but they're not that productive. They have the best, most beautiful battle stations, desktops stuff, but they're not that productive. They have the best, most beautiful battle stations, desktops, everything. They're not that productive. And it's also like kind of fun. Like it's all from my laptop in a backpack, right? It's kind of nomad, minimalist. Take me through like the perfect ultra productive day in your life. Like say like where you get a lot of shit done. Yeah. And it's all focused on getting shit done. When are you waking up? Is it a regular time? Super early, super late? So I go to sleep like 2 a.m. usually, something like that. And before 4 a.m. But my girlfriend would go sleep midnight. So we did a compromise like 2 a.m., you know? So I wake up around 10, 11, no more like 10 shower make coffee I make coffee like drip coffee like the V60 you know the filter and I boil water and then put the coffee in and then chill a little bit with my girlfriend and then open laptop start coding check what's going on like bugs or whatever how long are you like how stretches of time are you able to just sit behind the computer coding so I used to need like really long stretches where I would do like all nighters and stuff to get shit done but I've gotten trained to like have more interruptions where I can like because you have to this is life like there's a lot of distractions like your girlfriend asks stuff people come over or whatever so I'm very fast now. I can lock in and lock out quite fast. And I heard people, developers or entrepreneurs with kids have the same thing, like before they're like, ah, I cannot work, but they get used to it. And they get really productive in like short time because they only have like 20 minutes and then shit goes crazy again. So another constraint, right? Yeah, it's funny. So I think that works for me. Yeah, and then, you know, cook food and stuff, like have lunch, steak and chicken. You eat a bunch of times a day. So you say coffee, what are you doing? Yeah, so a few hours later, cook foods. We get like locally sourced like meat and stuff and vegetables and cook that. And then second coffee and then go some more. Maybe go outside for lunch like you can you can mix fun stuff you know how many hours are you saying a perfectly productive day are you doing programming like if you were like to kill it are you doing like all day basically you mean like the special days where like special girlfriend leaves to like paris or something and you're alone for a week at home which is amazing you can just It's like, and you stay up all night and eat chocolate and... Yeah. Eat chocolate. Yeah. Yeah, yeah, yeah. Okay, okay. Let's remove girlfriend from picture, social life from picture. It's just you. Man, that shit goes crazy. Okay, yeah. Because when shit goes crazy... Now shit goes crazy. Okay, yeah. So shit, you... Let's rewind. Are you still waking up? There's coffee. There's no girlfriend to talk to. There's no... And now we wake up like 1 p.m. 2 p.m because you went to bed at 6 a.m yeah because i was coding i was finding some new ai shit yeah i was studying it and it was amazing and i cannot sleep because it's too important we need to stay awake we need to see all of this we need to stay awake. We need to see all of this. We need to make something now. But that's the times I do make like new stuff more. So I think I have a friend, he actually books a hotel for like a week to like leave his, and he has a kid too. And his girlfriend and his kids stay in the house and he goes to another hotel. Sounds a little suspicious, right? Going to a hotel. But all he does is like writing or coding. He's a writer and he needs like this alone time, this silence. And I think for this flow state, it's true, you know, I'm better maintaining stuff when there's a lot of disruptions than like creating new stuff. I need this, it's common, it's flow state, it's this uninterrupted period of time. So yeah, I wake up like 1, 2 p.m., you know, still coffee, shower, we still shower, you know. And then this code like nonstop. Maybe my friend comes over. Comes over. Just some distraction. Yeah. He also, Andre, he codes too. So he comes over. We code together. We listen, you know. It starts going back to like the Bali days, you know, like. Yeah. Coworking days. So you're not really working with him, but you're just both working. Because it's nice to have like a vibe where you both sit together on the couch and coding or something and you actually it's mostly silent or there's music you know and sometimes you ask something and but generally like you're really locked in and what uh what music are you listening to i think like like um techno like youtube techno um there's a there's a channel called hor with a umlaut, like H-O, like double dot. It's Berlin techno, whatever. It looks like they film it in like a toilet with like white tiles and stuff and it's very cool. And they always have like very good like kind of industrial like kind of aggressive, you know, like. Yeah. That's not distracting to your brain? No, it's amazing. Like, I think distracting, man, jazz. I listen to coffee jazz with my girlfriend when I wake up, and it's kind of like this piano starts getting annoying. It's like, It's too many tones. It's like too many things going on. This industrial techno is like, you know these African rain dances? It's this transcendental trance. That's interesting, because I actually mostly now listen to brown noise. Yeah, wow. Like pretty loud. Wow. And one of the things you learn is your brain gets used to whatever. So I'm sure to techno, if I actually give it a real chance, my brain would get used to it. But like with noise, what happens is something happens to your brain. I think there's a science to it, but I don't't really care you just have to be a scientist of one like study yourself your own brain for me it like it does something i discovered it right away when i tried it for the first time after about like a couple minutes your everything, every distraction just like disappears and it goes like, you can like hold focus on things like really well. It's weird. Like you can like really focus on a thing. It doesn't really matter what that is. I think that's what people achieve with like meditation. You can like, like focus on your breath, for example. It's just normal brown. It's not like b breath, for example. It's just normal brown. It's not like binaural. No. It's just normal brown noise. Just like shh. Yeah. White noise, I think it's the same. It's like pink noise, white noise. Brown noise, I think is when it's like bassier. Yeah, it's more diffused, more dampened. Dampened. Yeah, I can see that. No sharpness. Yeah, sharp brightness. Yeah, brightness. Yeah, I can see that. And you use a headphone, right? Yeah, headphones can see that. No sharpness. Yeah, sharp brightness. Yeah, brightness. Yeah, I can see that. And you use a headphone, right? Yeah, headphones. Yeah. I actually like walk around in life often with brown noise. Dude, that's like psychopath shit, but it's cool, you know? Yeah, yeah, yeah. When I murder people, it helps. It drowns out their screams. Jesus Christ. Yeah. I said too much. Man, I'm going to try brown noise. With a murder or for the coding? Yeah. For the coding, yeah. Okay, good. Try it. Try it. But you have to, like, with everything else, give it a real chance. Yeah. I also, like I said, do techno-y type stuff, electronic music, on top of the brown noise. But then control the speed. Because the faster it goes the more anxiety so if i really need to get shit done especially with programming i'll have a beat yeah and it's great it's cool i say it's cool to play those little tricks with your mind to study yourself yeah i usually don't like to have people around because when people even if they're working i don't know i like people too much. They're like interesting. That's my, yeah, in coworker space, I would just start talking too much. Yeah. Yeah. So there's a source of distraction. Yeah. We would do, in the coworker space, we would do like a money, like pot, like a mug. So if you would, we'd work for 45 minutes and then if you would say one, like per word, you would get a fine, which is like $1. So you'd put $1 to say, hey, what's up? So $3, you put in the1 to say hey what's up so $3 you put in the mug and then 15 minutes free time like we can like party whatever and then 45 minutes again I'm working and that worked but you need to shut people up or they you know I think there's there's an intimacy in being silent together yeah maybe I'm uncomfortable with like but you you need to make yourself in being silent together. Yeah. Maybe I'm uncomfortable with, like, but you need to make yourself vulnerable and actually do it. Like with close friends, to just sit there in silence for long periods of time and like doing a thing. Dude, I watched this video of this podcast. It was like this Buddhism podcast with people meditating and they were interviewing each other or whatever and like a podcast. And suddenly after a question, it's like yeah yeah and they were just silent for like three minutes and then they said that was amazing yeah that was amazing i was like wow pretty cool you know elon's like that and i really like that when you'll ask a question, like, I don't know, what's a perfectly productive day for you? Like I just asked, and you just sit there for like 30 seconds thinking. Yeah, he thinks. Yeah. That's so cool. I wish I was, I wish I could think more about, but I want to like, I want to show you my heart, you know, I want to show you, go straight from my heart to my mouth, to like saying the real thing. And the more I think, but I want to like, I want to show you my heart, you know? I want to show you, go straight from my heart to my mouth to like saying the real thing. And the more I think, the more I start like filtering myself, right? And I want to just throw it out there immediately. I do that more with team. I think he has a lot of practice in that. I do that as well. And in team setting, when you're thinking, brainstorming, and you allow yourself to just like think in silence. Yeah. Just like, because even in meetings meetings people want to talk yeah it's like no you think before you speak and just like it's okay to be silent together yeah and if you allow yourself the room to do that you can actually come up with really good ideas yeah it's okay this perfect day how much caffeine are you consuming in this day too much right because uh normally normally like two cups of coffee. But on this perfect day, like we go to like four maybe. So we're starting to hit like the anxiety levels. So four cups is a lot for you. Well, I think my coffees are quite strong when I make them. It's like 20 grams of coffee powder in the V60. So like my friends call them like nuclear coffee because it's quite heavy. Super strong. it's quite strong but it's nice to hit that anxiety level where you're like almost panic attack but you're not there yet so but that's like man it's like super locked in just like it's amazing but I mean there's a space for that you know in my life but it's I think it's great for making new stuff. It's amazing. Starting from scratch, creating a new thing. Yes. I think girlfriends should let their guys go away for like two weeks, every few, no, every year, at least, you know, maybe every quarter, I don't know. And just sit and make some shits without, you know, they're amazing, but like no disturbances, just be alone. And then, you know, people can make something very, very amazing. Just wearing cowboy hats in the mountains like we showed. Exactly. We can do that. There's a movie about that. With the laptops. They didn't do much programming though. Yeah. You can do a little bit of that. Okay. And then a little bit of shipping, you know, do both. It's a different. But they need to allow us to go, you know, you need like a man cave, right? Yeah. To ship. Yeah. To ship. To get shit done. Yeah. It's a different, broke back mind. But they need to allow us to go, you know, you need like a man cave, right? Yeah, to ship. Yeah, to ship. Get shit done. Yeah, it's a balance. Okay, cool. What about sleep, naps and all that? You're not sleeping much? I don't do naps in a day. I think it's power naps are good, but I don't really, I'm never tired anymore in the day. Man, also because of gym, I'm not tired. I'm tired when I want to sleep, you know, when it's night, I need to sleep. Yeah, me, I love naps yeah okay i don't know i don't know why brain shuts off turns on i don't know if it's healthy or not it just works yeah i think with anything mental physical you have to be a student of your own body and like no no what the limits are like you have to be skeptical taking advice from the internet in general, because a lot of advice is just like a good baseline for the general population. But then you have to become a student of your own, like of your own body, of your own self, of how you work. Yeah. That's, I've done a lot of, like for me, fasting was an interesting one. Cause I used to, you know, eat a bunch of meals a day, especially when I was lifting heavy, like, cause everybody says that you have to eat uh kind of a lot you know multiple meals a day but i realize i can get much stronger feel much better if i eat once or twice a day and me too yeah it's crazy i never understood this small meal thing didn't work for me let me just ask, it'd be interesting if you can comment on some of the other products you've created. We talked about Nomad List, Interior AI, Photo AI, Therapist AI. What's Remote OK? It's a job board for remote jobs. Because back then, like 10 years ago, there was job boards, but it was not really specifically remote job, job boards. So I made one, I made like, first on Nomad List, I made like Nomad Jobs, like a page. And a lot of companies started hiring and they pay for job posts. So I made one, I made like first on Nomad is I made like Nomad jobs, like a page and a lot of companies started hiring and they pay for job posts. So I spin it off to remote. Okay. And now it's like the number one or number two biggest remote job boards. And, and it's also fully automated and people just post a job and people apply. It has like profiles as well. Like it's kind of like LinkedIn for remotes work. It's just focused on remote only. Yeah. It's essentially of like LinkedIn for remote work. Just focus on remote only. Yeah. It's essentially like a simple job board. I discovered job boards are way more complicated than you think, but yeah, it's a job board for remote jobs. But the nice thing is you can charge a lot of money for job posts. Man, it's good money. B2B, you can charge like, you start with $299, but at the peak when the Fed started printing money, like 2021, I was making like $140K a month with Remote OK, with just job posts. And I started like adding crazy upsells, like rainbow colored job posts. You can add your background name. It's just upsells, man. And you charge $1,000 for an upsell. It was crazy. And all these companies just upsell, upsell. Yeah, we want everything. Job posts would cost $3,400, $4,000. And I was like, this is good business. And then the feds stopped printing money. And it all went down. And it went down to like 10K a month from 140. Now it's back. I think it's like 40. It was good times, you know? I got to ask you about back to the digital nomad life. Yeah. You wrote a blog post on the reset and in general, like just giving away everything, living a minimalist life. Yeah. What did it take to do that? Like to get rid of everything? 10 years ago was like this trend in the blog. Back then blogs were so popular. It was like a blogosphere and it was like the 100 things challenge. What is that? The 100 things challenge? I mean, it's ridiculous, but like you write down every object you have in your house, and you count it. You make a spreadsheet, and you're like, okay, I have 500 things. You need to get it down to 100. Why? This is just a trend, so I did it. I started selling stuff, started throwing away stuff. I did MDMA and XTC, like 2012. After that trip, I felt so different. And I felt like I had to start throwing shit away. Like, I swear. And I started throwing shit away. And I felt that was like, it was almost like the drug sending me to a path of like, you need to throw your shit away. You need to start, you know, go on a journey. You need to get out of here. And that's what the MDMA did, I think. Yeah. How hard is it to get down to a hundred items? Well, you need to like sell your PC and stuff. You need to go on eBay and then, man, going eBay selling all your stuff is very interesting because you discover society. You just, man, you meet the craziest people. You meet every range from rich to poor. Everybody comes to your house to buy stuff. It's so funny. It's so interesting. I recommend everybody do this. Just to meet people that want your to buy stuff. It's so funny, so interesting. I recommend everybody do this. Just to meet people that want your shit. Yeah. It was so, like, I didn't know. I was living in Amsterdam, and I didn't know I have my own, you know, subculture or whatever. And I discovered the Dutch people, like, as they are from eBay, you know? So I sold everything. What's, like, the weirdest thing you had to sell and you had to find a buyer for? Not the weirdest, but, like, what's memorable? So then i was i was making music and we would make music videos with like a canon 5d camera back then everybody's making films and music videos that and uh and we bought it with my friends and stuff and it was kind of like i had to sell this thing too because it was like it was very expensive like 6k or something yeah and but it meant that selling this meant that we wouldn't make music videos together anymore i would leave holland this kind of like stuff we were working on would end and i was kind of saying this music video stuff we're not getting big we're not getting famous in this or successful we need to stop doing this this music production also it's not really working and it was kind of like felt very bad you know for my friends because we would work together on this and um to sell this like camera that we'd make stuff with and it was a hard goodbye it was just a camera but it was like it felt like uh sorry guys doesn't work and i need to go you know who uh who bought it do you remember was some guy who couldn't possibly understand the journey motion of it of it. Yeah. Because you showed up, here's the money, thanks. Yeah. But it was like cutting your life, like this shit ends now and now we're going to do new stuff. I think it's beautiful. I did that twice in my life, give away everything, everything, everything. Wow. Like down to just pants, underwear, backpack. I think it's important to do. It shows you what's important. Yeah, I think that's what I learned from it. You learn that you can live with very little objects, very little stuff. But there's a counter to it. You lean more on the stuff, on the services, right? For example, you don't need a car, you use Uber, right? Or you don't need kitchen stuff because you go to restaurants when you're traveling. So you lean more on other people's services, but you spend money on that as well. So that's good. Yeah, but just letting go of material possessions, which it gives a kind of freedom to how you move about the world. Yeah. It gives you complete freedom to go into another city to... Yeah, with your backpack. With a backpack. Yeah. There's a kind of freedom to it. There's something about material possessions and having a place and all that that ties you down a little bit. Yeah. Like spiritually. Yeah. It's good to take a leap out into the world, especially when you're younger. Man, I recommend if you're 18, you get out of high school, do this, go travel, and build some internet stuff, whatever. Bring your laptop, and it's an amazing experience. Five years ago, I would still go to university, but now I'm thinking like, no, maybe skip university. Just go first, like travel around a little bit, figure some stuff out. You can go back to university when you're 25. You can like, okay, now I learned, I've been successful in business. You have money at least. Now you can choose what you really want to study, you know? Because people at 18, they go study what is probably good for the job market, right? So it probably makes more sense. If you want that, go travel, build some businesses, and go study what's probably good for the job market, right? So it probably makes more sense, like, if you want that, go travel, build some businesses, and go back to university if you want. So one of the biggest uses of a university is the networking. You gain friends, you gain, like, you meet people. It's a forcing function to meet people. But if you can meet people out into the world by traveling. And you meet so many different cultures. I mean, the problem for me is, like, if I traveled at that young age, I'm attracted to people at the outskirts of the world. Like for me. Like where? No, not geographically. Oh, like the subcultures. Yeah, like the weirdos, the darkness. Yeah, me too. But that might not be the best networking at 18 years old. No, but man, if you're smart about it, you can stay safe. And I met so many weirdos from traveling. You meet, that's how travel works. If you really let loose, you meet the craziest people. Yeah. And it's the most interesting people. And it's just, I cannot recommend it enough. Well, see, the thing is that when you're 18, I feel like, depending on your personality, you have to learn both how to be a weirdo and how to be a normie. Like, you still have to learn how to fit into society. Like, for a person like me, for example, who's always an outcast, like, there's always a danger for going full outcast. Yeah. And that's a harder life. If you, like, if you go to, like, go full artist and full, like, darkness, it's just a harder life. You can come back. You can come back to normie. That's a harder life. If you like, if you go to like, go full artist and full like darkness, it's just a harder life. You can come back. You can come back to normie. That's a skill. That's like, I think you have to learn how to, how to fit into like polite society. But I was very strange outcast as well. And I'm more adaptable to normie now. You learned it. Yeah. After 30s, you know, you're like, yeah. But I mean, it's a skill. You have to learn i feel man i feel so good you start as an outcast but the more you work on yourself the less like shit you have you kind of start becoming more normie because you become more chill with yourself more happy and it kind of makes you honest right yes yes like the most the crazy people are always the most interesting. If you've solved your internal struggles and your therapy and stuff and you kind of become kind of, you know, it's not so interesting anymore maybe. You don't have to be broken to be interesting, I guess is what I'm saying. Yeah. What kind of things were left when you minimalized? So the backpack. Yeah. MacBook, toothbrush, some clothes, underwear, socks. You don't need a lot of clothes in Asia because it's hot. So you just wear swim pants, swim shorts. You walk around, flip-flops. So very basic, T-shirt. And I would go to the laundromat and wash my stuff. And I think it was like 50 things or something yeah yeah it's nice there's uh as i mentioned to you there's the uh the show alone yeah they really test you because they only get 10 items and you have to survive out in the wilderness and an axe like everybody brings an axe some people uh also have a saw but usually axe does the job you basically have to in order to build a shelter you have to cut down cut the trees and make learn in Minecraft everything I learned about life I learned in Minecraft bro yeah it's nice to create those constraints for yourself to understand what matters to you and also how to be in this world and one of the ways to do that is to live a minimalist life but like some people I've met people that really enjoy material possessions and that brings them happiness and that's a beautiful thing for me it doesn't but people are different it gives me happiness for like two weeks I'm very quickly adapting to like baseline, hedonistic adaptation but people are different. It gives me happiness for like two weeks. Yeah. I'm very quickly adapting to like a baseline, hedonistic adaptation, very fast. Yeah. But man, if you look at the studies, most people like, like get a new car, six months, you know, get a new house, six months, you just feel the same. You're like, wow, should I buy all this stuff? Studying hedonistic adaptation made me think a lot about minimalism. And so you don't even need to go through the whole journey of getting it. Just focus on the thing that's more permanent. Yeah. Like building shit. Yeah, like people around you, like people you love, nice food, nice experiences, meaningful work, those things, exercise, those things make you happy, I think, make me happy for sure. You wrote a blog post, why I'm unreachable and maybe you should be too. What's your strategy in communicating with people? Yeah. So when I wrote that, I was getting so many DMs as you probably have, you have a million times more, but, and people were getting angry that I wasn't responding. And I was like, okay, I'll just close down these DMs completely. Then people got angry that I closed my DMs down, that I'm not like man of the people, you know. You've changed, man. Yeah, you've changed. You got, you know, like this. And I'm like, I'll explain why. I just don't have the time in a day to, you know, answer every question. And also people send you like crazy shit, man. Like stalkers and like people write like their whole life story for you. And also people send you like crazy shit, man. Like stalkers and like people write like their whole life story for you and then ask you advice. Like, man, I have no idea. I'm not a therapist. I don't know. I don't know this stuff. But also beautiful stuff. No, absolutely. Sure. Like life story. I've posted a coffee form. Like if you wanted to have a coffee with me. Nice. And I've gotten an extremely large number of submissions. And when I look at them, there's just like beautiful people in there, like beautiful human beings, really powerful stories. And it breaks my heart that I won't get to meet those people. So this part of it is just like there's only so much bandwidth to truly see other humans and help them or like understand them or hear them or see them. Yeah. I have this problem that I try, I want to try help people and like, also like, oh, let's make startups and whatever. And it's, I've learned over the years that generally for me, and it sounds maybe bad, right, but like I helped my friend, Andre, for example, he was, he came up to me in the coworker space. That's how I met him. He said, I want to learn to code. I want to do startups. How do I do it? I said, okay, let's go. Install NGINX. Let's start coding. And he has this self-energy that he actually, he doesn't need to be pushed. He just goes and he just goes and he asks questions and he doesn't ask too many questions. He just goes and learns it. And now he has a company and makes a lot of money, has his own startups. So, and the people that did that I had to kind of like, that asked me for help, but then I gave help and then they started debating it, you know? Do you have that? Like people ask you advice and they go against you, say, no, you're wrong. Because I'm like, okay, bro, I don't want to debate. You asked me for advice, right? And the people who need this push, generally it doesn't happen. You need to have this energy for yourself. Well, they're searching. They're searching searching they're trying to figure it out but oftentimes their search if they successfully find what they're looking for it'll be within sounds very like spiritual sani but it's really like figuring that shit out on your own but they're reaching they're trying to ask the world around them like how do I live this life? How do I figure this out? But ultimately, the answer is going to be from them working on themselves. And like, literally, it's the stupid thing, but like Googling and doing like searching. Yeah, so I think it's procrastination. I think sending messages to people is a lot of procrastination. Like, Lex, how do I become a successful podcaster? Yeah. Bro, just, you know, start. Like, just go. Yeah. And. Just go. I would never ask you how to be a successful podcaster yeah bro just you know start like just go yeah and uh just go i would never ask you how to be successful podcaster like i would just start it and then i would copy your methods you know say ah this guy's a black background we probably need this as well yeah try it yeah and then you realize it's not about the black background it's about something else so you find your own voice like you keep trying exactly imitation is a. Like, a lot of people copy and they don't move past it. Yeah. You should understand their methods and then move past it. Like, find yourself, find your own voice. Yeah, you imitate and then you put your own spin to it, you know, and that's like creative process. That's like literally the whole, everybody always builds on the previous work. Yeah. You shouldn't get stuck. 24 hours in a day, eight hours of sleep, you like break it down into a math equation. 90 minutes of showering, clean up coffee. It just keeps whittling down to zero. Man, it's not this specific, but I had to make like an average or something. Firefighting. I don't like that. One hours of groceries and errands. I've tried breaking down minute by minute what I do in a day. Yeah. Especially when my life was simpler. It's really refreshing to understand where you waste a lot of time. Yeah. And what you enjoy doing. Like how many minutes it takes to be happy. Doing the thing that makes you happy. And how many minutes it takes to be productive. And you realize there's a lot of hours in the day if you spend it right. Yeah. A lot of it is wasted. For me, it's been the biggest uh for the longest time is finding stretches of time where i can deeply focus into really really deep work just like zoom in and completely focus cutting away all the distractions and me too that's the battle yeah it's unpleasant it's extremely unpleasant we need to fly to an island, you know, make a man cave island where we can just, everybody can just go for a week, you know, and just get shit done. Make new projects. Yeah. Yeah. But man, they called me psychopath for this because it says like one hours of sex, hugs, love, you know. Man, I had to write something, you know. And they were like, oh, this guy's psychopath. He plans his sex, you know, in a specific hour. Like, bro, I don't but you have a counter for hugs yeah exactly like yeah like click click click uh it's it's just a numerical representation of what life is yeah it's like one of those like when you draw out how many weeks you have in a life oh dude this is like dark yeah man don't want to look at that too much. Yeah, man. How many times you see your parents? Jesus, man. Yeah. It's scary, man. That's right. It might be only a handful more times. You just look at the math of it. If you see them once a year or twice a year. Yeah, FaceTime today. Yeah. Yeah. I mean, that's like dark when you see somebody you like seeing, like a friend that's on the outskirts of your friend group. And then you realize like, well, I haven't really seen him for like three years. So like how many more times do we have that we see each other? Yeah. Do you believe that like friends just slowly disappear from your life? Like they kind of, your friend group evolves, right? So like, you don't wanna, there's a problem with Facebook. You get all these old friends from school, like when you were 10 years old, back when Facebook started, like you don't really, you would add friend them and then you're like, why are we in touch again? Just keep the memories there, you know? Like it's different life now. Yeah, I have, you know, I don't know, that might be a guy thing or I don't know. There's certain friends I have that like we don't interact often, but we're still friends. Yeah. Like every time I see him, I think it's because we have a foundation of many shared experiences and many memories. I guess it's like nothing has changed. Like we've been almost like we've been talking every day even if we haven't talked for a year. Yeah. It's like, yeah, that's deep. Yeah. So that, so I don't have to be interacting with them for them to be in a friend group. And then there's some people I interact with a lot. So it depends, but there's just this network of good human beings that can, I have like a real love for them. And I can always count on them. It's like, real love for them. I can always count on them. If any of them called me in the middle of the night, I'll get rid of a body. I'm there. I like how that's a different definition of friendship. It's true. It's true. You've become more and more famous recently. How does that affect you? It's not recently. I think it's just a gradual thing, right? Like it keeps going. And I also don't know why it keeps going. Does that put pressure on you to, because you're pretty open on Twitter and you're just like basically building shit in the open. Yeah. And just not really caring if it's too technical, if there's any of this, just being out there. Does it put pressure on you as you become more popular to be a little bit more, like, collected and... Man, I think the opposite, right? Because the people I follow are interesting because they say whatever they think and they ship or whatever. It's so boring that people start tweeting only about one topic. i don't know anything about their personal life i want to know about their personal life like you do podcasts you ask about life stuff of personality that's the most interesting part of like business or sports like what's the behind the sport athlete right behind the entrepreneur that's interesting stuff to be human yeah like you you share that you know like i shared a tweet went too far but like we were cleaning the toilet because the toilet was clogged, you know? But like, it's just real stuff. Because Jensen Huang, the NVIDIA guy, he says he started cleaning toilets, you know? That was cool. You tweeted something about the Denny's thing. I forget. Yeah, it was recent. NVIDIA was started in a Denny's diner table. And you made it somehow profound. Yeah, this one, this one. NVIDIA, a $3 trillion company was started in a Denny's at American Diner. People need a third space to work on their laptops to build the next billion or trillion dollar company. What's the first and second space? The home, office, and then the in-between, the island. I guess, yeah. The island. Yeah. You need a space to like congregate. Man, and I found history on this. So 400 years ago in the coffee houses of Europe, like the scientific revolution, the enlightenment happened because they would go to coffee houses. They would sit there. They would drink coffee and they would work. They would work. They would write or they would, and they would do debates and they would organize marine routes, right? They would do all the stuff in coffee houses in Europe, in France, in Austria, in UK, in Holland. So we would always be going to, we were always going to cafes to work and to have serendipitous conversations with other people and start businesses and stuff. And when I, like you asked me to come on here and we flew to America. And the first thing I realized was that, I've been to America before, but we were in this cafe and like, there's a lot of laptops. Everybody's working on something. And I made, I took this photo. And then when you're in Europe, like large parts of Europe now, you cannot use a laptop anymore. It's like no laptop, which I understand. But that is to you a fundamental place to create shit is in a natural organic uh co-working space of a car for a lot of people a lot of people have very small homes and co-working spaces are kind of boring they're not very they're private they're not serendipitous kind of boring um cafes are amazing because they random people can come in and ask you what are you working on or you know and not just laptops people are also having conversations like they did 400 years ago debates or whatever things are happening and man I understand the aesthetics of it like it's like oh startup bro shipping is a bullshit startup you know like but there's something more there like there's people actually making stuff making new companies that the society benefits from like we're benefiting from NVIDIA I think the US GDP But there's something more there. Like there's people actually making stuff, making new companies that the society benefits from, like we're benefiting from Nvidia, I think. It's the US GDP for sure is benefiting from Nvidia. European GDP could benefit if we build more companies. And I feel in Europe, there's this vibe and this, you have to connect things, but not allowing laptops in cafes is kind of like part of the vibe, which is like, yeah, we're not really here to work.'re here to like enjoy life i agree with this anthony bourdain like this tweet was quote to anthony bourdain photo with him with cigarettes and a coffee in france and he said that's this is what cafes are for i agree but there is some element of like entrepreneurship like you have to allow people to dream big and work their ass off to towards that dream and then feel each other's energy as they interact with it that's one of the things I liked in Silicon Valley when I was working there it's like the cafes like yeah there's a bunch of dreamers that there you can make fun of them for like everybody thinks they're gonna build a trillion dollar company but like yeah and it's all it's not everybody wins nine percent people's will be bullshit but they're working their ass off yeah they're doing something and and you need to pass this startup bro, like, oh, it's startup bro on a level. No, it's not. It's people making cool shit. And this will benefit you because this will create jobs for your country and your region. And I think in Europe, that's a big problem. Like, we have a very anti-entrepreneurial mindset. Dream big and build shit. And this is really inspiring. This is a pinned tweet of yours. All the projects that you've tried and the ones that succeeded. That's very few. Mute life. It was for Twitter to mute, to share the mute list. Yeah. Mute words. Fire calculator, no more Google, maker rank, how much is my site project worth climate finder ideas ai airline list still runs but it doesn't make money airline list like compares the safety of airlines because i was nervous to fly so i was like let's collect all the data on crashes for all the airplanes bali c cable nice that's awesome uh make village nomad gear 3d and virtual reality dev play play my inbox like you mentioned there's a lot of stuff yeah man i'm trying to find some embarrassing tweets of yours you can go to the highlights tab it has all the like the good shit kind of there you go this was dubaiV, building an AI startup. Wow, you're a real influencer. And if people copy this photo now and they change the screenshot, it becomes like a meme. Of course, you know. This is good. This is how Dubai looks. It's insane. It's beautiful. Architecture-wise, it's crazy. The story is behind these cities. Yeah, the story is behind, for sure. So this is about the European economy where like. European economy landscape is ran by dinosaurs. And today I studied it so I can produce you with my evidence. 80% of top EU companies were founded before 1950. Only 36% of top US companies were founded before 1950. Yeah, so the median founding of companies in US is something like 1960. And the median of the top companies, right? And the median in Europe is like 1900 or something. Yeah. So it's here, 1914 and 1963. So there's a 50-year difference. It's a good representation of the very thing you were talking about, the difference in the cultures, entrepreneurial spirit of the peoples but europe used to be entrepreneurial like there was companies founded in 1800 1850 1900 it flipped like around 1950 where america took the lead and um and i guess my point is like i hope that europe gets back to because i'm european i hope that europe gets back to being an entrepreneurial culture where they build big companies again because right now the all the old dinosaur companies control the economies. They're lobbying with the government. Europe is also, they're infiltrating with the government where they create so much regulation. I think it's called regulatory capture, right? Where it's very hard for a newcomer to join and to enter an industry because there's too much regulation. So actually, regulation is very good for big companies because they can follow it. I can't follow it, right? If I want to start an industry because there's too much regulation. So actually regulation is very good for big companies because they can follow it. I can't follow it, right? If I want to start an AI startup in Europe now, I cannot because there's an AI regulation that makes it very complicated for me. I probably need to get like notaries involved. I need to get certificates, licenses. Whereas in America, I can just open my laptop. I can start an AI startup right now. Mostly, you know? What do you think about EAC, Effective Accelerationist Movement? Man, you had Beth Jaisels on. I love Beth Jaisels and he's amazing. And I think EAC is very needed to similarly create a more positive outlook on the future. Because people have been very pessimistic about about society about the future of society um you know climate change all this stuff uh e-arc is like is a positive outlook on the future it's like technology can make us you know we need to spend more energy we should find ways to of course get like clean energy but we need to spend more energy to make cooler stuff and you know go to space and build more technology that can improve society and we shouldn't shy away from technology technology can be the answer for many things yeah build more don't spend so much time on uh fear-mongering and cautiousness and all this kind of stuff some is okay some is good but most of the time should be spent on building and creating on like and doing so unapologetically. It's a refreshing reminder of what made the United States great is all the builders, like you said, the entrepreneurs. We can't forget that in all the sort of discussions of how things could go wrong with technology and all this kind of stuff. Yeah, it goes together. Look at China. China is now at the stage of like America, what, like 1900 or something? They're building rapidly, like insane. And obviously China has massive problems, but that comes with the whole thing. That comes with America in its beginning, all the massive problems, right? But I think it's very dangerous for a country or a region like Europe to, you get to this point where you're kind of complacent you're kind of comfortable and then you know you can either go this or you can go this way right you're you're from here you go like this and then you can go this or this i think you should go this way and uh go up yeah go up and and uh i think that's the problem is the the mind culture so eoc i made eu oc which is like the European kind of version. I made like hoodies and stuff. So a lot of people wear like this Make Europe Great Again hat. I made it red first, but it became too like Trump. So now it's more like European blue, you know? Make Europe Great Again. All right. Okay, so you had a incredible life very successful built a lot of cool stuff so what advice would you give to young people about how to do the same man I would listen to like nobody just do what you think is good and follow your heart right like everybody peer presses you into doing stuff you don't want to do and like they tell you like parents or family or society but like try your own thing you know because it probably it might you don't want to do. And like they tell you, like parents or family or society and tell you, but like try your own thing, you know? Because it probably, it might work out. You can steer the ship, you know? It probably doesn't work out immediately. You probably go into very bad times like I did as well, relatively, right? But in the end, if you're smart about it, you can make things work and you can create your own little life of things as you did, you know, as I did. And I think that should be more promoted, like do your own thing. There's space in economy and in society for do your own thing, you know? Yeah. It's like, you know, like little villages, everybody would sell, I would sell bread, you would sell meat. Everybody can do their own little thing. You don't need to, you know, be a normie, as you say, you can be what you really want to be, you know? And like go all out doing that thing. Yeah, you can be what you really want to be, you know? And like go all out doing that thing. Yeah, you got to go all out. Because if you do, if you half-ass it, you cannot succeed. You need to go lean into the outcast stuff, lean into the being different and just doing whatever it is that you want to do, right? You got to whole-ass it. Yeah, whole- whole asset, yeah. This was an incredible conversation. It was an honor to finally meet you. It was an honor to be here, Lex. To talk to you and keep doing your thing. Keep inspiring me and the world with all the cool stuff you're building. Thank you, man. Thanks for listening to this conversation with Peter Levels. To support this podcast, please check out our sponsors in the description. And now, let me leave you with some words from Drew Houston, Dropbox co-founder. By the way, I love Dropbox. Anyway, Drew said, don't worry about failure. You only have to be right once. Thank you for listening. I hope to see you next time. | Pieter Levels: Programming, Viral AI Startups, and Digital Nomad Life | Lex Fridman Podcast #440 | 13,414 | Lex Fridman | 20240820 | Pieter Levels (aka levelsio on X) is a self-taught developer and entrepreneur who has designed, programmed, launched over 40 startups, many of which are highly successful.
Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep440-sb
See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.
*Transcript:*
https://lexfridman.com/pieter-levels-transcript
*CONTACT LEX:*
*Feedback* - give feedback to Lex: https://lexfridman.com/survey
*AMA* - submit questions, videos or call-in: https://lexfridman.com/ama
*Hiring* - join our team: https://lexfridman.com/hiring
*Other* - other ways to get in touch: https://lexfridman.com/contact
*EPISODE LINKS:*
Pieter's X: https://x.com/levelsio
Pieter's Techno Optimist Shop: https://levelsio.com/
Indie Maker Handbook: https://readmake.com/
Nomad List: https://nomadlist.com
Remote OK: https://remoteok.com
Hoodmaps: https://hoodmaps.com
*SPONSORS:*
To support this podcast, check out our sponsors & get discounts:
*Shopify:* Sell stuff online.
Go to https://lexfridman.com/s/shopify-ep440-sb
*Motific:* Generative ai deployment.
Go to https://lexfridman.com/s/motific-ep440-sb
*AG1:* All-in-one daily nutrition drinks.
Go to https://lexfridman.com/s/ag1-ep440-sb
*MasterClass:* Online classes from world-class experts.
Go to https://lexfridman.com/s/masterclass-ep440-sb
*BetterHelp:* Online therapy and counseling.
Go to https://lexfridman.com/s/betterhelp-ep440-sb
*Eight Sleep:* Temp-controlled smart mattress.
Go to https://lexfridman.com/s/eight_sleep-ep440-sb
*OUTLINE:*
0:00 - Introduction
2:03 - Startup philosophy
9:34 - Low points
13:03 - 12 startups in 12 months
19:55 - Traveling and depression
32:34 - Indie hacking
36:37 - Photo AI
1:12:53 - How to learn AI
1:21:30 - Robots
1:29:47 - Hoodmaps
1:53:52 - Learning new programming languages
2:03:24 - Monetize your website
2:09:59 - Fighting SPAM
2:13:33 - Automation
2:24:58 - When to sell startup
2:27:52 - Coding solo
2:33:54 - Ship fast
2:42:38 - Best IDE for programming
2:52:09 - Andrej Karpathy
3:01:34 - Productivity
3:15:21 - Minimalism
3:24:07 - Emails
3:31:20 - Coffee
3:39:05 - E/acc
3:41:21 - Advice for young people
*PODCAST LINKS:*
- Podcast Website: https://lexfridman.com/podcast
- Apple Podcasts: https://apple.co/2lwqZIr
- Spotify: https://spoti.fi/2nEwCF8
- RSS: https://lexfridman.com/feed/podcast/
- Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
- Clips Channel: https://www.youtube.com/lexclips
*SOCIAL LINKS:*
- X: https://x.com/lexfridman
- Instagram: https://instagram.com/lexfridman
- TikTok: https://tiktok.com/@lexfridman
- LinkedIn: https://linkedin.com/in/lexfridman
- Reddit: https://reddit.com/r/lexfridman
- Facebook: https://facebook.com/lexfridman
- Patreon: https://patreon.com/lexfridman | 2024-08-31T07:22:38.510934 |
https://www.youtube.com/watch?v=eFJ4y9cz2aI | Hello community! I was asked to explain here a brand new paper about automated design of a genetic system in simple terms. So yes, I'm gonna accept this challenge. So you know in my last video here, PhD days isn't one day, with an open source AI, here the AI scientist, I showed you how you can build multi-AI agents that they write you a scientific paper. And now we go to the next step, if you want. Now we want to have an automated design of an agentic system, of a multi-agent system. And we have here University of British Columbia, Vector Institute in Canada, CIFAR AI Chair, and they have a new paper and they say, hey, we have a newly forming research area that is called the automated design of agentic systems, which aim to automatically create powerful agentic system designs that are specific to your job. So unfortunately, they still go here with a central intelligent they call this the meta agent i show you the code in a minute which is nothing to do with the facebook agent let's call it the super agent so we have here a central intelligent that builds and selects all the other little agent the other minions here ideas that you have somewhere a directory where you already have some interesting agents. And now this super agent comes, look at all the agents and chooses the right agent for the right task. And if an agent is not really 100% fit for the task, this super agent goes to GPT-4 and says, hey, can you optimize the code for my specific domain specific task that's all there is but let's have a deep dive so before we jump into the code and i explain the code let's just have a look what they give you and they say hey we have a simple three component design so we have a lot of little agents a lot of little minions where we have the code in our directory or in our github repo this is now our search space and then for our super agent we have a search algorithm the searches here those vector space of all available little minions little agents if we find one beautiful we take it or we optimize it a little bit. We evaluate this agent. And if it's not good, we optimize it. If it's perfect, we just use it. Those are the three elements. And they did a lot of experience, so let me show you. The first here is the meta agent search on the RSC challenge. For my little green glass hoppers, the RC price is here a multi-million dollar price and it is simply about here. RC AGI and you see here the prices are. What are the prices? Those are the sponsors. Where are the prices here? The ground price is 500. 500 000 the progress price is 100 000 pub verification price you get the idea and there's a beautiful challenge for you now let's look at the agents that you should have in your code repo so they compared here now to the agent that is a state-of-the-art design agent. They go here with an F1 score with accuracy, whatever you like here as your benchmark data. And those are the agents that they have. The chain of sort, the chain of sort here with let's go here. I show you this with a self consistency check. So it is not simply a greedy decode chain of sort prompting, but we want here that we have here a diverse set of reasoning tasks that are explored and then let's say we go with a majority voting. So self-consistency if you want to see this from a green cross, as this is the paper I would recommend here by Google. Or you go with a self-refinement, I have a video on this, or you go take a step back and look at the abstraction, or you go and assign here a specific role, a role assignment agent. So you can go there, you can take the original code from the authors of the paper or whatever, and they give you here for the RSC challenge here, the benchmark data. and as you can see we are here chain of thought 64 64.4 59 so beautiful for math the accuracy you see they're quite have a heavy spread here and then if you do here the automatic agent selection and the automatic agent update we have here now by the authors here the best agents for this specific rc challenge 79.4 so you see from 65 to 79 this is a significant jump so the system tries to automatically design here the perfect number of agents and the perfect function in the agents. And if an agent is missing some code block, it will ask GPT-4 to write this code block to get a better performance. Now, yeah, you can have this again here, dynamic role playing or structure, multimodal feedback loop or whatever. So this is an interesting topic we should have a look at. Now show you here the prime author. This is it gives up go there. He has there his automated design of a gigantic system repo. Really nice. You see here the complete code implementation and they explain how you can run this on your machine, how you can modify this for your domain. Nice course. Careful. there's a warning it says hey this code in this repo involves executing untrusted model generated code we strongly advise user to be aware of this safety concern could have some malicious action in your current setting so careful do this maybe in a sandbox environment but beautiful we have this now and yeah let's go here to the official publication before we do anything else and here we have an automated design of a genetic system 19 of august 2024 and here they give you here a beautiful explanation what is going on and we could theoretically look here at all the theory, but you know what? Since today I have to be simple, I just show you the real code and the real prompts. Let's start with this. So here we are now, prompts. We are at page 24. And here you have the system prompt for our super agent and it says hey you are a helpful assistant make sure to return here well-formed json observed and you said amazing so and then we have here the main prompt for the meta agent we tell it hey you are an expert machine learning researcher testing various agentic system your objective is to design a billing block such as prompts and control flows but we will focus at the first of the prompt and the second part here various agentic system. Your objective is to design a billing block such as prompts and control flows with this system to solve complex tasks. Your aim is to design an optimal agent performing well on a particular domain. Then we have to provide some framework code. I'll show you this in a minute. Some output instruction and some examples for in-context learning and then we have to give it some code base. So we have here some agent archive. This has of course the original code and then here are ideas that the super agent if it it finds here an agent, a little minion agent here in this directory that it can optimize, it stores here the optimized version of this minion. Beautiful. And then we say your task is, you are familiar with prompting techniques and the agent works from the literature. The goal is to maximize the performance matrix by proposing interesting new agents. So design new agents, take existing ones, optimize, permutate, combine them, alterate these agents, design some new conglomerate of agents. Just go. Of course, remember, you have to pay for everything. Ok, so be creative when thinking about this, use the knowledge from the archive, have some inspiration from academic literature, think outside of the box and you know all of these classical ways. Great. Now for the output instruction, we say hey, you have to respond in JSON to me and we have here a clear schema. So the first key should be sort and it should capture your sort process for designing the next function. So in the sort section your first reason, what is interesting, yes, the second has the name, the third has the code, yeah, forward. I will show you the forward function in about two minutes. We go through this step by step. And then here is an example i thought thoughts are abc the inside are yes yes the overall idea is you describe this implementation yeah beautiful the name you have your specific name and then the code you have here in this example you define a forward function and this is the code. And this is a really interesting part and we will focus here on this in part number two of this video. But let's go here with the classical process. So our super agent has now the instruction and now we have here a self-reflection round. So the system, the super agent, must now run one, two, three self-reflection rounds. And what it is doing? It is looking at the interestingness. Is this new idea interesting? Make sure that it has no implementation mistakes. That there is no error. Is an improvement given the classical agent that I just showed you, the chain of thought? What can you add? Maybe you can add a memory, maybe you can improve here the planning module. Whatever you find. Check that it is not redundant, try to avoid being too similar to the previous agent. Yes, yes, yes, so GPT-4 here has a main task to perform and then we tell this super agent running here on gpt4 omni for example in our example hey your response should be as follows you have a reflection so you provide your sorts of the interestingness of the new thing then you have revise your previous proposal and propose a new architecture if necessary, you provide a name and you provide a new codebase if this is necessary. And then you have a second run. Beautiful, beautiful. Now, you remember that I've showed you we have to provide the framework code to our super agent. With a simple framework to implement basic functions such as we have to query a foundation model gpd4 omni and this has to be done in a formatted prompt way so here we have it you have it here also in the github repo but let's just have a look at this so the auto information the format information the role that you have here, beautiful. And then we have here the foundation modules. Base class of the foundation model, we will go in detail on a concrete example, this is here just the abstraction. You have an output field, you have a name, you have a role, you have a model to be used, you have a temperature to be set set and you have a unique identifier. Then you generate here your first prompt for the foundation model. As an example, you are a helpful assistant, do exactly what we were told. I guess you have an idea. Then you define here the query for our super agent and then we define here an agent system with the forward function. Here is where our code will enter. And I will show you here the class of the agent system and the class of the base agent in a concrete example in one minute. So you see, this is all there is to our super agent. Beautiful. Now, next step is here, here, we do now this Minion agents, this little super trained, perfectly optimized, strategically aligned agents. Let's have a look what they give us here, a multi-step peer review agent. So you have a scientific publication and you just act as a human reviewer of the scientific information. So here we have the definition of our forward function with the task information. And now we have, for example, here the initial instruction. Please think step by step and solve the task. Wow. Then we have the critique instruction. Please review the answer above and provide feedback on where it might be wrong. If you're sure that it's correct, just say true correct. Refine instruction, give previous attempts and feedback, carefully consider where you should go wrong in your latest attempt, using insights from previous attempts from your memory try to solve the task better. The final decision, give them all of the above, thinking and answer, reason over all of them carefully and provide a final answer. So you see, simple task. Then we have our foundation model modules. Here a concrete example. You have here, for example, the role to be a physics expert, to be a chemistry expert, to be a biology expert, or a general science generalist, then you are not going to believe it if you go for the critic model, you are a physics critique, a chemistry critique, and you get the idea. Then we have here all the thinking and then we can implement this simply here with a for loop. Now you might say, hey wait a minute, in your videos you showed us here how to code multi-agent reinforcement learning, you showed 20 different code implementation plus recursive delegation and agent scope here for multi-agent. Yes, you could use this or you simply code it by hand here as a general case and this is it now you know me if i want to be simple i do have a problem therefore i ask here my chat gpt what is here this code and you can do the same if you want let me give you 35 yeah so i just copied here this code and i said hey explain this to me and gpt4 omni comes back let's make it a little bit bigger that you see this oops sorry where are we come Come on, here. So, this code, here, forward, this is it. Defines a method called forward, seems to be part of a larger class, yes, yes, yes, so let's break it down. Method definition and initial instruction. Initial instruction, critique instruction, refine instruction, final decision instruction. Ask the models to think step by step. Yes, critique, you know this. Then second step, we initialize the modules. A, B, C. Models, final decision models, then we initialize the storage for our sorts, for our answers and for our feedback. This is happening here. Explained here. These lists are stored, storing the thoughts and feedback for each of the foundation models through the process. And then we have the initial reasoning by each expert model. This is our for loop here. And whatever is the loop, you have here the purpose, the storage. And then we have here a cross critique by the critique modules each expert answer you know physics chemistry is critiqued by another critique model and this cross critique process generates the feedback of the initial answer you can store this and then you can refine this answer of course and then you have here with this code your final decision making. The final decision module combines all of the refined thoughts and answers from each expert module. It uses this input to make the final decision and the final output. The final answer is here presented and it is the conclusion reached by considering all the expertise and all the feedback that you have and how many runs you had for the self-reflection, for the optimization, for the evaluation and for the continuous improvement. So you see here, the forward method is designed to solve a complex task by simulating a collaborative process among multiple expert modules. Each module first tries to solve the task independently, then critiques the other module's solution, refines its own solution based on the feedback and finally, a decision is made by aggregating the refined answers. This process allows the system to combine diverse perspectives and iteratively improve the quality of the solution, making the final output more reliable and robust. So here you see why it is better to have those multiple expert modules instead just of a simple GPT-4 Omni that does all of this somehow internally. But here we have a clear defined process. Now, you notice that I thought for a moment, hey, I want to be consistent here in my terms, and I'm talking here about expert modules, but we should be talking about agent, because this is, you remember, this is here a multi-agent system. I was talking here about multi-agent and multimodal AI, if you're doing physics and so. So where is now the difference? Why do you say expert models instead of agents? I ask you my GPT-4. And it came back. And this was quite interesting. And this, you can have this discussion here with AI. It says, in the context of the code, the term model is often used to refer to the components of a system that perform specific functions. Here, the term foundation module likely stands for a foundation model module, which refers to instances of the model, of agents, or GPT-4, that have specific roles, where agents generally refer to autonomous entities that perform tasks or make decisions based on inputs and predefined rules or learning. So in this case, each module is functioning as an agent with a specific expertise. Ask with solving part of the problem. solving part of the problem. So if you want, each module is functioning as an agent with a specific expertise. Okay, so this is the big revelation. Why is it? I find it also nice here, the code defines these components as instances of a module which suggests here this modular approach. The models are likely configured with specific roles, like you are now a physics expert, and they operate as independent agents within the broader systems. And then it even explains, and this new GPT-4 Omni is really better than the old one, why both terms are interchangeable in this context. And I didn't ask it, it was an explanation given by GPT-4. And it says modules as agents. Each module is an agent with a specialized skill designed to solve tasks or critique solutions within its domain of expertise. So in essence these modules are agents, but they are referred to as modules in the code to reflect the role as components in a larger system and i say my goodness so i saw beautiful i was happy everybody was happy great so let's just go back here because i wanted to show you here the next step yeah okay so here we are and these are here the examples here of our little let's go back maybe where are we yeah and this is now the code of our minions yeah there's a very nice information about the cost of all these experiments and they say it costs about 500 us dollar here for a single run of search and evaluation and if we go here with the reasoning cost about 300 dollars so that you have an idea if you have one single run 500 bucks so you can imagine it gets more expensive real fast but of course this all depends on your financial background great so what else yeah well i wanted to show you here the official repo. Add us and if we go there and we have a look, I don't know what we're gonna choose, what we're gonna choose. Mathematics. Yeah, let's choose mathematics. And I told you here, if you don't know a code and I was asked, hey, I don't understand the code base I see in the github repos what can i do now again my simplest solution is here let me go there you see here is the code where we have here the class agent base we have here initialization generate the prompt do the query and then we have here the class agent system and we define here the search functionality, and if I'm not sure what this is, you know, very simple, you're not going to believe what I do, I just take the code, and as you can see here, class, llm, agent, base, I just copy the code, and I say, hey, explain this code to me, so gptT-4 comes back and I think beautifully explains this. Comes back and says, LLM agent-based class represents a basic template for an AI agent that interacts with the language model. The primary function of this class include generating prompts, querying the language model and handling the output. This is it. Yeah, I got another question. Just had an idea. I now was asked, hey, you showed us something here about, where is it, where is it, where is it here, about prompt optimization. This DSPy, our old model and now this new by Stanford here, this text-grad, this automatic differentiation via text is this not something similar or can we optimize now further with text grad here this code and the simple answer is yes of course you can either do it yourself or you can upload here the github repo of text grad from stanford to let's say gemini gemini 1.5 pro with 2 million context length and then you tell Gemini, hey look this is here the code base that I have and look over there I have here the repo from Stanford with an extended code version. Can you try to bring together and create here an optimized advanced code base integrating here the main modules here of both code implementation sometimes it works right out of the box sometimes you have to spend some hours to optimize it but it's really nice let you help you here with a code llm to put together here more complex code structures beautiful i was here with the llm agent base class yes we have the output name the agent named rule the model gpt4 for example would recommend the temperature the id you have an initialization then you generate the prompt you have of course a query and then you have the prompt, you have of course a query, and then you have here a string representation of the agent used for logging or debugging, and then you have here with the call a convenience method that allowed the agent to be called like a function, which internally calls the query method. So we are back here at the query, it simply sends a generated prompt to your GPT-4 and processes the response in gpt4 writes the codes optimize the codes tells you hey this is now a much better code than your other code example stores and you can then store now this new updated block or definition or code segment or whatever you have in your directory or in your github repo and you improve improve and improve continuously so to all my viewers who ask hey i don't understand this piece of code can you help yeah just take your 3.5 sonnet from cloud or whatever you like whatever code llm you like or deep sea code version 2 also real nice and you don't have to spend hours trying to understand it let it explain here the system to you then we have the agent system class then we have the search function central to the process of generating and refining the agent over multiple generation it uses a search process to evolve agents evaluates the agents and store here the optimized agents in the directory. Key concept, reflection, main functionality. And then you know me, I said hey, show me the code where this happens. This system uses a language model to generate new agent designs. And I was not sure what the authors meant with an agent design, because it could be theoretically everything. So, you know, when I'm simple, I'm asking simple questions, and therefore code to generate new agent designs tells me, okay, construct a message list for the model, query the language model to generate a new agent design. You have the reflection process, reflection one, yes, yes, yes. And then it just tells you here. Conducts the prompt. This list contains messages that are sent to the language model. The system prompt sets up the context and the user prompt specifies what the model should do in detail, like generate a new agent design. And then I said, okay, beautiful. Give me an example now of this agent design that we just generated with the code. The design includes the algorithm, the decision-making process itself, and any special features or module that the agent uses to achieve its goal. So, rather clear, even give me here a complete example, high level description using a chain of thought reasoning strategy, and we include here a memory module to recall our previous information, hey, is the system converging to a better solution or is it, has it a gradient methodology that we are leaving now, or even our local minimum, whatever there is, component of the agent, then chain of thought reasoning, the memory model, the decision making output, yes, yes, yes, and then they give you here simple Psyduck code so you understand what is part here in this publication, what they meant with the agent design. So, yes, yes, rest, so you just go there. How the agent design works. You have the input, you have the chain of thought reasoning, you have the agent compares the conclusion with a multiple choice option to select the best match, and then the agent outputs the final answer, choosing the option it determined to be the correct one. You can be here, standardize testing, consumer support chatbot educational tools whatever your complex question is to break it down in the manageable part give it an agent put an agent on each single now much simpler part let it come together let it critique all others let it bring together let's have a two self-reflection run before you finally then go here for the final answer. So you see, it is as simple as can be. Great! And if you made it, if you are one of the lucky ones to made it here till the end, I have a bonus for you. Because I also received a question that you said here. Hey, listen, there is a brand new literature now. August 13. Oh, this is not anymore brand new. Okay, sorry. It's called the Open Research Unleashing AI for Accelerating Scientific Research. last video was here on PhD thesis in Vendeea an open source AI who does here this computer simulation completely autonomous for us in an automatic version here synthetic science done by our AI models our multi-agent AI model and I was asked hey what about this open researcher is it better is it the next step what is it compared here to my last video so I had a look at it and let me formulate it here in a in a very diplomatic version I think they focus here that this is based on the retrieval augmented on our rack generation to integrate here different rack modules here with our llm to have to integrate here an up-to-date domain specific knowledge in our llm yes this is beautiful and they now argue that compared to a simple rack that their version is now a little bit more performant so and they have different tools so if we have a look at this so we go here let's say with the query tool you have an active query tool and a query rewriting tool so if your question if your input to the llm is too complex before it is sent off to rack the llm try or this tool here tries to rewrite it here in simpler terms and therefore you have this rewriting you have the query decomposition something similar then you have the retrieval tools the internet retrieval tools they use here the microsoft internet explorer then you have the retrieval tools, the internet retrieval tools. They use here the Microsoft Internet Explorer. Then you have here BM25 retrieval. Then you have a data routing strategy that is standard of current RAC applications. Maybe what you see especially is the re-ranking of RAC, but I think all the current RAC implementation already have a re-ranking of rag but i think all the current drag implementation already have a re-ranking of their results so i wouldn't call this here a novelty a brand new special idea filters also implemented here in the advanced rack system so yeah and then the generation tools as they call here the tool use they have here the generation they have a citation tool this is nice and then they have refinement tools but i think yeah this reflection we already have at the base level and then they have a polishing so yes it's an optimized formulation tool but if you ask me what is the unique currently unseen new component i would not be able to answer this because i think it is a very nice bring all concepts together whatever we know we built now one coherent system and you see their web application this is built here with streamlit and you can have a look at this and if you want here the complete system the complete case here they have this oops i have to make it a little bit bigger that we can see this. So here's the Open Researcher and you say, okay, what is PPO and sends here for the proximal policy optimization. Then you can say, help me to summarize its latest development and identify its common variants. And then you have a tool here in our Open Researcher that is now trying, interacting with you asking you for further details this is also something new as they specify here in their paper so the open researcher comes back and says could you specify the time range for the latest development that you are interested in so you're looking for the last month for the for the last three years so what is it and you say just look at the last year and then you get here this and you have sub queries retrieve information on recent developments in ppo within the past year and then identify common variants of ppo since your question here yes okay so you see and then you get the result so i would say now this is a beautiful brought together implementation of an advanced rack system that i think you and i already use there are tools like a refinement tool or a re-ranking tool or whatever tool you want but in basic it is all there but now in one complete system integration maybe if you look at the code you would like to see this now in one frame so if you look here and or if you ask me what is here the brand new thing with open researcher is it accelerating scientific research i think it is a very nice rag implementation with all the methodologies that we know from advanced rag and they have a very beautiful implementation so if we go there you see yeah there it is table of contents using api setting up the process introduction the performance here so this is it this is this paper therefore i hope i've answered all your question regarding here whatever it was we are continuing here in this if you want multi-agent multi-domain coding and now today we had a look can we automate here that a super agent is selecting its minion agents to have the perfect configuration for our specific task in our specific domain, like theoretical physics or medicine or whatever, UF chemistry. And then we have a self-learning, self-reflection process happening. Careful could be real expensive, real fast. So have you got Rails ready and code is open source? So why not give it a try? And if you have some feedback in your particular branch, in your particular domain, hey, I think the community would be happy to hear from you. I hope you enjoyed it. I hope you found some new ideas. And it would be great to see you in my new video. | NEW: AI Agents That Design Themselves w/ ADAS | 2,123 | code_your_own_AI | 20240903 | AI Agents that create and design themselves: Explore a novel framework for the automated design, evaluation, and refinement of AI agents through iterative collaboration and self-improvement processes.
The system leverages advanced language models to generate agent architectures that consist of multiple specialized modules, each embodying expertise in specific domains such as physics, chemistry, and biology.
The framework operates by initiating a task that each expert module approaches independently, generating initial solutions through step-by-step reasoning. These solutions are then subjected to a cross-examination phase where peer critic modules analyze and provide constructive feedback on each other's outputs, identifying potential errors and areas for improvement.
In response to the critiques, the expert modules refine their solutions, incorporating insights and corrections to enhance accuracy and reliability. A final decision module then aggregates the refined solutions, performs comprehensive reasoning, and produces a cohesive and optimized answer to the original task.
This iterative process fosters a collaborative environment where AI agents can self-improve by learning from peer feedback and refining their reasoning strategies. The automated nature of the design and evaluation pipeline accelerates the development of robust AI systems capable of complex problem-solving through collective intelligence and adaptive learning.
The results demonstrate the effectiveness of this approach in creating sophisticated AI agents that exhibit enhanced performance and adaptability across various domains, highlighting the potential for scalable and automated AI development methodologies in tackling increasingly complex challenges in all industrial and medical areas.
#ai
#aiagents
#aiagent
#newtechnology
#science
All rights with authors:
Automated Design of Agentic Systems
https://arxiv.org/pdf/2408.08435
GitHub Code:
https://github.com/ShengranHu/ADAS
00:00 Automated Design of Agentic Systems
03:19 SOTA Hand-design Agents vs Meta Agent Search
05:33 GitHub Code repo ADAS
06:33 Real Code and Prompt for Meta Agent
09:35 Self-Reflection Prompt of Meta Agent
10:35 Framework Code
12:21 Python code for Review Agent (Minion)
14:45 GPT-4o explains Python Code of Agent
21:14 Template for an AI Agent: class LLMAgentBase
25:10 Search Function is central to refine Agents
26:23 Example of an Agent Design
28:12 Special Bonus: NEW Open Researcher for accelerating Research | 2024-09-03T08:03:04.680161 |
https://www.youtube.com/watch?v=ag-KxYS8Vuw | What's up, engineers? Andy Dev Dan here. Cursor has been absolutely popping off in the tech ecosphere. This is due to the new multi-file editing composer feature. We covered this in the previous video. I'm a huge fan of Cursor, but the truth is Cursor is a closed source commercial application and putting all your eggs in one basket this early in the generative AI age is a mistake. On the channel, we've been benefiting from AI coding and multi-file editing for over a year now, all thanks to the original LLM-based AI coding tool, AIDR. This is a coding tool that, as you'll see in this video, typically outperforms Cursor while giving you more control over the AI coding process. Not to mention, Ader is completely free with tons of LLM providers and Paul, the engineer that built Ader, puts out some incredible blogs to keep you updated on AI coding. So in this video, I want to show you another option outside of cursor to help you diversify your AI coding tools and we're going to look at Ader by making a change to a brand new Mermaid AI agent that I'm building that can help you build diagrams faster than ever. If you don't know what Mermaid is, it is an awesome text-based diagramming and charting tool. You can basically pass in simple blocks of text and get out node edge-based diagrams. Let's dive in by first showing off how our mermaid diagram AI agent works. So the mermaid agent is really simple. You run a command, you ask for the diagram that you're looking for, you specify an output image, and then optionally you can pass in a input file that the mermaid agent then reads, generates diagrams, and then you can on a loop improve the diagram let me just show you what that looks like i'll open up the terminal and i'm going to run this command so i'm just going to go ahead and kick this off and then we'll walk through exactly what's going on here so you can see the diagrams getting generated there and now we have this image fully generated so let's go ahead and just dissect that prompt a little bit right so we're using typer so i I have this function called meriter. And then we have this prompt, right? Flowchart explaining why communication via diagramming is so important for engineering great products. And then I say include things like time saved, faster iteration, increased trust, and more. Then I specify the output file. So based on that, we got this really, really cool diagram. The cool part about this tool is we can continue to iterate. So I'm running the iterate command and I can just say add additional nodes group by benefit. Okay. And so now with this human in a loop pattern, we're going to loop and hopefully we'll have our elements combined together. It's kind of squished. So let's go ahead and clean this up a little bit. I'll say drop subgraphs and make diagram LR. So now we're gonna make it run left to right. Awesome, so it looks pretty good, right? We can see that our benefits of diagramming are grouped together by time saved, clear communication, team alignment. And then the last thing I want to do here is I want every single node to go into engineering grade products. So I'm just going to go ahead and add that as a final prompt, right? Create an edge between the last nodes and engineering grade products. So we're going to ask that for that last change there. And we should get a full perfect. We should get a diagram generated there, right? This is a killer use case for generative AI. When you combine it with a fantastic text based tool like mermaid, you can rapidly iterate on asset generation. So if we go ahead and look at the generated output file, you can see this is awesome. But inside of our code base here, we're only saving the last generated file. So our task here today is going to be to take this file, move it into the output directory. You can see we have a couple other items generated here. And we also want to save all the previous iterations of this file getting generated right so let's go ahead and just make a couple tweaks to this code base let's get into the AI coding of this I have a couple key pieces of value I'm going to share with you here one of the great parts about AIDR is that it runs right in the terminal so all I'm going to do is type AIDR no auto commits We're going to run with dash dash sonnet. So you can see here we're running sonnet with the diff edit format. We have infinite output and in this code base we have 19 files. Let's go ahead and add files to our context window. To do that I'll type slash add and if we look at the code base here I want everything under source mermaid agent so I'll literally just type source mermaid agent slash star now you can see I have every single file in that directory I have all my python files so now if I type slash tokens you can see exactly how much every single run costs and you can see all the files in your context window so So this is fantastic. Let's go ahead and just ask the code base. Let's ask Ader. Let's ask our LLM what changes we need to make in order to move our generated image into the output directory. We also want to add the iteration number and save that as well. Go back in the terminal. I'll clear and I'll just type slash ask in what function are we saving the mermaid diagram images and so with all the context of the code base call 3.5 running on ader has shown us exactly where this is there's a function called mm and we have this save image locally if we go ahead and search all hop to this you can see that we just have this simple save command. It's just saving the file name that gets passed in, right? And the MM just builds the image based on the graph that gets generated by our LLM. And then we just save that image, right? So fairly simple. Let's go ahead and ask for the changes we'll need to move this to the output directory, right? And I'm being super verbose here, right? Just because we're walking this through and I wanna share how I think about AI coding with you. But normally, I would like, I already know where this code is. I don't need to ask. I would just go ahead and prompt it. Although I do want to highlight something here that we'll talk about in a second. Building up these conversations with AIDR with the slash ask command is super, super important. And you'll see why in a second here. So I'll go ahead and type slash ask. And we reuse existing functionality to save that file to the output dir. Okay, so as you can see here, there's this build file path utilities function, and we can go ahead and just grab this, right? And search all. So you can see here, we already have this pattern set up of using the build file path to write to our output directory, right? So it works just like this, right? We have this function, we have this simple constant that gets combined with the file name and make sure that the directory exists, right? So simple enough. So this is really cool, right? And our AI coding assistant picked up on this. And now we can do something really, really cool. So before I mentioned that, you know, normally you can just type the changes that you want. I've been using this pattern of running the slash ask command to have more conversations with your AI coding assistant. And then after you know the changes it's going to make, then you say, go ahead, make the changes. So that's literally what we're going to do here. Go ahead and make that change. So you can see there, we got the import and we have the update using the new output path. So go ahead and close this. Let's open up that file. We go to the mermaid.py. Our save image locally is now going to first run the build file path from our utils. So really simple. I just want to walk through that so you can see exactly how AIDR using the ask command can build this up for us. So that's a simple change. That looks good. Before we go ahead and run the example, let's go ahead and get that second change in there, right? So ask draft the change needed every iter file in our mer iter function. And I'll say every iter mermaid chart. Okay, and I'll run that. Sometimes when you run these changes and you run your ai coding prompts it changes too much and it makes mistakes and it does things wrong by running the ask command you can ask for a full-on draft you can see the reasoning behind your ai coding assistant and then you can say make these tweaks or go ahead and implement that so overall that looks good but all i wanted to do is save the new output file name and i basically only want this right so we're going to iterate while we're running this while true and we can go ahead and just take a look at this code in the main file here so you can see we have this loop during our main iter call and all we want to do is update to make sure that saves a new file in the output directory, right? You know, I'm just going to highlight this and only implement this portion. I'm just going to paste that in. Okay. So let's go ahead and see how it's done there. So now we have iteration count. We now have the initial output file. That's good. So that's getting saved. And then we should see our file getting incremented here, right? Exactly like what we were looking for. We have the iteration count and then we're saving this in a loop. Awesome. So this looks right. Let's go ahead and what we'll do is we'll open up a new terminal. We'll go full screen here and now let's go ahead and iterate on a pie chart. Let's use a pie chart as an example, right? This is one of the cool parts about Mermaid. There are varieties of charts you can build. So if we look at our readme, we'll have an example here of a pie chart. Where's that pie chart? There it is. So I'm just going to copy this command here, paste this and go ahead and update to use the mer-iter command. Okay, so our Membrane AI agent is going to go ahead and create this pie chart for us. Awesome. So you can see that looks great. It's iterating right now. So what I'm going to do is ask for a change I'll say you know coding testing documentation meetings go ahead and add a new activity here we're going to say add new pie slice mastering AI coding with nd dev dan go ahead and get that slice added there all right so this looks great let's go ahead and check our files so if we open this up you can see we created this under an additional intercession that's fine but if we look at these images we can see we have time spent on project tasks and then we have that additional you know five percent uh mastering ai coding with any data by the way if you're interested in the intersection of ai coding and ai agents definitely drop a like and consider subscribing to the channel. The way I see it, the world of generative AI is extraordinarily simple. You're either learning and using gen AI tech like AIDER, Cursor, and tools like our Mermaid AI agent, or you're getting left behind fast. On the channel, AI coding and agentics is our bread and butter. Here, we spot trends before they happen we extract value we learn and we move on don't fall behind and join the journey so let's keep going here let's go ahead and do something bigger if we go up to the top level here collapse all of our code you can see here we have two commands we have mer and then we have mer iter so we've been working on mer iter we you know created this additional session directory it made that change for us that's awesome i'm going to go ahead and create one more command here i want this bulk command and what this is going to do is basically create five versions of a flow chart right so sometimes you're not sure what you're looking for and you just want you know n iterations of it so all we should need to do here is pass in an additional dash c flag and that should do the trick for us. We're going to use our favorite open source free AI coding tool, Ader, and we're just going to ask for this change. Let's hop back to the terminal. I'll go ahead and clear the chat and I'll also clear all of our files. I'm going to go slash add. I'm going to add the readme and then I'll say, you know, add. Of course, we're going to need the main and let's go ahead and add our mermaid agent. And let's go ahead and add our mermaid call as well. And that should be good. So with these files, we can go ahead and type slash tokens to see what our context window is like that looks good and now let's go ahead and request a change so read read me and let's build out our new mer bulk command but add another dash C, create n versions of our diagram. I also forgot to add our type file. So I'm just going to copy this command and then I'll add our typings file here. This has all the types, all the pydantic types for this code. So you can see here we have a couple of different types. For every function, I have this, you know, one-shot mermaid params and then our iterate mermaid params. This is what gets passed in and created on the top of the method. So this gets passed into this actual functionality. So I'm going to say that. So I'll paste this back in and then I'll say create a new param type for this method and let's call it bulk mermaid params. Okay. So I'm going to go ahead and let AIDR make these multi-file changes for us automatically. I'm hoping in just one shot without having to do anything here, it's going to make all these changes for us. Let's go ahead and see if AIDR and Sonic 3.5 can one-shot this new feature. All right, nice. So we got a little anthropic server error, but this is awesome. So it's going through, it's making those changes, it added the type, it added the new command, it's importing it, it's updating the Mermaid agent to make these changes, and now it wants to know if we need to create this input file.txt i'm not sure what this is let's go ahead and hit yes we'll see what that looks like in the end so all right so let's go ahead and hop back to our editor and now we should be able to look through all of our commands here so we can see we have our new merb balk command and you know what i'm not even going to look at this code right we can see we have our new mer balk command and you know what? Not even going to look at this code, right? We can kind of see that it looks like it's in the right form. All I'm gonna do is rerun a prompt. So I'm just gonna go ahead and copy this. So you can see here, we have the prompts. So this is a flow chart of the setup instructions. We're gonna call this setup diagram.png. We're loading our.readme. So literally this readme file right here, right? And then we're going to say dash C5. And that's the new flag that we have right here. And it's going to be the count. And so let's just go ahead and run this. Let's see if AIDR did this all for us in one shot. shot okay so interesting results there um it looks like it did generate a couple different versions. We have install UV, install dependencies. And before we look through these diagrams, you can see here that that's exactly what our setup instructions look like, right? We're using UV, the brand new hyperfast Python dependency manager. You use UV sync to download all the requirements, set up the open AI key, and we have a couple optional setups. And then we have just running our commands, right? So if we look at the diagrams, these look really great, right? So we have install UV, install dependencies, setup key, run single, run interactive, and then we have our brand new run bulk, right? So this was one version. We can go ahead and close this. We have another version right here. This also looks good. Has an additional done node. Some of the node titles are a little simpler. Looks good also. This one's really interesting, right? So this is the power of generating multiple versions, right? You might want something more verbose like this. So we have start, install UV, install dependencies with UV sync. Nice. I like the additional detail there. And then we have set up open AI key as MVAR. Awesome. Then we have this optional decision diagram. This is really cool. You know, we can also in this code base set up Anthropic, Vertex or Grok. And then we have run a single generation. And then if we want the interactive generation, you know, we want our MER-ITR. So we also have that as an option and then we have just a simpler version here right so just top to bottom option only set the other keys and then we can run single run interactive or run bulk so it actually looked like this this did work perfectly in a single shot right i won't bother you with going through the code but uh you know we can see that um aider generated all this in one shot. If we look back at this run, it needed to edit three files to make this happen, right? And we could also see, which I really love, you can see we sent 18K tokens, received 1K, that costs 7 cents total, session total is 30 cents. Really, really nice to see those metrics right at the top here. As we discuss on the channel all the time, the cost of LLMs is going to zero. The context window is going to 150, 100 million and beyond. A really interesting post just came out backing that we have 100 million context token long window with this company called the Magic Team, which they're new on my radar. Apparently they've been teaming up with Google Cloud to crack the 100 million token contacts. We'll be keeping an eye on this on the channel. You know, if we hop back here, we can see that, you know, across three files, Ader and Claude 3.5 Sonnet just nailed everything that needed to happen to generate this change. And we even got, you know, this, the mermaid bulk params, which is exactly what we asked for. If we open up our typings file, we can see we have bulk mermaid params right here. And it looks perfect, right? We got the prompt output input count. And this is really awesome to see. So big shout out to Ader. This is a fantastic tool. I wanted to share this with you because I don't want you to be locked into the closed source tools. Like I mentioned, I love Cursor. I use Cursor, but I also love AIDER and use AIDER. This one's free. It's open source. It's a lot more customizable. Paul also has some really, really killer insights and some incredible benchmarks. So, you know, if you check out the AIDER LLM leaderboards, a large portion of the AI coding ecosystem looks at this web page right here to know what is the performance of this new model on real code bases using AIDER, right? So this is really cool. You can see Cloud 3.5 Sonnet, as we've been saying on the channel. It is the best AI coding assistant, large language model by far, hands down. This is the benchmark to look at. Paul, with his work on AIDR, is just the leader here in the AI coding space. I would love to see Cursor put out some benchmarks like this, where they compare and use their tool, their prompts, to create some concise benchmarks for us to understand and internalize how they see model performance with their tool. So anyway, tons of value here. Definitely check out AIDR as an alternative to Cursor. As I've mentioned in the past, I go back and forth between these tools. But when I'm coding deeply, when I'm working fast, there is just no more precise tool than AIDR. You come in, you can check your token usage, you can add and remove files, you can add and remove files, you can add directories. Huge fan of this. It's a lot simpler. AIDR really leans on the power of the LLM and the simplicity of a just a really clean AI coding architecture. So anyway, that's the Mermaid Diagram AI agent. This is going to be in the description for you. Feel free to check this out. Regardless of how you generate your diagrams, I highly recommend you dig into some type of gen AI tool like this. Even if you just go for, you know, right for the simple cloud or chat GPT interface, generating diagrams on the fly is so, so important. So valuable as an engineer, right? And if you are a mid senior plus level engineer, you know that communicating the value of your work is incredibly important. And creating diagrams helps you do that in a faster, more concise way. So that's the power of an AI agent like this. Just being able to generate diagrams on the fly, iterate on them, and having a quick iteration human-in-the-loop pattern to do that will help you generate these diagrams faster, which will help you communicate your ideas faster for both your personal and career work. Definitely spend time, communicate your ideas well, use diagrams to help you do that, and use Gen AI to help you build your diagrams to help you drive that effort forward. Again, I'm a huge fan of Cursor. I use it, but it's important to diversify. It's just too early in the AI coding game and the generative AI age to be committing and putting all your eggs into one basket, into one tool. I like to stay close to the prompt, stay close to the metal, stay close to the tooling, and tools like AIDR allow you to do that. We've been at this for longer than a year and a half, almost two years now. And we are just scratching the surface of what's possible with these incredible AI coding tools and with generative AI. If you want to stay up to date and catch these trends before they happen, join the journey. Drop the like, drop the sub, and I'll see you in the next one. | Cursor is great BUT Aider is the OG AI Coding King (Mermaid Diagram AI Agent) | 1,326 | IndyDevDan | 20240902 | 🔥 Why Aider? Simple - Because Aider is THE Original LLM-based AI coding tool.
🍓 AIDER
https://aider.chat/
🖼️ Mermaid JS AI Agent
https://github.com/disler/mermaid-js-ai-agent
🔐 Great builder.io article on Cursor, OSS, and Lock-in
https://www.builder.io/blog/oss-consequences
💻 Our Cursor Composer Breakdown
https://youtu.be/V9_RzjqCXP8
⏰ Aider Review 1 YEAR Ago
https://youtu.be/MPYFPvxfGZs
Unlike Cursor, Aider is open-source and completely free, offering you more control and customization over your AI coding process.
With support from multiple LLM providers and incredible insights from its creator, Paul, Aider is designed to keep you ahead of the AI Coding curve.
🥚🥚🐣 Don't put all your eggs in one basket - explore the benefits of open-source AI coding tools like Aider. Whether you're a seasoned dev or just starting out, this video will show you how to:
- Use Aider's terminal-based interface for precise control over your AI coding process
- Implement the "Ask - Draft - Change" pattern for more accurate code modifications
- Develop new features for our Mermaid JS Diagramming AI Agent with minimal manual input, leveraging Aider's advanced AI coding capabilities
🎥 In This Video:
- Discover why Aider, the original LM-based AI coding tool, often outperforms Cursor.
- Learn how to leverage Aider's multi-file editing capabilities to enhance your coding efficiency.
- See a live demo of building a Mermaid AI Agent to create stunning diagrams effortlessly.
Understand the importance of diversifying your AI coding tools in the rapidly evolving generative AI landscape.
🛠️ Key Topics:
AI Coding: Experience the power of AI in coding with Aider.
Multi-File Editing: Seamlessly edit multiple files with ease.
Diagram AI Agent: Use Mermaid.js + Generative AI to generate and iterate on diagrams quickly.
Human-in-the-Loop: Enhance your agentic coding with iterative feedback and adjustments.
Open Source: Enjoy the freedom and flexibility of an open-source AI Coding tool, AIDER.
🌟 Stay ahead in the world of AI programming and AI software engineering by subscribing to our channel.
Stay focused and keep building.
📖 Chapters:
00:00 Cursor Pop off BUT BEWARE
00:30 Aider - The Original LM-based AI Coding Tool
01:27 Mermaid Diagram AI Agent
04:10 AI Coding with Aider
11:35 Multi-File AI Coding with Aider
17:45 Cheaper LLMs and 100m context window is coming
18:50 Aider gives you incredible AI Coding Insights
#aicoding #programming #aiprogramming | 2024-09-03T08:08:52.005682 |
https://www.youtube.com/watch?v=NbyxWAC2TLc | Here we go. Hello, good morning, everybody. Good evening, good afternoon, and thank you for joining another Neo4j Live today about the Knowledge Graph Builder app that my two colleagues Niels and Morgan have developed over the past. How long has it been now? A couple of weeks, maybe? Months? Yeah, I would say months. months? Yeah. So that has been a recent addition to the GenAI ecosystem, I would say, and all the interesting projects that came about end of last year when everything turned into a GenAI application. Obviously, we also added one, and you probably know Niels from an earlier stream. Maybe you know him from an earlier stream. He is the mastermind behind the NeoDash application. It's a dashboarding tool. We showcased that a couple of times in this series and also on nodes. So welcome, Niels. So good to have you back, kind of, today. And then we have uh we have morgan as well morgan is is is new to the stream but nevertheless even more welcome to to having you here today you are um where are you based actually morgan in uh in france um no so i'm based in london i'm french uh from the north of france but i'm based in london yeah yeah cool um and you uh what's your what's your story maybe you want to share a few words of uh of of what you what you do and how maybe how did this uh this collaboration uh come to come to yeah yeah sure absolutely um so as you said right my name is morgan i'm part of new 4j professional professional services um i started my journey with new 4j five years ago now, so quite a long time, probably the same as you, Nils. And so my job is basically to guide and advise on how to use Neo4j in the best way possible, not just obviously on the function and feature we provide at Neo4j, but also how to make it work within your ecosystem or environment. And obviously lately I've been focusing a lot on, obviously as we all know, AI and innovations. So that's what I do, yeah. Yeah, cool. And then this, I don't know, we can dive right in, but maybe what was the most, maybe before a little bit to get to know everybody a little bit better is like this hype, this everything Gen AI, everything LLM kind of. How was this? How did it feel for you? How was it? Did you feel like immediately something has to happen, something needs to add? You know, maybe Niels, did the first thing you saw, i need to add this to neil dash um yeah i i mean i can i can start so for me this was really um a feeling like you know there's a lot of hype there's a lot of talk but i want to make something real i want to make something that people can use and actually use that to to to create some value and not just slides and words that was my idea yeah probably morgan you're the same yeah yeah i'm totally the same on that i mean obviously we saw all the talks on the slide on how to how we should work in theory but we never have actually have like a real uh demo or actually working solution from hand to hand uh i mean in the past like years when it all started. And the way I see it really is basically just the way we saw like Google for the search engine like years ago. It's so nice tool. Obviously, everyone has to use it. And if you don't use it, obviously, you're going to be left behind. So, yeah. Yeah. And it's good to see something that I think that's a good point. There's something that you can actually try out something that you can see that you can, that you can then explore yourself a little bit and then that can, that actually works. You know, it's not just as you said, it's not just some slides that, you know, always very, very, paper is very patient as they say in German. So it's, you know, you can, you can, you can, you can put as much as you like on there, but you never have to do anything about it. But if you see something there and if something then comes to life a little bit and you see some results directly or you can execute something and then you say, yeah, this is cool. This works. This gives me feedback, gives something back. So that's pretty cool. How did you two then, how did the collaboration, do you want to say a few words about how did that happen? Do you want to take it or should I take it? Yeah, no, you go ahead. Yeah. So basically it all started from an initiative from Christopher Crosby and Michael Langer, which you're going to see in the slide. Actually, we do have a small slide with the team on this application. But so basically the idea was, okay, so we do have large language models, which obviously we can use for a lot of different things. In Neo4j, we obviously are trying to innovate on that aspect as well. So that's why you see we do support vector search for example, vector indexes and integration with those. So now the idea was to make it work from hand to hand. And we're going to discuss about that in the first few slides. Neo4j can be used as the knowledge graph, right? Which means basically take all of your knowledge, put it in a graph. All the documentation, documents you have on your company, on your processes, on your application, whatever documentation you have, just put it as a knowledge graph. So you have the knowledge represented as node and you have the relationship connected between each node. And then the idea was, okay, now that we have that, what if we create embeddings on top of it? And based on the embeddings, we can actually try to use that information, the knowledge from unstructured, into what we call GraphRAG, Ritual Augmented Generation, which is basically ask the large language model, which doesn't know about your knowledge, they don't know about what you have internally, your documentation, your processes, but feed them that information from your knowledge graph and get obviously better results, transparency and explainability. Pretty cool. Pretty cool. That sounds very exciting. Before we dive in, I just want to say hi to everybody who's watching live. Good to see. I see people from Lake Constance. I see a couple from Paris, Texas, Seattle. Good morning, Rob. So that's an early morning for you. So hopefully you have your coffee ready and we can entertain your morning here. Chicago, London, Cambridge, Massachusetts, Bengal, Luro in India. Hi, welcome. Geneva. So Bordeaux, Rome. So all over the world. So thank you very much for watching. If you have any comments, any questions, anything that's not quite clear or anything you would like to know a little bit more about during this showcase today, then please let us know. Type it in chat. Use the chat. It's supposed to be interactive. So if you have any question, anything is burning under your nails and kind of like, oh, I want to know more about this or that or how it's done or something else, then, yeah, you know how to do. Use chat. But hi, everybody. Cool. Yeah, should we dive in? You said you had a couple of slides to outline the project. So should we start with that or what do you prefer to do? Yes. Yeah, I'll start with slides. I know I'm contradicting myself starting with slides, but I just wanna quickly set the stage and I promise it will be very short. Yeah, yeah, that's cool. All right, let me share my screen and hope that goes well. Can you see it all right? Not yet. No, I don't see your screen. OK, wait a second. Something is wrong here. How about now? Oh, yeah. Here we go. Perfect. All right. Cool. So super quickly, to set the stage about what we're doing here, I promise, as I said, it will be short and we'll get right into the demo and show you something really cool. As you already heard, so Morgan and I, we work together in the field team at Neo4j, which means that we are engineering together with our customers, building Neo4j solutions. So this could be anything from data modeling, querying, visualization, or even integrations with LLMs. So we like to build solutions on top of Neo4j to make our customers happy. And what we talk about today, I will do a super quick intro about Neo4j to make our customers happy. What we talk about today, I will do a super quick intro about Neo4j and LLMs, how people typically use that together. Then Morgan will take over and dive a little bit more into the technical details, how we built this demo. We'll do a nice demo of building a knowledge graph and then also a chat interface on top of the knowledge graph that you're creating. Finally, we'll do a super quick wrap up of everything that we talked about today. So intro to Neo4j and LLMs. So Neo4j, obviously, we're all about graphs. But we're also doing recently a lot of integrations with generative AI and large language models. So what does this mean? There's a lot of things you can do with GenAI, but for us in Neo4j, this really mainly focuses on two big areas. The first one being GenAI tailored to your organization. So this means how can I actually use this technology, these large language models, with my own data, answering questions on my own documents, my own tables, my own graphs, right? And that turns out it's not so easy. I mean, it's easy to spin up JetGPT and ask some questions about public data, but making it work with your own data is quite tricky. So we're going to be showing some stuff called grounding, which means hooking up a large language model into a knowledge graph and using that knowledge graph to answer questions about your own data. And that's what we call, as I said, grounding or graph rag as Morgan already alluded to. So that's kind of on the front end side, like how do you consume a graph with which NAI, but we also have the number two, which is the backend side. How can I build a graph with Gen AI? So using large language models to create knowledge graphs. And especially what we talk about is natural language entity recognition, NLP, right? So finding in a document of unstructured text entities and turning that into a graph. And this is also something that we'll show. And these two kind of logically follow one after the other, right? If you create your graph with a large language model, you have your knowledge graph, and you can use that same knowledge graph to answer questions in natural language from users. So two and one, together they kind of make up this recipe of a Gen-AI plus graph use case. And then number three, I think something that we will also show is that we have all these partnerships with cloud providers like Azure or GCP or AWS, but also a lot of integrations with open source tools like LandChain and many, many others. So we're really well integrated with any kind of provider of LLM or tooling around that, which makes it super easy for you to build your own Neo4j plus GenAI solution. We love to open source things, so also the tool that Morgan will be showing later is completely open source for you to use and try out with your own data. Drilling down a little bit more on what I said, these two buckets, right? First, the jumpstarting knowledge graph at llm or building a graph using our flames models we have two subcategories there the first one is the very very first step of data creation turning your entities and relationships creating your entities and relationships from a unstructured data into a graph so we built this kind of reference architecture this flow and we'll have a better picture of that later on to turn your documents your unstructured text into a graph the second one here is curating knowledge graph with query generation so once you have your initial graph maybe you want to make some changes and you could even do that with natural language you could say okay fix this fix this node here connect to this this other node. So that's step two in the process. Then on the front end side, as I said, this is all about chat interfaces, natural language interface. How do I consume data from my knowledge graph? So this is the famous example, chatbots, that we all love. Natural language queries, I don't wanna write Cypher, I don't wanna write SQL, I wanna ask a normal human question to my data, to my graph and get the answers from there. So that's something that we will also show. And then the graph powered search, which is grounding. How do I grind? How do I ground my search in reality? How do I ground my possibly hallucinating LLM into a trusted knowledge graph? So these are the four main architectures that we see when we're building these kind of solutions for our customers. So as I said, we're going to be drilling into some of these today. So these two is the one that Morgan will be talking about later. And just to give you a feeling for what that looks like, what are the components involved in this entity extraction flow? So this is a very high level diagram of what that looks like, what are the components involved in this entity extraction flow. So this is a very high level diagram of what that looks like. You got on the left here, you've got your data sources. So you have unstructured data, as I said, documents, text, even videos, images can be as crazy as you want, but you can also have structured data that you insert into your knowledge graph. Then using your large language model, you extract entities from these documents and turn that into a graph, and you can use a wide variety of tools for that. So your data, your documents get turned into a graph, and you can also create vectors on that and use our vector index to create that, of course. So this is all about building graphs right um as i said once you have that in place you go on to the next step which is building a natural language interface and again you've got a whole orchestra of tools here a lang chain llama index any kinds of tools here to to build your own interface on your own graph and this is again where we're really nicely integrated with these tools. You'll see some on the list here. We have APOC procedures, open source APOC procedures that you can use to call LLM providers. Again, there's a lot of stuff here. You can build your own UI with Streamlit or even with JavaScript, any types of things. We're trying to make it so easy, as easy as possible for you to build a Graph plus LLM integration. So this is a high level of review of what our GNI landscape looks like. And I'll just now hand over to Morgan, who's gonna zoom in onto this specific project that we're talking about today. Thanks, Niels. Maybe I can just take it for a night and share my screen. Let me do that here. Screen, here you go. Can you see that? And before I even start actually talking about the VISA, just a couple of questions, actually interesting questions in the chat on the transparency and explainability as well as what's the difference between graph rag and vector rag. I'm not going to answer that right now, but you're going to see during the demo, you're going to have the answer to those two questions. So going to the next one here. So what I'm going to show you in the demo here, and just going back to the premise Niels made, it's not going to take long. I just have like three slides, just obviously to give you some context on the application, how you can use it and how we made it, right? So the application which we're going to demo is basically a simple and fast demo, basically application to onboard text to graph. So basically any text you have from unstructured documents like PDF or from cloud storage or any document you have into a knowledge graph it's usable for end users meaning that you're going to see in the demo you can use the application load your knowledge in a graph and start asking questions to the chatbot with zero code without any knowledge around how to code or do anything and it's also a good starting point for developers because as nils was mentioning everything is open source we're going to link all the github repository and all the sources so you can basically just take it and you can extend it as you want and i'm going to show you as well how i made some extensions already on some part of it and then obviously uh what's what's the aim goal is basically to demonstrate how graph rag compared to vector rag is powerful in combination with llm genii and a knowledge graph on neo4j so how does it work well i'm gonna go quickly through that because Niels already gave all the pointers. But basically what we do is we load all your documents, which can be, for example, PDF, as we were mentioning. We do the split on it, so we create chunks of that document. We use the LLM to extract nodes northern entities on the chunks we merge that northern relationship into the neo4j knowledge graph and then we use a chatbot interface to do graph rag the architecture is as you can see here on the screen from left to right so you do have data sources which can be as again as i was saying document pdf videos anything you want uh we actually have this what we call lm graph builder which is the application i'm going to show you so you do have two sides you do have a react.js front-end application and you do have a python fast api back-end obviously we do have the api layer in between to make the front-end communicate with the back-end so you can extend it as you want as well and then we have the large language model which we do support quite a few models which you're going to see the demo. On the knowledge graph aspect, so we do have Neo4j RR, obviously you can use any Neo4j database you want, you can run it locally on-premise or you can use our self hosted Neo4j RR. You do have Neo4j Bloom, which is the end users to show the graph. You're gonna see when we create the knowledge graph, I'm gonna show you nice visualization to see the the graph. You're gonna see when we create the knowledge graph, I'm gonna show you nice visualization to see the knowledge graph. You do have the graph database, so Neo4j, and then we do have graph data science because we are using embedding and then similarity on the chunks. So remember when I said, we take a document, we split it into different chunks, then on the chunks, what we do is we generate embeddings and then we create similarities based on the embeddings between chunks. And then what we have is we do have the chat interface, which is also part of the demo, which is all around ReactJS front end and then communicating with the python fast api um in terms of end user application and use case you can see here on the right we do have few obviously use cases application that we see where you can use this but obviously it's not exhaustive and you can come up with basically anything you want really where you have your knowledge and you want to ask questions about your knowledge so in this one for in this list we do have obviously faq boats we do have supply chain which i'm going to show you pharma pharmaceutical industry health assistance all these kind of things. So as we were saying, all the projects, so everything, the front end, the back end, the whole solution is open source. So you can see the GitHub repository here on the top. Obviously we're gonna share all of that so you can access it. And we do have a couple of more links to make it easy for you to find all that application. As I was saying, the frontend is made with ReactJS, so you can connect to Neo4j. You can see you can load local file, PDF file, you can load from S3 or Google Cloud Storage, or you can load YouTube and Wikipedia sources. And we do have a chatbot which is integrated within that application. On the backend, which is met again with Python fast API, what we do is we do store all the data. So every knowledge that you pass, we do store that in Neo4j. So meaning you're gonna see that in the next slide, but meaning the documents, the chunks and the entities, everything is stored on Neo4j. We do the chunking as part of the backend. We create the embeddings on the chunks. We do the entity extraction. So meaning you have a chunk of a document, we detect and extract the entities with the node and relationship in that backend as well. And we do KNN on the the chunks so you are able to see how similar chunks are connected to each other's everything is running on docker images so you can simply download the github repository do docker compose up with your environment and you will be able to run it locally or you can use the one we include which you're going to see in the next slides um the step-by-step process so i was obviously mentioning it a bit earlier but the way it works right you are going to have the front end where you can drop files pdf youtube videos whatever what's going to happen then when you hit the generate graph is we're gonna basically take that document store it as a node in Neo4j with different information. Based on this document, we are going to apply a chunking strategy. So each document with langchain loaders, and then we do split the document into different chunks. So one document can have one, two, three, five, ten chunks. So one document can have one, two, three, five, ten chunks. Then once we have the chunks, obviously we store them into Neo4j, into the graph database, and we connect them to the document that we just created. We do create the similarity based on KNN between every chunks. So you are able to see if two chunks are very similar to what they're saying you were able to create a similar relationship between them we generate embeddings on top of the chunks and we do have a vector index on those embeddings and then what we do to extract the entities on that chunk so for each chunks what we do is we do we use the LLM graph transformer or divbot graph transformer to detect those entities and relationships. And then once we obviously detect them, we load them into the graph and connect them into the chunks. So regarding the team, and then we can move into the demo. So either we're saying the PMs on the applications are Michael Enger and Christopher Crosby. Mostly the team who done all the development or the contractors called Persistent. So you can see the teams here in the middle. And then they do have a few helps from different person in your 4g so you can see too much one but which you might know if you read his blog post and all the work is done on length Shannon or the geni and LLM integration then you can see myself nice Oscar and Jim alright so let's go into the demo now. So what I'm gonna show you today is, so obviously here I'm using Confluence for the documentation aspect, but it can be anything, right? So my idea was to show you, let's say you have Confluence or any other tools, Notion, for example, or anything else, where you have all your knowledge, knowledge about your industry or your application, your APIs, your development processes, whatever you have, right? In this case, I do have it on Confluence. So I'm gonna show you a supply chain demo basically. So you can see, I do have few pages. This introduction for example is about a fake company called Graphac.me, which is based in Brussels, Belgium. They are making a bunch of different products and they do have supply chain processes and strategy. On the product specification, we can see we do have two fake products, which is EcoPack and WebRestoreustor eco pack is basically a packaging solution produced in bangladesh and then we do have robusto but is in china which is a small component polymer component so we do have obviously more pages here i'm going to load all of that into the into new 4g with that application and then we're going to be able to answer questions. Now, the reason we took that demo about supply chain is because, not sure if you saw that in the news, but there is a new law which has been approved last week, which is CSDDD, which is going to have an impact on the supply chain companies. And that has been approved, not voted yet, but approved last week. So obviously, this is something which is really recent. And if you're asking the LLM, you wouldn't have this kind of knowledge, right? You wouldn't have both, actually, neither the knowledge on this fake company or this new law. So what I'm going to do now is i'm gonna show you the ui of the application so this is what the graph builder looks like um so you have here on the left the menu to load your data from different data sources so you can see this drag and drop or browse here to load your p PDF files from local file system. You can load YouTube videos, you can load Wikipedia sources, or you can load PDF files from an S3 bucket or a Google Cloud Storage. So obviously for the Cloud Storage, if you want to make it work, you need to provide your access key and secret keys, but then you basically just pass the bucket name and then going to list all the PDF file within that bucket. Now what I'm going to do is, if I can show you here what I've done on the Confluence page, I basically just exported all my pages into PDF. So if I open just one PDF here, just to show you that, it's really just the same thing that I showed you earlier, but in the PDF format. What I did is I extracted this article from Forbes as well. Let me show you that here. So that was from last week. Forbes made an article about this CSDD law, which has just been approved on the 24th of April. And they have some comment and link to different sources here. So I'm gonna load all of that. So I'm simply just like gonna drag all of that. First of all, obviously I'm gonna connect to Neo4j, to a Neo4j database. If you don't have a Neo4j database, you can click this link. And what it does is basically take you to Neo4j Aura, our self-hosted version. So in this case here, I do have a database. So I won't have to create one. But if you don't have one, you can head over here. You can create a free database for free. You don't need to provide any credit card or anything like that what's what's nice about this is once you created your aura database you might be familiar with that but you can download a text file which will contain your environment configuration which is the database url the username and password all these kind of things you do see here a small drop zone where you can drag and drop an env file well it does support the one you get from your folder rs so in fact i can just take this one drop it here and you can see username password and uae or autofill so that's really cool let's connect to this database you can see obviously no data is present, I don't have anything on this database, it's basically just an empty database. Now what I'm going to do is, as I was saying, I'm going to drag all those PDFs, so my Confluence pages and this article from Forbes, into drag and drop. And you can see obviously it's uploading the files, so you can see different things, we can see the name of the files, you can see the status, you can see whether we were successful in uploading the file, you can see the size, the type, which you can see we do have different types, the sources, so in this case we are local, the model, which we're going to talk about that in a second to create the knowledge graph, and the node relations, durations, and processes when we create the knowledge graph and the node relations durations and processes on when we create the knowledge graph now what i'm gonna do is just to showcase the other stuff i do have a youtube video which is this one here which is talking about uh the law obviously csdd uh it's a five minute video which is talking about um different things on what what's the impact of the laws how this company is gonna change the way of working all these kind of things so I'm just gonna grab the your head from this YouTube video I'm gonna go here click on YouTube and you can see you really just have to paste the link YouTube link you can click Submit and you go and if we go back here you can see the YouTube video is here on the top so it does get the name right from the YouTube video and you can see in this case here we do have YouTube as a source you can click on it and it's going to take you to the YouTube video I'm also gonna just to showcase here I'm gonna show you how to load sources from Wikipedia so in the way it works is you can provide different Wikipedia sources separated by a comma so in this case I do have two Wikipedia pages I do have this one about the law itself and I also have this one fromadesh and you can see why it's because um if i go back to here you can see here i do have like some product um who are produced in bangladesh so i just wanted to showcase you how the graph database is going to make these links between all different things so what i did is basically just get the sources from those two wikipedia sources so bangl Wikipedia sources, so Bangladesh and then VizLaw, and hit submit. It's going to take just a couple of seconds just to load it from Wikipedia, getting all the different data from those sources, and there you go. You can see now we created two other documents, so one for Bangladesh and one for Vizslo, which are taking their sources from Wikipedia. And you can see again, we do have the link, so we can click on it. Now, on the bottom of the screen here, you can see the list of different models we do support. So we do have DivBot, we do have Gemini 1.0 Pro and 1.5, and we do have OpenAI 3.5 and 4. And by the way, I didn't mention it, but all this application, as I was saying, it's open source, so you can rent locally. But we also provide a self-hosted version that you can play and try on your hand. So we're going to link that after the demo. But then, obviously, you can play a bit with different models obviously we do recommend for your testing or playing with the data sets with your data set to use 3.5 because of C's cheaper in terms of tokens price per token in this case I'm just gonna use GPT-4 because all the size or fairly small and on that so I'm gonna click on generate graph here and you can see it's gonna start processing all the different documents it's not it's all we see in a thread and environment running in parallel so we do process multiple nodes at the same time multiple chunks of each document so you might see some documents finishing there you go so you can see one for example here which completed faster than the others and you can see for example here as i was saying you do have different informations on the knowledge that we built so you can see for this specific document we created 14 nodes five relations and it took us 15 seconds now you can click on the small glass here it's going to show you the the basically the graph the knowledge graph which has been generated based on that document but I'm going to come back to that once everything is loaded because you're going to see a better view of everything there you go so you can see in less than what couple of minutes maybe we loaded all those pdf document youtube video and wikipedia sources into new ford so we do have quite a few nodes what you can do now is click on show graph and what i'm gonna do is I'm gonna remove the chunks here and you can see different information so you can see obviously different document different chunks well I'm gonna look at here let me see if I do for for number you go you can see the main reason why I wanted to have this this Wikipedia sources from from Bangladesh is because you can see here we created a node country which is Bangladesh. We get all the different sources from the Wikipedia sources so you can see all of that is basically coming from Wikipedia. What you can see here on the right is we do have a product RoboStore from a company called GraphAcme. Sorry for the coloring here on the white screen, that's something we probably want to change. But this one is basically the second product, which is the the EcoPack product. So you can see GraphAcme from the conference page does produce two two product this one you can see is produced produced in Bangladesh so going back to the question on obviously graph rag and vector search well that's something which you can see already which is quite nice when you have your knowledge graph into graph and not not just vector database is because then you can link all the data you have in your knowledge whether it's structured or unstructured into all your different sources as you can see from my knowledge from Confluence being linked to some knowledge from Wikipedia. All right so that's for the knowledge graph kind of constructions um now what i want to show you is the second aspect of the demo which is about the chatbot interface so if you click on this button here you can see we do have this uh small pop-up which uh appear on the screen so obviously you do have different information here, you do have the source here which link back to Neo4j but you can start obviously asking questions here. Now what I'm going to do is, because I see it's already 37, I'm going to do both into instead of doing the demo here, I'm going to show you the same chatbot interface but in the bigger larger screen because then I can show you what you can do in terms of extensions as well so this chatbot interface is basically the exact same as the one you see here on the screen but I just added some extensions to it so what I'm going to do here I'm going to ask the first question which is going back to our conference pages you can see we do have different product we do have performance criteria so what I'm going to do is I'm going to ask what are the performance criteria of this product and where is it produced and who produce it there you go so now what's going to happen is sorry what's going to happen is we're going to basically create we're using basically graphag to create the graph structure, the chunks, based on the MNNs, and then we're gonna provide a response from different sources, from unstructured information and from certain informations. There you go. So there's a small glitch here on the sources, but let me ask another question just to showcase that. So you can see obviously it gets it right, EcoPak is being biodegradable, moisture-resistant, durable, etc., whatever, which is basically what we have here. You can see the production in Bangladesh and produced by GrafAgme. So if I look at here, it's correct. It's produced in Bangladesh and GraphAqm. Now let's say we don't know who this company is. So let's ask him who is GraphAqm. Here you go. and it should be in a couple of seconds we should have the response okay there you go and yeah so you can see this one was just a small glitch. I probably have to have a look at it. But then once you have the sources, going back to the explainability and transparency, you can see what we do is basically we list the sources from the question you ask. So for example, here we do have the question, who is GraphAcme? So we do, as I was saying, we do create the structured and unstructured information. So you can see it's giving a lot of different information, not just from one document, but from two documents in this case, with different information which you can fact check. But then you can see it's actually listing the sources from where it got its information. it's actually listing the sources from where it got his information. Now, going back to the new law which was approved last week, what I'm going to do is I'm going to ask him... okay so we got it... impacted by the new csd low and let's say and if so how and let's see what he gets now while it's actually generating that what i can show you as well is uh talking about the extension you can see here a bunch of different icons here um So what I had is basically turn up and turn down in terms of providing feedback on a response. You can also generate the response if you are not happy with the response and there is a text-to-speech as well. So what I'm gonna do is wait for the response here. Okay, here you go. So you can see we do get the response from a different documents here. It's obviously gonna list that GraphAcme is impacted by this new rule of course, because we all look at it in Europe in brussels belgium which was part of one conference pages um but and but we do produce uh product in bangladesh and china um so that's how uh what i'm going to do is i'm just going to stop sharing quickly and share my screen again because i want to share share with you the audio for the text to speech just to show you that you go. And I can't, I can't share the audio with this full screen. Sorry about that. But you can see if I go back here to this one here, if I click on play, we're going to basically have text-to-speech on that as well for accessibility. GraphAcme is a company that manufactures and supplies sustainable packaging solutions, including the EcoPack and components for automated warehouse systems. Their products, such as the EcoPack made from jute and bamboo, are designed for eco-friendly packaging and home decor, while their RoboStore components are intended for industrial use in automated storage systems. GraphAchemy is headquartered in Brussels, Belgium, and operates production facilities in Bangladesh and China, focusing on quality assurance, customization for bulk orders, and compliance with industry standards. All right. Probably should have taken a shorter one, but here you go. Then so what we're going to do now is so obviously we do have whether the we do get information on the EcoPak or another product on the company, whether we are impacted and all of that. That's really cool. But now obviously, now that we know we are impacted what i want to check is uh has this law been approved because i want to know uh if it's already there or not so i'm gonna ask when uh and obviously maybe we can ask also uh what is the current status of that law what is the current status of that law. And obviously, as we were mentioning here, while it's generating the response, this whole demo here is made on this fake supply chain data set, apart from the law, obviously the law was a real thing from last week, but you can apply that to any knowledge you want, right? It doesn't have to be just about supply chain, any knowledge you have internally at your company or even within your environment, you can just do the same thing here. Okay, so you can see the new law was approved. It has been approved on April 24, 2024. And then obviously giving a bunch more different information on that. Now what I want to do, the last thing I want to show you as well is about the multi-op questions. So what are multi-op questions is um something that you can actually test your when you bring the chatbot um uh what you want to test is this multi-op questions is basically a way of double checking that not only does get the context but it's also being able to process reserved based on different informations so what I'm going to do here is you can see this product, which is RoboStore, which is a component, does have this dimension. And you can see this echo pack, which is the packaging, does have the dimension, which is lower than the RoboStore. So obviously with one echo pack, you cannot fit RoboStore into that. So I'm gonna ask him exactly that and see what he respond to that. So I'm gonna say, can the RoboStore be wrapped inside the echo pack for shipping purposes? And let's see how we reply to that while it's generating the response well I can show you as well is a couple of things so obviously this is the good a proposed tree so again everything is open source so go ahead get this repository you can clone it you can play with it locally give it a star if you want to if you want to you can so fork it, you can play with it locally, give it a star if you want to. You can also fork it if you want to extend it. You do have this GenII ecosystem and all the links, I believe, Niels are going to show you that in the next slide when we're going to go back to the presentation. But this GenII ecosystem, which basically lists every initiative we have at Neo4j about GenIA and LLM. And you can see in the example project, we do have the LLM graph builder, but that's not the only thing we have. But if you click on this one, basically, and you scroll down, you have some explanation how it works. We're still seeing the chat bot only. Oh, ah, yeah, because I'm sharing the tab, sorry. Well, in the meantime, we do have the response from the chatbot. So let's go through that one. So you can see, no, obviously it cannot fit because the dimension or incompatible. So yeah, that was the demo for the chat interface. Now going back to what I was saying without changing my screen, the, this GitHub repository, which is linked in the slide and you're gonna have, you're gonna be able to access it. It's all open source as I was saying. You can give it a star, you can fork it to extend it and you can play with it locally. If you don't want to run it locally, you do have this page, GenII ecosystem, which is again listing all the initiatives we do have at New4j. And in example projects, we don't have just this project, obviously, but we do have this one here, which lists all the documentation, how to run it, etc cetera. But it also comes with the self-hosted version. So this link here is publicly available. So you can just open it and you can play and do the same thing I just did here with this link. Now, last thing I wanna show, and then I'm gonna give it back to you, Niels, is something we are working on, which is right now, as I was saying, we do have, whenever you load your data, we do process it, we do create the chunks. Based on that, we are going to create entities. What we are working on here on the development branch is to be able to come up with your taxonomy. So meaning that you will be able to come up with your taxonomy. So, meaning that you will be able to provide what node labels you want to create only and what relationship only. So, instead of letting the LLM and our basically strategy to come up with entities, you will be able to restrict that. So, you can either just come here and say I want to create a fake company for Nopal and you can either just come here and say, I want to create a fake company for an Apple, and you can create your own labels. Or if you click on use existing schema, sorry, here I'm not connected to Neo4j, so I'm just going to refresh. Here you go. Because you need to be connected to your database. If you click on use existing schema, it's going to pull the schema from your database. If I click on that, you're going to see it's getting all the labels from the database and all the relationship so you can come up with your own taxonomy um and that's pretty much it from me i'm going to hand over to you news now cool awesome yeah thanks a lot this is a really really cool demo and just from my side the big takeaways here is that this this demo i mean anyone can use this there's no code involved Yeah, thanks a lot. This is a really, really cool demo. And just from my side, the big takeaways here is that this demo, I mean, anyone can use this. There's no code involved. You don't even have to provide a model, the LLM in first model for you. You don't have to write ciphers. So my grandmother can use this in a way if she knows how to upload a PDF, which I think is pretty impressive for a demo like this. Of course, it can be customized. As Morgan said, you can provide a model and give it something to hold on, or custom queries, or agents, or this kind of stuff. So it can be customized. And I think what's also interesting is what you showed, where you combine these different PDFs and combine different data sources into one graph is what we do quite often, right? Creating a knowledge graph from different data sources and ask questions that are across data source, which is very cool. And then the last thing that I know we talked about at Salt Morgan two weeks ago about explainability, right? You give the sources for each question. That's something that a lot of our customers ask. They say, look, we think LLMs are fantastic, but we want to make sure that the information that comes back is actually true. So it's super valuable to provide the link, the PDF link. Exactly. Well, actually, I probably didn't show that as well in the demo, but so you saw when the chatbot was replying, it was giving the source of the pdf document but if the sources are from youtube video or the wikipedia sources that you loaded you're going to have the link you're going to even have the link to the wikipedia page or the youtube video that is referring to yeah cool i think we're almost out of time. I'm just going to do my very last wrap up here. And I think this is already a lot of stuff that Morgan has said. This demo is open source and it's actually built of open source components. If you're making your own Neo4j plus LLM solution, you can take this as a base or you can take the ingredients for this and build your own thing, whether that's just the back end or the front end or both. Right. So the website that Morgan showed, neo4j.com slash lab slash Gen-AI ecosystem lists all these components that you can use for your own solutions here. Right. So we showed you the LLM graph builder, which is the UI plus the back end. And under the hood, this is using stuff like back to research, the epoch-gen-AI procedures, langchain, the graph transformer. This is all sitting under the hood, hidden from the user. But all these ingredients that you could use to build your own solution here. And that's it for us. Thank you so much for joining. I think we have a couple minutes for some questions, maybe. Yeah, that would be cool. I have lots of questions. So maybe we can take some time and go through them. And I don't go by chronicle order, so sorry, but I'll pick a couple and then we see how it goes. But one that just came in that I think maybe is interesting from 804. Do you plan to add more sources? Like they are mentioning Excel files. Is there something that you think is in the future for the app? Yeah, so that's not something we have considered so far, but that's something we can obviously think about. So I can't make any premise here, but as I was saying, right, the cool thing is it's all open source. So if anyone to contribute or even like if you want to extend it as well, yeah, we can do that as well. Cool. Another question comes in from LinkedIn. They want to know if it's possible to see the cipher that's kind of like generated in the back. Do you think that is something you want to do or or not um yeah so i think we need a bit more information on uh which cipher is it for the um for the question uh that is being asked so because going back to um to the processes right that we have for the data loading part uh obviously, we don't have proper cipher generation because we're using different frameworks from Langchain. As I was saying, we do create the node for the document, then we split the document into chunks, and then all the process to extract the entities. So I think the question is more on the chatbot. So for the chatbot, what we do is we do ask the question from structured and unstructured sources. And maybe the question is from that aspect. And yeah, we do have the cipher to do the retrieval basically for the augmented generation retrieval. And we do have the cipher query which we provide based on the chunk and document that is basically best fit into the question. Cool. Jeremy asks, the references when you showed the chatbot's answers were basically all PDFs instead of something from Wikipedia. Is that because the PDFs have the same info and get higher matching? So like a tiered thing like PDF, is there some tiered or some kind of value towards which sources or was this accidental? Yeah, no, I think there's so all, so the PDF in terms of type, the PDF are going to have the type PDF. I think the question is more for the sources from Wikipedia and YouTube, which all show as type text. And yeah, the main reason is because what we do is from Wikipedia, obviously, we get the data as text, right? So we do flag the type as text because, well, basically, it's just a bunch of text that we get from Wikipedia. From YouTube, what we do is we do scan the transcript which in the end of the day is text text i think the question is do we order i mean because it cannot you cannot show all the older older sources all the time but it showed pdfs preferably is there like a weighted algorithm as well that says, okay, use PDFs before using Wikipedia? Or was this just an accident from the result list? Yeah, it was just from the result list, which is something I have to have a look at. Cool. Rob wants to know what's happening actually when you click the thumbs up thumbs down buttons. Okay so let me show you that I can share my screen again just quickly. So remember what I was saying is because it's all open source you can just grab it and you can extend well that's basically what I did here right you can see the one which is included right now doesn't have any feedback loop. So the one I showed you here, which is this one here, it's actually the same in terms of response and interactivity, you're gonna have the same. But what I did is I added those feedbacks. So it's kind of an extension, which is not part of the application here. It's something I did here just for the demo, same for the text to speech. So the temp hub is not doing of the application. Here it's something I did here just for the demo. Same for the text to speech. So the Thumb Hub is not doing anything for now. It's really just an icon here because I still need to think about how to make it work. The Thumb Down, so what I did is if you click on it, it's gonna show you a pop-up and I'm still working on that. But basically the idea is you can provide a text feedback. So for example, you are not happy with this text because for some reasons you think the name is wrong. You can just come up with your feedback saying, the name of the company is not GraphArc, but GraphOrAwesome, for example. graph but graph or awesome and if you click on save to the graph what is going to happen is it's going to save that feedback into the graph as a feedback node it's going to connect it to the chunk which is related to the question you were gonna you're giving a feedback to as you can see here feedback saved and we do generate I And we do generate, I mean, we do. So what I do is, in this extension, is I do generate the embeddings for that feedback as well. So that later on, if you ask a question, which is similar to this chunk or similar to this response, it's going to be able to retrieve the feedback you just provided. Yeah. Okay. That's cool. One question from Alonso. Can you use local models so you don't have to rely on sending something you don't want to send outside of your local private network to the internet, basically? Yeah, absolutely. So right now, for both the knowledge construction and the chatbot we do have the models we show you um so we do have open ai we do have chimney and if but um obviously you if you come up with your own for example if you're running on olama for instance um obviously you cannot take the application, run it locally, just click on the button, it's going to work. You're going to have to do the integration because we are not doing it. But in theory, you can just take it and make it work. Cool. An interesting question from Jacob. Because you showed this with the relationships and nodes and the labels, can you provide a schema to use as a guide or basically as a baseline before you actually generate the knowledge graph? Not right now, but it's basically come down to what I was showing on this development environment, which is this setting here on the top. So before you even load your data, you can go over here on the settings and you can come up with the schema or the taxonomy on which you want to create your knowledge graph you can create your own stuff so my own label or relationships or you can delete stuff or you can create it from from the graph using existing schema. Cool. There are lots of questions, but maybe another one here from Brian. Is there a limit on scale? Can you upload 100,000 documents? You can wait. Yeah, I would say it's about the weight and the limits you have on your API key. So when we load data, obviously, as I was saying, we do create the chunks, and then we create embeddings on the chunks, and then we create entities based on the model you're using. Now, if you're using, for instance, Gemini or OpenAI or anything like that, you're going to have the limits from OpenAI, and obviously you're going to have the cost and OpenAI. And obviously, you're going to have the cost and the price. Yeah, that makes sense. TikiBoy asks if you have different structured and unstructured data, so PDFs or like your buckets. There are three buckets you showed. Is there a difference in the process or is this just working seamlessly um so it's all working seamlessly i would say the only difference is going to be on the extraction of the data so as you saw we do have different sources we do have youtube wikipedia pdf and then google cloud storage in the end of the the the end goal is basically the same, is to have a document, a document object, and then split it into chunks of text, basically. So what's going to change is if you load data from your Google Cloud Storage, we do only support PDF files, so after we took all the PDF files, listed the PDF files, the process is going to be the same, because it's the same as if you drag and drop pdf files but for youtube and wikipedia as i was saying we get the text from the transcription or from the wikipedia pages yeah cool there are some more questions um but i don't think i don't know what else do you see Ollama yeah lots of discussions on Ollama so yeah I mean if you're interested in Ollama if that is something you want to do especially for local execution then as Morgan said you can happy to contribute I guess if you are Ollama expert then probably this is easy for you and then you can get it hooked up quickly and then you know maybe in you and then you can get it hooked up quickly and then maybe in the next edition you can see the Ollama working. Lots of love, especially for the project. Lots of hugs and high fives. So really, really good for what you did, what you built here. So that's really cool. Thanks. And I would say, so going back to this application, so we do have the self-hosted public version that you can use. So I would say if you really like the small demo here and you're interested in that, just grab the link, the self-hosted version, use it, just load whatever knowledge you want into it. We don't store anything on our end. So obviously everything is going to be stored on your Neo4j database and just play with it. And if you don't want to run it with our self-hosted, just grab the repository, run it locally. But yeah, just play with it and give us any feedback. Yeah, exactly. That's a good point if you if you discover something if you have uh something like like a bug or if you see something that you'd like to like to see added then the community is there and maybe you can suggest it or maybe you build it yourself and and add it to to the project so that would be cool and it's it's really cool to see that would be cool um and it's it's really cool to see um and uh great great stuff so thank you very much for for your work uh morgan and niels on on the application and for the presentation today for the cool demo for uh for the cool uh you know intro and and for the work you did and yeah here see the first fork already happened so So J Banks, cool. Good luck with that. Have fun and let us know what you make of it. So yeah, that's what we like to see. That's cool. Yeah, before I let you all go, I like to say next stream on 14th of May. So in two weeks time, it will be with David on vector search. So we go a little bit more into detail on how to work with vectors in Neo4j. So that's maybe an interesting session, a little bit like a, not necessarily follow up, but goes into the same direction as a little bit where we talked today. Next up, obviously, call for papers for nodes. So until 15th of June, you have time to submit your graph stories for notes 2024, which is happening later this year. So if you are interested in presenting, if you want to showcase something cool, like what we've seen today or something completely different, there's a broad range of topics we're going to looking for, not only Gen AI stuff, but also anything's on graph applications, graph visualizations, and more. So if there's something that's interesting to you and you'd like to show the world, then please do submit. We like to hear from you on notes. And if you are now wondering what to do next, then check out the community, either on the forum or on our Discord server. Check out the Graph Academy for learn lessons and courses and do get your free instance and play around with Neo4j and graphs. So yeah, with that, I let you go. Thank you very much again. Thank you, Morgan and Niels for the cool demo, for the cool application. I hope you all have uh have a have a good time with playing with it and uh and see what you can do with with the knowledge graph um application builder and yeah i hope to see you soon at another live stream um until then take care everybody and see you soon thank you bye-bye bye | Neo4j Live: Knowledge Graph Builder App | 3,862 | Neo4j | 20240503 | Let's turn PDF documents into dynamic knowledge graphs stored in Neo4j with an easy to use app!
This innovative application harnesses the formidable capabilities of OpenAI's GPT and Diffbot LLM to seamlessly extract nodes, relationships, and properties from PDF text content. Using the advanced Langchain framework, we transform this information into a structured knowledge graph, empowering you to visualize and explore your data like never before.
Learn how to effortlessly upload files from your local machine or S3 bucket and select the perfect LLM model to craft your knowledge graph.
Guests: Niels De Jong & Morgan Senechal, Neo4j
Github: https://github.com/neo4j-labs/llm-graph-builder
Neo4j GenAI: https://bit.ly/4eRHS6g
Neo4j GenAI Ecosystem: https://bit.ly/4cpAY6D
Neo4j AuraDB: https://dev.neo4j.com/discover-aura
Public App: https://llm-graph-builder.neo4jlabs.com/
0:00 Intro
8:30 Neo4j + LLMs
15:30 Knowledge Graph Builder
24:25 Demo
48:20 Summary
51:00 Q&A
1:02:35 Wrap-Up
#neo4j #graphdatabase #llm #knowledgegraph #langchain #openai #pdf #graphrag | 2024-09-03T08:20:45.836373 |
https://www.youtube.com/watch?v=LlNy5VmV290 | Today I will demonstrate how to use the Neo4j Knowledge Graph Builder to create a graph for RAC applications using unstructured information like documents from PDFs to YouTube transcripts to Wikipedia pages. We start by connecting to Neo4j. I have already created a Neo4j Aura instance for free. If you don't have one, there's a link that gets started here. So I can take my credentials file and to load this information and connect to our database, which is currently empty. And then I can start adding my documents. I'll start with a Gemini release video from YouTube. So we have some AI related content. I'll take the Google DeepMind page from Wikipedia and a PDF about alpha fold release. So we added all these documents and then we can pick an LLM to extract this information. And we can configure what to extract and how to extract this information from our text. So I already created a graph schema here with people, organizations, technologies, and concepts, and the number of relationships. There's other ways how you can configure the schema from your existing database, or by extracting this information from text that you can paste into the application. Okay, now we can click on generate graph, text that you can paste into the application. Okay, now we can click on generate graph and it will start extracting this information from our PDFs. So we see we have nine pages of PDF and one page of transcripts and Wikipedia. So we can already look at the first extracted data. So this is from the Gemini video. So we see what has been extracted. So the different Gemini models, Google AI, some concepts, Google products, and so on. And then the other two are also done, which allows us now to visualize all of them. So I can just select all of them and we can see the graph. So we see our entities, but then we can also extract and show the chunks that we extracted from the documents and that we extracted from the documents and that we connected the entities to. And you see as well that it cross connects actually entities across the chunks. And you can also see the documents here as well. So now we can also start asking questions of our data. So for instance, we can ask what did DeepMind work on? And it will use the information from the graph and from the answer here for us. And what's especially interesting is that we can actually see what information went into generating this answer by clicking this retrieval information button. And this shows us from the Wikipedia page, this section has been used and from the PDF document, the pages 3 and 7. You can also see which entities have been used to answer this question. We can even visualize them here and look more into it. And we can also see which text chunks have been used to answer this question. have been used to answer this question. If you want to go deeper, you can explore the graph of Neo4j Bloom to visualize, edit and navigate the graph as well. | Neo4j's LLM Knowledge Graph Builder - DEMO | 263 | Neo4j | 20240619 | In this demonstration of the LLM Knowledge Graph Builder, we show you how to automagically create a graph from your unstructured text and leverage it for Graph-powered Retrieval-Augmented Generation (GraphRAG).
-upload documents, YouTube videos, and Wikipedia pages.
-configure a graph schema
-extract the lexical and knowledge graph
-visualize the extracted graph
-ask questions and see the details that were used to generate the answers
Try it live: https://bit.ly/4c4HKyp
Learn more: https://bit.ly/3KMRrp6
#Neo4j #GRAPHRAG #LLMs #GenAI | 2024-09-03T08:22:25.339127 |
https://www.youtube.com/watch?v=Z42VVH9qngo | So as we were saying, all the projects, so everything, the frontend, the backend, the whole solution is open source. So you can see the GitHub repository here on the top. So what I did is basically just get the sources from those two Wikipedia sources. So Bangladesh and then Vslo and it submits. You can see now we created two of the documents. So one for Bangladesh and one for this law which are taking their sources from wikipedia there you go so you can see in less than what couple of minutes maybe we loaded all those pdf document youtube video and wikipedia sources into in your project so we do have quite a few nodes. What you can do now is click on Show Graph. And what I'm going to do is I'm going to remove the chunks here and you can see different information. So you can see obviously different documents, different chunks. We get all the different sources from the Wikipedia sources. So you can see all of that is basically coming from Wikipedia. When you have your knowledge graph into graph and not just vector database is because then you can link all the data you have in your knowledge, whether it's structured or unstructured into all your different sources. So you can see from my knowledge from Confluence being linked to some knowledge from Wikipedia. I'm gonna show you the same chatbot interface but in the bigger larger screen because then i can show you what you can do in terms of extensions as well we're using basically graph rag to create the graph structure the chunks uh based on the eminence and then we we're gonna provide a response from different sources from unstructured information and from structured information. You can see what we do is basically we list the sources from the question you ask. So for example here we do have the question who is GraphAcme? So we do, as I was saying, we do create the structured and unstructured information. So you can see it's giving a lot of different information, not just from one document, but from two documents in this case, with different information, which you can fact check. Sorry, but then you can see it's actually listing the sources from where it got his informations. Talking about the extension, you can see here a bunch of different icons here. So what I had is basically turn up and turn down in terms of providing feedback on a response. You can also generate the response if you are not happy with the response. And there is text-to-speech as well ecosystem, which is again listing all the initiatives we do have at Neo4j and in example projects. We don't have just this project obviously but we do have this one here which lists all the documentation, how to run it, etc. But it also comes with the self-hosted version. So this link here is publicly available. | Knowledge Graph Builder App | 180 | Neo4j | 20240506 | In this 3 minute video, Morgan Senechal demonstrates the new Knowledge Graph Builder App.
Full Video: https://youtube.com/live/NbyxWAC2TLc
Github: https://github.com/neo4j-labs/llm-graph-builder
Neo4j GenAI Ecosystem: https://bit.ly/4cpAY6D
#neo4j #graphdatabase #llm #knowledgegraph #langchain #openai #pdf #graphrag | 2024-09-03T08:23:07.656534 |
https://www.youtube.com/watch?v=ViHV16ly-qs | Hello, good morning, good evening, good afternoon, and welcome to Going Meta episode 25 today. Out of the darkness, I'm talking to you. I was going to say, Alex, you're an unusually dark background. Yeah, it's a little bit weird. I'm in this hotel room and they have like this corner space for the desk that is here. And there is no lamp except for this one here at the desk. And underneath me, or rather it's the rest of the room, and it seems like the lights are on. It's a bit weird. But I'm here in Stockholm this week for JFocus, which happens tomorrow and on Wednesday. So that's why a different location but yeah i'm i just realized earlier but i couldn't do anything about it so there is no i know don't worry we can see you very well and we can hear you very well so it's it's all good that's great it's great to have you all uh join i i saw a lot of people already before we actually uh you know actually went live, talked to a few people from Shared where they're from and what they watch from Amsterdam, from New Jersey, from North Carolina, from other locations, New Hampshire here. So it's great just to have you all here. Even for some of you, it must be a little bit early in the day. For some of you, a little bit later here in Europe, for sure, or even in Asia, it's very late. So if you're watching this live, as always, it's great to have you. It's great to have your comments, your feedback, your insights. And yes, Walter, it looks like Harry Potter Room Under the Stairs. I get the feeling. Now I know how Harry Potter Monster felt. It is almost as narrow. My bed is a little bit bigger, so at least I don't have to sleep here. It is exactly that room. Yeah, today's topic is LLMs for automated knowledge graph construction. So this continues a little bit our journey of LLMs, RAC, knowledge graph, a bundle of topics that we discussed since I think episode 21 or so. Not necessarily you have to watch them all in order or have to have watched everything before when you when you watch now but they kind of i think uh connect well with each other um and build up on each other a little bit i actually actually think that they kind of make make sense i mean it kind of uh they sort of build somehow on top of each other and and i mean of course if you're familiar with some of these topics of course you can jump straight on any of them but i think you know if you're on a learning journey and and how you know what the the value of the combination of knowledge graphs with lms is i think it makes sense to to to watch them in order and and you're right alex i mean today we are kind of stepping i mean staying in the in the you know topic of GenAI, but stepping a little bit out of RAG. I mean, we've been doing RAG for the last two or three sessions. So today, we're still going to be using LLMs, but this time to help in the construction of the knowledge graph. And in particular, we're going to be looking at how to do that from this time from structured data. So we've been seeing you know a lot of interest in in okay having structured data i can do entity extraction and populate knowledge graph what we're going to be looking at today is okay i have some structured data and that can come from a relational database from a collection of csv files so what do i do with these and and how do i build the graph out of that so that's that's going to be the topic of today. And we're going to see how LLMs do actually help. Yeah, sounds good. And I guess, correct me if I'm wrong, but I think that was just today's topic was a little bit influenced by a comment we received on YouTube. You're going to see that in the first two slides, because I'm going to tell you in a couple of minutes, and you're going to immediately recognize. That's right. I mean, there's a couple of reasons that brought us to run this session. And one of them is exactly people's comments and people's suggestions, which actually it's a great segue to say, please keep coming to us and keep telling us the things that are not clear, that you want us to go deeper in, and that will help us coming up with new episodes and new experiments to run. Exactly. Yeah, that's what I wanted to say as well. If something isn't clear, obviously, that's great. But if something else is on your mind, if you think, hey, wouldn't it be nice if Jesus were to discover this or were to to dive into this or that topic then let us know and we can we can see if that makes sense in one of upcoming episodes but if you don't have anything to suggest now then you obviously you can hopefully you interact with us during the the episode so you know you lots of people chatting already away so that's great to see uh keep that going so we can make it a conversation rather than a monologue even though we are two people exactly oh that's very interesting unable to share my screen. And that will be because I have obviously changed laptops. So Alex, you're going to have to cover me for a couple of minutes. Why? Because I'm going to have to change the settings and leave the meeting and rejoin. So you have to come up with a few jokes. I'll be back. rejoin. So you have to come up with a few jokes. I'll be back. All right. Good. See you in a minute. See you in a minute. All right. So good. I'm not really good at jokes. So I hope you helped me out here. Dear, dear watchers. Maybe somebody has a joke they can, they can type in and I can read off on your, maybe I should ask a chat GPT if he's got a graph joke, but usually they are super bad. But I can open that up just to keep going. Tell me a graph joke. Let's see what he says. Why did the graph go to therapy? Because it had too many unresolved complexes and couldn't find its own limits. Oh, well. I'm not sure. Dead jokes. Yeah, okay. Maybe I should ask, make it more like a dead joke. Okay, another one. Why don't graphs ever go to parties? Because they always have too many points and can't connect the lines. I'm not sure. I mean, no. I think we have to cut this out. Well, I mean, until Jesus is back, I can only reiterate that the previous episodes are all in our repository with the code, but also all the videos you can find in our landing page. Whoopsie, where have I gone? Which I'll share in chat. So you can look there and find all these previous episodes we did since episode one, basically, and this is updated constantly. So following through to today's episode. So if you are ever wondering why, where to look at and how to find that. Emil wants to hear another test. Okay, well, I mean, I guess another one. If you can bear the pain. Another one here. What do you call a graph that's been lying on the beach too long? A tangent? I don't think chat GPT understands jokes very well. Oh man, you're gonna kill me. I'm back but it's not showing the same... How do I help Santa? Damn it. How did we... help Santa damn it how did we maybe do yeah help me Windows user max user well I'm willing to take tips here because I'm what did you do did you do? Did you let the app... Presentation, presentation to... Yeah, share. If I share entire screen and I select the screen and I click share, it's still showing update system permissions. But which permission is that? Because I thought I had... permission is that because I thought I had. I think on the Mac you need to Yeah, you need to switch it on. And privacy, maybe. Here we go. So camera. But maybe... Look at the comments, oh my goodness, we're going to get fired. We are getting... Can we have our CEO helping us changing the settings so that we can share our bloody screen? Oh my gosh, that's embarrassing. Can you share a window instead? Does that work? Did you try? I can try, but I believe... yeah, that's a good point. Good. Let me. Last joke at least. Yeah. It's good. Yeah, that works. Did that work? Wow. There you go. Perfect. Well, good. Okay. Well, that's great because that's all going to work from a single browser window. So, well, you can see it, that's great. What if I go on full screen on slideshow? Is it still showing nicely? Yes. Yes. Okay. Good. Brilliant. Like we're saying, well, that's unexpected. Anyway, we're going to be focusing today, staying within GenAI and using LLMs. But this time, instead of RAG is to help us building a knowledge graph. Right. So and in particular from structured data. So I said that it's kind of a, you know, we're kind of revisiting what we did in in in going meta episode five. That's many, many months ago. But what we did back then is we had some structured data in the form of CSV files, and we have a kind of a target model defined by an ontology, right? So we know the kind of graph that we want to build, and we define some mappings, and then some kind of general purpose code, some Python code basically would read that target model, would read the mappings, and generate the data ingestion pipeline to populate the graph. So that's what we did back then. Right. There were some questions, though, and that's probably where you were going, Alex. But at the origin of this was, of course, Episode five, but also something that Dan posted, you know, a few a few weeks now ago. And, and then if you if you're not following him, you should be, because he's great and he's doing amazing stuff also in the Gen AI space. And one of the things that he talked about, and this was, like I say, two, three weeks ago, was that he was trying to use LLMs to kind of help in the construction of a knowledge graph, right? The creation of the graph model, he focused on that. And I'm going to show in a minute. I mean, you have the link there to his LinkedIn profile because he, like I say, he posts super interesting stuff, but also his GitHub account where you can see his experiments. And actually, I probably have it here. And what he did, basically, you have the notebook, you can you can play yourselves. And, and he went and looked at the structure of a CSV file, he's using some thing insurance, public data, government data around insurance. And he analyzes like the headers, he even does some profiling of the minimum, maximum values. And using that, he kind of interacts with an LLM to come up with a graph model. So that's one approach. What I'm going to do today is slightly different, but it's along the same lines. So it's also answering to these questions that you were mentioning before, Alex, where some people actually at the end of this episode five, where we did what I described a minute ago, it was something like, hey, give me best practices around modeling, because you start with a target model. What if I don't know how to build that model? How do I create this kind of declarative schema definition of the model of the graph that I want to build? So these two things are the kind of the reasons that made me explore these, these idea that we're that we're working on today. So and I think we got an last or previous episode, we also got a comment on, hey, how can I can I use LLMs to to, to date to model my my knowledge graph. So it's, it's not exactly the same topic we talked today, but it goes into a, I would say it goes in a similar direction. Kind of, you know, it's kind of, you know, similar. And these diagrams shows the process. I mean, if you follow the numbers, and I try to pick, you know, a common or popular source of structured data, CSV data mostly, which is Kaggle. And you will see that most of the pages, and we're going to see that in a minute, offer access, of course, to the data, some very rich description of the structure of the data, some metadata describing which are the key features in the data set, some description of the data set itself, etc. And that can be extracted in a structured fashion. That was a nice surprise. And I'm going to talk more about that. So you can extract both the metadata and the data itself, right in the form of CSV. So I'm focusing on CSV data here. So what we're going to be looking at is, okay, I'm going to use LLM, and in this case, we're going to use again, OpenAI to generate a data ingestion pipeline, which is basically a collection of CSV scripts that will consume the data, but this process is going to be fully automated. So I'm going to, at the high level, although I'll go into the details in the session today, I'm going to go and read the metadata description, right? The formal description of what the structure of my of my data set. I'm going to pass that in a consumable format to the to the LLM with some some prompt engineering there. And I'm going to ask the LLM to generate a model, a target model. So I'm not going to pass the model, I'm going to ask the LLM, hey there's some latent entities because the thing is in these CSV files sometimes there's, you might find them very highly normalized, where you have a lot of help already, let's say you have it broken down into files in the form of a kind of a relational database where you would have a file for customers or file for products or file for orders. You kind of have a kind of the model there, right, if you read the files. But sometimes you have a large CSV file completely denormalized and all you have is header descriptions. And you have to go there and say, hey, actually, these three columns are one entity and these three columns are another entity and this is another entity and so on and so forth. So this is something that you can do if you have some experience, but this is something that LLMs can help us with. And that's exactly what we're going to be experimenting. We're going to pass this metadata description. The LLM is going to come up with the model and the mapping, and then we're going to fully automate the process of, okay, now I'm going to read the CSV and I'm going to populate the graph. So that's what we're going to do. So step one is extraction of the metadata, generation of the model and the mapping. And third, importing the actual data. Make sense? Yep. So I'm not giving any kind of guidance. The guidance will be in the metadata about the data set and the LLM will do the rest. So the first surprise that I was mentioning, I was not aware of this. It's called a Croissant format, right? That's something I haven't seen before, but if you go to any dataset in Kaggle, you will see that when you download, you can download the data itself, but there's also the option of exporting the metadata as Croissant. I mean, a croissant is a JSON format. You see an excerpt there on the right that actually it's a very rich and formal description of what the dataset contains. And the dataset will have a collection of record sets, and the records will have fields. And the fields will have a name, they will have some description, et cetera,, etc. It's a very rich description. And interestingly, it's JSON, actually JSON-LD, which as we know, and those of you who have been following, JSON-LD is a graph format. So we can easily import it. And that's what we're going to do. Because once we have that description, we're going to be able to use that to create a prompt that the LLM will be able to consume. So first, a nice surprise is this cross-format. I put the GitHub repo there because it's being developed by the ML Commons body. And that's what it looks like. We'll see more of it in a minute. Now, once we have that, we're going to pass that, I mean, not exactly that, but in a nicer form of that to the LLM, so that it can generate a description of the model that ultimately is going to be our graph model, right? But we're going to have it in a form that we can consume in a machine-readable format. And it will be basically a collection of entities, which is in Neo4j terms, the labels, the types of nodes that we're going to have. And for each of the entities, the entity types and the list of attributes or properties. And then we're going to map them to the elements in the file. So this is all going to be worked out by the LLM. So the LLM is going to go over all the features in the dataset and will determine which latent or kind of hidden entities are there. And that's what we're going to use. When we have that, we're going to write some Python like we saw that's going to do the whole work of generating the graph model. Very simple. So in terms of interaction with the OpenAI API, it's going to be, we're going to be generating a prompt that looks pretty much like what you see there. So I'm passing the description of the dataset, the name, and the features in a readable format, and I'm going to use the completions API for it to generate a collection of entities and relationships, which is our graph model. So we'll get to see these in detail. Does that make sense, Alex? Any questions? Do we have anything there that we need to make clear before we actually get to see it in action? No, I think that that is pretty clear. I'm excited to see it in action. So I think we can go there. Perfect. So I assume you still can see all the tabs in my notebook. Can see all the tabs. Browser. I'm going to connect. I'm going to work in, as usual, in a notebook, in a Python notebook. And while we talk, I'm going to be- I think we have to zoom in a little bit in this view. Yeah, let's do that. Is it better? A little more, maybe. Okay, let's do that. I can do that. Yeah, I think that's good. So the first dataset that we're going to use is this New York housing market. This is a pretty simple dataset, but as you can see, we have the human readable version of it. This is in Kaggle, and it's a single CSV file. I'm working with this type of denormalized structures. You could do with others. But for this example, I'm going to go with this. And we have a description of all the columns there. So you see that there is a description about the property, there's a description about the address, there's a description about the property. There's a description about the address. There's a description about, it looks like there's a broker involved in this housing market. So these are, it looks like there could be several entities in these dataset, but we'll let the LLM figure that out. So this is like I say, the human readable version of the dataset. But if you go up here, you can export the metadata as a croissant and and this is this uh rich format that i've shown a minute ago and i'm going to show again so that you can uh see it probably by now we've already finished doing the um install i'm going to set up my connections but basically we have the the the source here so the cross and i've copied it to my uh just for this example and i'm going to show it to you here so this is what it looks like so like i said before there's some header uh some information about language and some high level information but then a... This is also a little bit small. Also smaller. Let me make it bigger. Apologies about that. So there you go. We have a data set that contains a reg set, which is a collection of fields. So we have for each one of the fields that we saw in the more human readable, we see the broker title, which is the title of the broker, the type of the house, the price of the house, the beds, the number of of bedrooms so we see that they are all listed there and uh and uh they come with the description of the name of the column and some additional information so this like i say is uh is json ld format and as you know json ld can be imported and um and i'm going to use today um something that we have not talked about in quite a while. I'm working with Aura, right? So I have my Aura instance up and running. It's an empty database at the moment. So if I, let me make sure, just to make sure that I'm on the right instance. It's an empty database. Perfect. If you remember in order, we cannot use extensions at the moment, and that prevents us from using Neo semantics, which would be our default option for importing RDF data in JSON-LD is one form of that. Now what we're going to use is um uh rdf lib neo4j which is kind of uh it's a project that we talked about in one of the episodes quite a while ago but that has a new a new version something that our professional services team has been working on and that makes it really really easy to work with uh rdf data writing to Aura because it works entirely on the client side. I mean, it just works using the Python driver and interacts with the database without having to install any extension. So the way it works, without going into too much detail, I'm going to click for it to run. I mean, I think I've set my database. I'm going to set my files. I have a couple of extra examples. If we have time, we'll go through a couple of them. And the first thing I'm going to do is, okay, I'm going to have a look at this croissant description of my dataset. So if I run this code, what this is going to do, what I'm doing essentially is I'm importing the RDF Lit Neo4j, and I'm creating a store configuration. I have to go quickly over that, otherwise we're going to not have time to see the whole thing. But we'll get back to it. There's an episode and maybe Alex, you can find it, but it's one of the early ones where we talked about how to import RDF data into Aura. But basically we define the the some of the the kind of the the the way in which one to the rdf to be imported and we want to basically ignore the namespaces we want to batch the input and we pass the configuration the the authentication configuration which is my my credentials that i have locally. Once I have it, what I'm doing, basically, it's I parse my Croissant file. And by parsing it, it automatically gets persisted in my Aura instance. So we've imported 179 triples, which is roughly what we had in that data set. You will notice that there is a kind of a temporary hack. I've commented it here. And that's because at the moment, the current version of RDF Lit Neo4j does not support JSON-LD. So what I have to do is I have to import it and then serialize it as Turtle to be able to import it. This will not be the case in the new release. You will check, if you check in the issues, it's already been committed to the source, to the code base. And the moment the team decides to do a new release it will be out there and we can just get rid of these intermediate steps but anyway we have now the the i put the link in the in the comments and in the video description that's it was episode number 12 where we talked about importing rdf data into aura and it's it's one of the labs projects so so now we've imported the the this description the the cross and description of our data set so if i if i um copy this query it's going to make it possible for me to explore that in my aura instance so there you go let me um that makes it a bit more readable so what do we have here so we have a data set which is this orange node i'm going to have to probably make it bigger uh yeah thank you actually take all the state there you go so we have the the i'm gonna make it uh maybe a bit bigger so that's um damn it the the problem with the browser when damn it, the problem with the browser when, is that when we have multiple labels, it gets a bit confused. But anyway, so we have a node representing the dataset, right? If I click on it, you see the, oops, what's it not? There you go. So you see all the attributes to the right. Let me make that a bit bigger. And so we have the description, the name, all the information that that we had in human readable form, we have it already imported in the graph. And the direct the data set has in this case, a single again, I'm going to make it bigger, a single record set, I'm going to reorganize that to make it a bit more understandable. But what we have is that um this uh record set is a single one again it's described in this collection of attributes and then we have a number of fields and for the fields i'm going to show the damn it that's that's the problem i mean it just uh because it has labels it just goes a bit crazy the browser but let me try it again and now it's ignoring me anyway well we'll have to deal with that so the visualization is not going to look great but basically these are these are the the the the fields you see that there's a field connection and the fields fields, the name, and the data types. So they're connected to the data types. These are two fields that happen to be integers. You can see here, two additional fields that happen to be floats and then a bunch of them that are strings texts. You see that for each of the fields you have the name and so on and so forth. So we have it loaded in the graph. So we have a rich description of the data that we're going to be importing. Good. Now we have that and we can use that to generate a textual description to the LLM. And what we're going to do is we're going to pass a more readable version of it. And for that, we're going to be running this query. Let me show you what it does. It basically does the same, but it returns it in a tabular form so it tells me the data set name that's the data set description and then for each of the features each of the columns in the file I give uh the name and the data type that's all and it is with all the columns in my data set so I'm going to use that uh and I'm going to run this query now so that you can see what it does. So it basically runs this query and we've stored in this variable the name of the dataset, the description of the dataset, and the features. All this to create a prompt. And this is the one that I showed before. So I'm telling you're a data modeling expert, that you're capable to create an entity relationship model, which in this case is exactly what we need, collection of entities and collection of relationships. And I'm passing the three elements that I've just extracted from the graph. If I run this, you're going to see what it looks like. So there you go. So it just passes the name of the data set, a description, not very useful in this case, but sometimes they're richer, and then a collection of features, which is a list, a collection of pairs, feature name, data type, feature name, data type. That's what I'm going to pass. This is the prompt that I'm going to be passing to the completions API. If I run this, it's going to take a few seconds. But basically what we're asking the LLM is, yes, go explore this collection of features that are flattened in a single file and determine if there are recognizable entities. Basically, can you come up with a graph model for me? I don't wanna do it. I want you to give me some guidance and give me some assistance in doing that. So it's done. And now we can print out the results. So we can see that it's come up with a plausible collection of entities. For example, there's a notion of property. A property has a type, a price, a number of bathrooms, square feet, and so on and so forth. There's also the address. So it's decided to split into the entities, which kind of makes sense right because they consider uh two different entities and there's the notion of a broker there could be different properties at the same address exactly so so we have three different types of entities and then it's identified three relationships that connect these uh entities so there's a connection has address between the property and the address and the has broker between the property and the broker. So pretty nice, nice job. Right. So I think it's made a good interpretation of the of the of the data set of the description. And we're going to go with that. Right. All we need to do is. OK, well, that's all the information I need because I have the name of my of the fields or the features in my data set, which is this map two. And this is the name in my in the property in my mode. And that's all. Like I say, that's that's all I need. So the next next part, I'm going to run these. I might go in detail, but basically, this is where I reuse the code from Episode five. This basically is generic code that is going to explore structure and generate cipher. So it's going to parse that and it's going to create both the constraints, but also the the data input the cipher input, which cipher input, which basically are load CSV instructions that can out of this description. I'm going to show the result and maybe it's just a bunch of lines. So it's not super complex. And we're going to see what it looks like. So this is the definition of the function. I'm going to actually call it on the output of the LLM. I have to make sure, and here there's a caveat, because there are mechanisms to make sure that the output of the LLM matches a particular schema, right? So sometimes it's generative AI, so it can be creative and it can come up, it can name properties in different ways. For example, sometimes it's name, it will call it entity name or it will call it relationship name, or instead of mapped to, it would call it map to feature. So there's ways of controlling that and there's function calling, there's Pythonic, there's a number of ways of doing it, i didn't have time to prepare it and i would it would have made a super long session so what i've done is i've made my code flexible enough to so that i can tune it depending on on but it looks like the names are pretty much the same that i was expecting so there's a from there's a two there's a type mapped two his the same and this name for both names. So it looks like it's the, you know, the expected type of structure. But in case there was some variation that would have helped me, you know, react to it. OK. Essentially, what I was saying, what this does, if I run it on the output of the LLM, it has produced the following output. Look at that. so it's generated because there are three entities we're going to be creating three constraints property address and broker so this is valid cipher this is the creation of the constraints and at the same time it's created uh um a loader basically it was going to load from this file which is the one that we specified at the top and we're going to be uh unwinding because we're going to read the file pass it stream it to the to this code and read record by record which is what the unwind does and create entities i mean create nodes of type property with the properties as defined and you see that it's going to be reading them from the corresponding column in our dataset because we had it in the mapping that the LLM has generated. So very similar, a bit simplified from what we did in episode five, but that's pretty much all we need, right? So I have all the cipher that I need to ingest the model, to ingest the data and create the model that the LLM has come up with. So, and the same for the relationships and so on and so forth. So all I have to do is actually run this code. And that's what we do with this, in this function. Again, you can have a look at it, but basically all it does is read the, I'm gonna read the CSV file and I'm going to batch the file and send it in chunks to the graph so that it's uh persisted in neo4j so and the the code is available after that so we put it again in the right now but it's going to be no but you know soon soon yeah absolutely i'm going to go in you know in a little bit of detail but i wanted to kick it off just to see if it throw any any errors or anything unexpected. There you go. So it's already, like I said, it's basically using pandas to read the CSV file. It's calling the insert data function, which is this one that we created here, that chunks the file in batches and sends it to Neo4j by running, you know, execute the cipher that we've just created. It takes the cipher and runs it and imports the data. There you go. It's imported the properties, the addresses, the brokers, it's loading the relationships and it looks like it's done. We can now click here and explore. We have as expected, let's see. For example, we have the has address, know, let's see, for example, we have the has address. Let's make this bigger. But, you know, no surprises. So we have properties that we can change slightly. Maybe we can refer to a property by, I don't know, maybe we can put the type. And the properties are linked to addresses. And we might even find things like properties. There are multiple properties in the same address. There can be a has address. Let's look at that. There you go. It's done a good job. It's transformed our original flat file CSV file that contained what you can explore it here, you can transform this flat file into a graph model and load it. And we had also if I expand, we had also the broker remember. So I mean, it's worth saying that this is a fully automated process. So I've, you know, I kind of eyeballed the output and this might not be a perfect model, but it's a great starting point to say, hey, I want to see my data as a graph and I want to come up with a plausible model, a reasonably okay model. So that's what we've done. Does that make sense? Because you might say, I suppose the question might be, well, this is a pretty simple model. And yeah, I agree. I mean, this is, as usual, we try to kind of come up with kind of didactic examples. But we're going to, just to show that it's a generalizable process, I thought, well, let's give it a try with another data set. Because it's exactly the same code, and it's entirely data driven so what i'm going to do is i'm going to empty the database so if i empty that shouldn't take too long it's just a few thousand nodes there you go so we're starting again and we can um actually because we created a bunch of constraints, I need to, I mean, not strictly needed, but it would be cleaner to delete them. So I have this magic piece of APOC that clears the schema. There you go. And by doing that, I get rid of all the all the schema elements and now we just stay with another relationship that was a nice trick so that i learned also this week but like i was saying why don't we go to another one and i have to you have to show that again oh yes sure sorry that was that was a quick one but uh it's um and uh let me show that apoc schema assert so you can actually that that helps you essentially i can um if i'll link to a link to the to the docs yeah it helps you defining it kind of in in one go a collection of constraints without having to call them one by one. And in the same way, you can remove it. So it's pretty convenient. So yeah, sorry. That is convenient. But I was saying that, okay, let's look at another data set. And I found this supply chain that I thought it might be a bit richer. I mean, in this case, it contains 33 columns. So maybe there's potential for additional entities there and we're going to do exactly the same process so once we've cleared the database i'm going to start again from my notebook and you know i'm running it in blocks here so that we can we can explore what's going on but this can be can be run in a single go so what I'm going to do here is okay I'm going to comment they said which is the housing market and we're going to go with the third one the second one was about interesting touristic destinations in India I'll skip that one in the interest of time and And we'll go with the supply chain example. So there you go. Another pair of metadata file plus data set, which are... And the supply chain data set is that also what was also... Okay, here you go. Okay. Do you have it or shall I paste it on the chat? I think I'll find it. But if you have it, or shall I paste it on the chat? I think I'll find it. But if you have it quickly, yeah. OK, perfect. Yeah, thank you. Then I'll put it in there. So yeah, like I said, the usual process. So we start by, OK, we're going to import the croissant file. So the croissant file, similar to the one we saw before. I mean, you can download it from here and we have it in GitHub so you can import it directly. Once we have it, we are immediately ready to explore it. Same structure that we had before. So the queries that we run should work again. So that was it. So we can return the sub graph. we'll have the usual problem with the visualization let's give it a try let's try it yeah let's oh yeah it did the coloring but it didn't do the the the naming but anyway so we have the data set at least the colors that's helpful let's make it bigger so we have the the data set the data set contains uh the record set and then there's a bunch of properties grouped by data types so and again you know it's not super readable but these should be the 33 columns that were listed there these are the floats these are the dates as you can see these are the booleans these are the integers and for each one of them we have the the name of the price pack price we have the line item insurance usd so this unit price so there's lots of columns completely flattened and like we did before we're going to present that in a way that is going to make it easy for us to, I think that was it, to pass it to the LLM. So this is the dataset name, the dataset description, and the collection of features with their data types. That's what we're doing, and that's what we're going to be doing again in our code. So here we go. So we extract these elements, and we generate the prompt. So the prompt has exactly the same structure, except that now it's about, it's not the house market, it's supply chain. There's sometimes this is pretty poor. I mean, it's kind of okay, but sometimes it's a much richer description depending on the data set. And I let you play with Kaggle and your preferred ones. But what's important is we have the collection of features with the data types, right? So we have the prompt. We send it to the LLM. That's where it's going to take a few seconds. That's where it's going to take a few seconds. And here is where it's exploring the 33 features, the 33 columns in our CSV, and coming up with the model by uncovering or surfacing the latent kind of entities and relationships that can be extracted from it. So the moment this is done, we're gonna be able to... I see we're having loads of questions about the sharing the notebook. We will, of course, as soon as the session is over, you will find it in the GitHub repo for going meta. This is the phase that takes takes a bit longer still should only be a few a few seconds and um let's any any questions that we can answer while this thing is going it takes a bit longer today um there was a question on LinkedIn. Yeah. I'm reading, is this RAG still? This is not RAG. I mean, RAG is some. Let me actually, well, oh, it's done now. But it's a very, very, very important question. And let me make sure that I get back to this one. Alex, I'm going to complete the example for.. And then we'll get back to that. This is not RAG. This is previous to RAG. This is the construction of the knowledge graph. So we run the generation of the model. If we explore it, there you go. So it's a bit more, it's a bit longer, more entities, but it looks at the structure of the same entities with the name. It says that there's a notion of a country with a country name. There's a shipment with a scheduled delivery date and so on and so forth. So a collection of entities. There's a notion of a product. There's a notion of an order. There's a vendor. And then there will be, there's a brand. And, well, I can keep, this is a much richer one. That's very, very interesting. I'm happy that we chose this one. And then, oh, it didn't find any relationships. That's annoying. No, that's not good. Bad. Why, why, why? Oh, why? Let's run it one more time. Because this is, I mean, if it doesn't work, that's the unpredictable part. I mean, it doesn't make a lot of sense that he hasn't found it. But let's give it another try. Let's try it again. And I'll take a minute to answer what rag is. And I'm going to go to the episode 24. This is rag. So rag is once we have, I mean RAG is, if you want an architecture, a pattern, yeah, kind of a integration pattern somehow, where we're building an application that interacts with an LLM, right? So we are asking questions from an LLM, and that can be to provide a kind of a conversational interface, or it can be to do any kind of customization or generation by the LLM. Now, if we build an application that just interacts with the LLM, we rely entirely on the LLM's capabilities to generate, to reason, but also on the knowledge about the world that the LLM has, which is sometimes incomplete, as we've mentioned several times, because LLMs are trained on public data and they will not be aware of your enterprise business data or your personal data. So we have to somehow inject that knowledge, these facts, these grounding facts into the process. And that's what happens in this intermediate step. And that's where we augment the generation. That's why it's called retrieval augmented generation. I like to call it augmented generation, but it's based on retrieving data from an external source, a source that's curated, that's controlled. And that in our case is a knowledge graph. So what we do is based on the task that we want to solve, that we want the LLM to work on, we're gonna go out to an external source. And there's a number of ways to do that and we've touched that on episodes 22, 23, 24. We retrieve all the potentially relevant information from these external source, our knowledge graph, and then we pass it in the context to the LLM and then the LLM will return an answer but grounded and based on these facts that I've passed in what's called the context. And therefore, I'm injecting this external knowledge, controlled, curated, trustable knowledge in the process. That is RAC. What we're doing now is we're constructing a knowledge graph out of, in this case, structured data. And this is kind of a different process. Let's see if this has finished. I hope that clarifies things. Let's see if that's returned now some relationships. If not, we'll have to unfortunately give up, but... Oh, there you go. We have, I know this is, oh, relationships we have. Yeah, perfect. Now it came up with some. So that's good. So I don't know. It's, you know, like you see, it's a little bit. Anyway, so we have some entities that have been extracted. And now we're going like we did before. I don't need to recreate this function because it's it's exactly the same and it's using the same schema elements so all I need to do is rerun call this function but this time oh damn it something has happened here of course that's when type, key arrow type, why there's not a type. Let's try to add the term. So we're not having, our name is not type. Oh, data type, there we go. No, is that what we're looking at? Yeah, I think that's it. I think it's data type. Data type, so that's, there you go. That's the reason why i did that um is this uh no that's not what i'm looking at map two relationship name is the tag no that's um relationship name is name so that's the thing the relationship name is name. So that's the thing. The relationship name is called name. I believe that's the thing. So maybe we can give it another try. Perfect. Now it looks better. And if we, so again, what I've done there is I've had to deal manually with the fact that I'm not enforcing the structure returned by the LLM. Like I said before, there's mechanisms to make sure that this returns the right JSON structure, but I didn't have time. I didn't have something that would fit in this session. But there's mechanisms to guarantee that because I did not enforce it. I have to kind of read what the output of this looks like. And it's using relationships name to from to make sure that my function uh consumes that output so that's that's the only thing that we've done so there you go so we have a a new set of uh of cipher instructions that will populate the graph and um if i uh if we run this again If we run this again, this will hopefully populate the graph and now it's importing countries, it's importing shipments. These are the entities that have been extracted from this supply chain data set. So this, as we saw before, is a flat data set with 33 columns and it contains loads of information. So what we've asked the LLM to do is to extract a model out of it, a graph model. And that's what it's done. And it should be done shortly. It's finishing now, populating the relationships. And the moment it's done, we should be able to have a look at the, let's wait till it's done. So it's item order. Good. Finished. So we can now, okay, we can remove this because this is the, this is the, because we still have in the same graph, this is the metadata description. Remember that we have the data set that contains record set with fields, et cetera. But the actual model from the import looks something like this. So we have a shipment that contains items that have,'s used in orders and and shipments you know are associated to countries so that's the model that it came up with and we can now explore it in the graph right so we can see that we have orders and if we expand orders will be linked to items an item will be part of multiple orders will also be part of multiple shipments the shipments will be associated with countries i can do the usual to make it look a bit nicer but there you go so that's uh that's where uh i probably stop and look at some questions but we've come up with a decent and reasonable graph model out of entirely unstructured and flattened CSV data and all through a fully automated process. So I hope that kind of made sense. I know in some parts I went a bit too quick, or maybe I didn't go into a lot of detail in particular the the function that generates the cipher out of the mapping but you can I mean like I say it's just a bunch of lines it's pretty straightforward it iterates over the the strong return by the llm and generates cipher out of it so you can you can explore it once I share these and that should be uh like I said relatively relatively straightforward. But yeah, let's take a pause here, Alex, and look at some questions. Alex Sarros- I think it is remarkable in a sense also that we used two datasets that we haven't really looked at a lot in detail before and we kind of like still can work out what they mean and how it works and how it fits together. So I think that's very interesting and very nice to do and to work with. Probably just one comment before you go ahead with the questions. I mean, in this case, and again, it was my personal curiosity. I found that there's this kind of structured description of the metadata of the file, right? This kind of JSON definition in cross and forward. But, you know, again, replace that with whatever is your preference. I mean, sometimes you may not have that description. And what you have to do is what Dan did, right? You have to maybe parse the CSV file, extract the headers, maybe do a bit of profiling. You have to try to extract some of the metadata. But the moment you have a description that you can pass to the LLM, I mean, the LLM, you know, the more clues you give it, the better model it will come up with. I mean, if the, you know, if the columns come with a description, then it will probably come up with a better model. So that is available in the case of Kaggle, and I invite you all to go and check your favorite data sets, and it will generate a model out of them. But if you don't have that description, you can generate it in other way. Yeah. And with that, you already answered a question from Stefan who wanted to know if this only works with structured data, but, you know, it does work with unstructured data as well, but you probably have to do a little bit more prep work and a little bit more, I guess, train LLM so that the LLM knows what's going on. Yeah. I mean, the thing is with unstructured data, the process is slightly different because there, I guess train LLM so the JLM knows what's going on. Yeah. I mean, the thing is with unstructured data, the process is slightly different because there what you're doing here, what we're doing is, you know, I know what the kind of have understandable or well-defined descriptions for my columns. And I'm going to come up with a model. Now, if you have unstructured data as a paragraph or a document, the process is called entity extraction. And I think we've talked about that in other episodes, right? And there it's more of a I'm going to extract maybe entities of different types, but it's a substantially different approach, I would say. Yeah, good point. Could you summarize what is the advantage of doing this with an LLM? Well, it did it for us, basically. That's not doing it means I have to be an expert. I mean, I'm not saying that what we did is rocket science, right? With some training and with just looking at our great documentation and materials, you will get best practices and guidance on how to do graph modeling. But sometimes for people, it can be a bit of a blocker. I mean, I'm used to work in Excel, but I cannot determine which things can be entities and how should I come up with a plausible graph model. Well, let's ask the LLM to do it for us. And that's the advantage, that it's done it for us without us having to know anything about graph modeling. Right? So that's again, take this with a pinch of salt. This is a fully generated model and this will probably require iterations, right? It might not come up with the perfect model, but it will be definitely a starting point to see your data, your tabular data in the graph without having to basically to write a single line of code. Yeah, exactly. I think that's a pretty good advantage. And especially, like we said in the beginning, this is not, I mean, you can, this is an idea and a concept and a proof of concept almost that it can be done and that you that you get quickly to a first result if you then decide on exploring this data set further and if you want to then incorporate this and work with it as a graph then you probably should step back still and and have a little think about is this actually right is is maybe other mistakes in there is it is it making complete logical sense because you know LLM is not 100% right. So they might hallucinate some stuff. They might come up with some connections that are not really the right ones and stuff like that. But I think as a general starting point to something where you might have a little bit, you are at a loss of what's actually in my data and you just have a huge table and you don't really know what to do with that. I think this gives you a great view of the, hey, how do I get from there to a graph in relatively no time and relatively no writing code and then exploring that data. Absolutely. Yeah. of questions will go into this same vein of of of why why using neo4j or by using a native graph database for a knowledge graph so a couple of people asking here is how is this better than onto text rd fox cambridge semantics label property graph are still not semantic knowledge graphs or another comment here i'm a long time Neo4j user. There are other options for data store like Vector Store, Azure, Cosmos DB and so on that work as a backend for a RAC application. What are the key justifications for having a native graph database or a pure knowledge graph as the backend data store? Wow. Many of them. Long, long question with. Long, long question with potentially long, long answer. I mean, I'll take the one from Joe on how it is better than Ontotex, RDF, Focus Cambridge, Semantics, and blah, blah, blah. I mean, we've talked about this many, many times, so I would refer you to earlier episodes when we talked about the difference between RDF and property graphs. I mean, it's, I mean, with different types of stores. And this, I would say the quick answer is we work at different levels of abstractions. Triple stores break everything down into statements, into triples. And from a storage point of view, they are a table, a table with three columns. That's what a triple store is. I'm simplifying. Of course, it will have indexes etc etc but basically you work at the triple level at the statement level in the property graph we work at a different level of abstraction which we believe it's kind of nicer it's more developer friendly more more data person friendly because we work at the nodes and relationship level so for our node is a is a rich structure that uh that that describes for example example, Alex. We can say that Alex and Jesus are friends. So there's two nodes connected with a friendship relationship, right? If you want to represent that as RDF, you have to decompose it. Basically, every node will be a sequence of triples that you can put on a triple store, and the relationship will be more triples, et cetera. So two different ways of looking at data. I mean, different levels of abstractions. There's lots of advantages in the property graphs. We've talked about that. We put them side by side in many episodes. So I totally refer you to look at that. And I think that answers the question. The other one, I think you mentioned, I don't know if it's there, the vector databases. I don't think, you know, vectors play a key role in this episode, but Neo4j is a graph database with, you know, vector search enabled graph database. So we combine the two ways of exploring data storing and doing semantic search, graph semantic search and vector search. Again, we cover that in episodes, I believe, 22, 23, 24. So please go back to those because there's a high level of detail. And like I say, it was not part of today's episode. Okay, cool. Maybe to finish today, one more question here, also all coming in via LinkedIn. Maybe going a little bit more into what you said before, is if you don't have a cross-honed metadata text file, if you have a, here's somebody asking, hey, can we do it without tabular data, but for example, PDF text data or something? I think we showed it in a way that makes makes sense it was easy to showcase but you you know um you can use more or less any other uh data set you have or data data format or however the data is structured it will take like you said it will take a little bit more time to to prep I guess but you know technically this should work yeah just more like remember that ultimately what we do I mean we have this cross on former which is super rich but ultimately what we're doing remember is that I mean we're passing I mean the key thing is is this this portion here I hope it's readable so we're going to call the completions API. We're going to call the LLM and ask it, hey, build me a graph model out of this flat data. And what I'm passing is this prompt. So how you come up with this is entirely your choice. So I'm passing, this is my data set. This is what it contains. And this is the collection of features or columns, right? This is what it contains. If you is the collection of features or columns, right? This is what it contains. If you can come up with these in a different way without using the croissant, and probably you can just, I mean, in pandas, if you use Python, you know, there's loads of functions to, you know, to read a CSV file and return the headers. You can even manually add annotations to add additional descriptions, but ultimately you have to, excuse me, headers. You can even manually add annotations to add additional descriptions. But ultimately, you have to pass these features in your dataset to the LLM. And I choose this format. How do you come up with this format is entirely your choice. If you have the cross-hand, great. If you don't, there's many other ways. Yeah. I mean, for starters, you can potentially ask an LLM to create these for you out of your file you want to send to the LLM. It might not be really, you know, but as a starting point it probably could be good enough. Yeah, and it depends in a number of factors, right? If your data is well described and you have headers that are readable and meaningful, then perfect, because the LLM is going to make sense of those. Maybe sometimes data comes, look at this, this PQ. What the heck does PQ mean? P-O-P-L-S-O. Maybe by the context, it will make sense of it, A-S-N-D-N. But sometimes this is not great, and what you have to do is do a bit of profiling. Maybe you want to read a little bit of the data and pass that to the LLM and say, can you make sense of that? And maybe these are names, these are dates, these are, and so, and there, I would again, refer you to Dan's notebook. And I think you've put it on, I think it was one of his likes It's in the video description on YouTube. It was in one of the slides, but he kind of goes into a more conversational iterative approach. So he, you know, parses the CSV and asks the LLM, what could I give you so that you can produce a model? So there's loads of things that can be done. I mean, we went with one approach that kind of fully automates the process, but it's not the only one. I wanted to kind of show how LLMs can help. We've talked many times about how they can deal with unstructured data to do the entity extraction. Today, I wanted to explore how they can also help in dealing with structured data for which we don't have necessarily a rich model description. Yeah, absolutely. Cool. I think that brings us perfectly to the end of today's session. We thank you all for watching today. Thank you for presenting and for showcasing the demo. That was really insightful. I think that should encourage lots of people to try it out yourselves. I mean, easiest way is wait for the GitHub repo to be updated and then pick your favorite Kaggle dataset and give it a spin yourself and see where you go. And obviously, you can obviously add on it, expand on it. Let us know when you do that. It's always interesting to see how you work with the stuff we're providing here. As we said in the beginning, if you have any questions, any feedback, any comment, anything you'd like us to cover in one of the future episodes, then don't be shy. Best place, I think, is probably the YouTube comment section, but you can reach us in other channels as well. And if you are interested in what's coming up, then I can give you a short glimpse of what's happening next week. I'm having Michal Stefaniak as a guest, and he will talk about the Cypher GUI, user-friendly administration for Neo4j. That should be fun and interesting. That's happening on the 13th of February, so next Tuesday. And yeah, we'll be back in March, beginning of March, and with another topic, another session of Going Meta, episode 26. Until then, I'm wishing you all the best of days. Enjoy time, enjoy the rest of your week, and yeah, see you soon. Thank you very much. Thank you. Bye, everyone. Bye. | Going Meta - Ep 25: LLMs for Automated KG Construction | 3,979 | Neo4j | 20240206 | Episode 25 of Going Meta – a series on graphs, semantics and knowledge
Jesús Barrasa: https://twitter.com/BarrasaDV
Repository: https://github.com/jbarrasa/goingmeta
Knowledge Graph Book: https://bit.ly/3LaqE6b
Previous Episodes: https://neo4j.com/video/going-meta-a-series-on-graphs-semantics-and-knowledge/
Daniel Bukowski on Github: https://github.com/danb-neo4j/llm
Croissant File: https://github.com/mlcommons/croissant
New York Housing Market Kaggle: https://www.kaggle.com/datasets/nelgiriyewithana/new-york-housing-market
Neo4j AuraDB: https://dev.neo4j.com/discover-aura
rdflib: https://bit.ly/4eSRPQE
Ep 5 Ontology-driven Knowledge Graph construction: https://youtube.com/live/05Wkg1p34ek
Ep 12: Importing RDF in Aura: https://youtube.com/live/DWINSvRxIbw
APOC Schema Assert: https://neo4j.com/docs/apoc/current/overview/apoc.schema/apoc.schema.assert/
Supply Chain Pricing Kaggle: https://www.kaggle.com/datasets/apoorvwatsky/supply-chain-shipment-pricing-data
Check out https://community.neo4j.com/ for questions and discussions around Neo4j.
0:00 Intro
5:23 Graph Jokes
9:55 LLMs for Automated KG Construction
19:25 Demo #1 Housing Market
37:10 Demo #2 Supply Chain
52.50 Q&A
1:04:33 Wrap-Up
#neo4j #graphdatabase #semantic #knowledgegraph #rag #llm | 2024-09-03T09:18:06.765119 |
https://www.youtube.com/watch?v=Sra-1xhNn28 | Hello everybody, good morning, good evening, good afternoon and thank you for joining our episode number 27 of Going Meta, a series on graph semantics and knowledge. So very welcome to everybody who's joined freshly, newly to this series, everybody who's been around for a while, thank you for coming back, thank you for giving us the reasons to go on and to continue with this lovely series here on on you know YouTube or LinkedIn or X or wherever you're watching this right now or if you're watching on on demand and you know welcome break up to you as well for later in the time and specifically you know today we have as usual a great great person with me and that great person is jesus hi jesus how is it going hi alex yeah it's going great yeah very excited to be here again it looks like you know it gets shorter and shorter between episodes i don't know it feels like really really excited and i think we're having a very interesting topic today so yeah looking forward to it it It does look like that. Even though March has not been a short one, maybe even this year, we don't even have an excuse. I mean, February was a long one this year. So we don't have anything to say about that. Just a busy time, maybe a lot going on. Yeah. Especially, I mean, I guess for some, Yeah. Yeah. Yeah. Yeah. Especially, I mean, I guess for some, it has been a long weekend at least here in Europe, it had been a nice, a nice long weekend with a couple of days off. So, so that was, that was nice. So maybe, maybe that is also why, you know, it feels like sometimes things are, are moving quickly and we're into April already. I think this is probably as early as we can be in the month as well so maybe this is another reason we are doing these every on the first tuesday of the month so i mean being today the second it couldn't be any almost any earlier than the first could have been a tuesday and obviously that's that but you know uh we are we are very um very regular i would say with with this date. We try to keep it as much as possible. So if you are looking forward to the next episode already, then the next one will be the first Tuesday of the month, probably again. We'll confirm it soon after. But yeah, I'm looking forward to today. It should be a fun one. It should be an interesting one. We are talking about building reflection agents with Langraph. So Langraph is without taking anything away from the actual thing you're going to say, because I'm pretty sure you will explain what a reflection agent is. But Langraph is a part of the Langchain library that is used pretty frequently nowadays in the Gen AI ecosystem. Lots of things are orchestrated through LanChain, specifically with the GraphRack things we talked about in a couple of these episodes on this session, but also in the wider Gen AI world. I think you probably stumbled across this. And LanGraph is a, yeah, how would you say, is a sub-part of, or a sub-implementation? Yeah, LandChain, define it as an API. It's a library kind of that's built on top of LandChain. And that helps, you know, creating applications, agents based on LLMs. And what's interesting about it is that you actually build a graph, like a flow chart of how your application reasons. And I think that's a very interesting thing. I'll spend a couple of slides explaining how I got to this idea and why, and where we're going to take it from here, because I think that brings a bit of context. But yeah, that's it. I mean, it's one of the most most i think it's by far the most popular kind of orchestration uh library in in python for building llm applications and and it's amazing the work that they're doing for me it's kind of a these these sessions are are an excuse to you know to dive deeper in some of these technologies that are moving so fast and and and, and to, you know, to, to learn. So I hope I'm going to be today sharing what I've, what I've learned over the last few, few days, few weeks. And yeah, Landgraf is really, really powerful and really interesting. So I think it's going to be an interesting one. Yeah. Yeah. I'm, I'm I think that's, that's, that's really good. I, I, you know, I'm, I'm also writing the Neo4j newsletter that comes out on a weekly schedule. And this week, I'm featuring an interesting website, and I'd like to share it with you today. I think it's a good fit we have a neo4j labs page on the genii ecosystem which collects all the interesting bits and all the interesting known libraries demos examples training courses interesting blog posts all this stuff in uh in around the genii world and lots of langchain implementations as well and i put the link in chat and i'll put it in the video description on YouTube as well. So if you are in this space, interesting in it, this is a great, great resource with many, many links that gets updated almost daily, but not really. But, you know, very, very frequently when something new comes out and we put it up there. So it's a good resource to check back on. Yeah, I think with that, I'm happy to dive in. Let's get... Yeah. Let's put this on. Perfect. As usual, yeah, well, let's use a couple of slides to give some context and to set the scene for the more hands-on part that we'll use in the second part. Yeah, the idea is to, we're going to get to building a reflection agent and we'll explain what that means and we're going to use LandGraph. But like I was saying before, I mean, the initial idea was slightly different and it started with this tag that I'm, I suppose we'll be sharing, I mean, the initial idea was slightly different and it started with this tag that I'm... I suppose we'll be sharing, I mean, you can probably copy the link down there, but this is from, again, from the the Langchain team, from Lance Martin, where he was kind of discussing or reflecting on, starting from this this tweet uh where when he was kind of uh asked whether the rag was dead when you know the the context windows were growing and and and he was practically unlimited the amount of information that we can pass to llms and you know spoiler i mean the conclusion is it's very much not dead it's it's uh it's more alive than ever and and what's interesting and i totally recommend you go to watch this deck. And I think there's a YouTube video that I think it's linked in some of the pages or several ones where he explores some of the ways in which RAG is actually evolving. a very very rich uh kind of uh um approach to to to to to integrating llms and and an external you know grounding grounding databases or data data knowledge bases if you want and uh so my initial idea was because if you think of it alex what we've done always okay we we show in a pretty linear way how uh if you do rag based on a knowledge graph, it improves significantly by providing context, the accuracy and the quality of the data that you would retrieve if you were looking at, for example, just a vector database. But the reality is that RAG, the retrieval step is becoming way richer. It's not necessarily a linear process where you go retrieve and whatever you get, filter or pre-filter, post-filter, you pass it on to the LLM. And what Lance was describing in this deck is that there are very rich patterns to do some reasoning. Like things like, for example, I retrieve, I evaluate the quality of what I've retrieved. Is that information that I have retrieved from my knowledge graph, from my vector database, whatever my grounding database is, is it relevant to answer the question that I'm trying, or to fulfill the task that I'm trying to DLLM to fulfill, or is it not? And depending on that, I might decide to do several things. if he talks about maybe you want to go and and search the web for example or use any other tool or or reiterate or refine or come back and do multiple iterations of retrieval anyway that was i was thinking that there's something really interesting there and i i wanted to to explore that but um I thought I'd put this idea on hold when I, again, Dan Klosky created trouble and posted again. He's a great source of super relevant and up-to-date information on everything grabs, but also on Gen AI, of course. And he talked about this idea of agents. I mean, he basically referred to this presentation by Andrew Eng. And at the same event, there was actually a presentation by Harrison Chase from Langchain. And they were both talking about agents. And the idea of agents is how not only the retrieval part, but how the interaction with LLMs is becoming richer, right? It's not like, again, a linear thing, which we might have shown in simpler examples, and how it becomes, you know, a lot more interesting. And in particular, Andrew, and again, these two, I think the two links are in the comments. So if you want to share them, Alex, or we can do it after. So he talks about how we can build. He refers to that as agentic behaviors or agentic patterns. So does our application behave as an agent or just as a non-agentic workload? Does it just do a thing, starts and ends, the example that you see on the left, or does it do something more like iterative something? Because that's the way we actually work. And that's what made me change my mind because that connected very well with episode minus two. That was 20, well, 27. It was 25, right? Well, we did like this model generation, automating model generation, right? We had a data set of flattened or denormalized data set. And we tried the LLM to generate a graph model. And then we moved the data from this tabular format into Neo4j. And it was fully automated. But we took this approach like, okay, we get the metadata of the data set, we produce a model, and boom, we generate the cipher inputs and the data is migrated. And if you think of it, it's never like that. I mean, in a real world scenario, we don't model and that's it. I mean, you typically come up with a first model, you share it with someone else. So your coworkworker, with your colleagues, and you iterate, right? So if you're working as a team, you iterate on that until you come up with a version of it that you're comfortable with, that you think is right. And that's exactly what he describes as one of the patterns, because he describes a number of them uh but that's that's more realistic and and and he was talking about the amazing results that can be achieved by following these these these approach right so so i thought that that was probably something worth exploring and it's somehow connected to the to the previous set which will just spark for now and and we'll talk more which is this idea of of the kind of applied the same idea of reflection on on on the retrieval step on a rug application so i retrieve but then i do some reasoning and in terms of evaluating how good it is and whether that's uh that requires uh additional iterations so well like said, that's something that we'll do in future episodes. But this was sort of a low-hanging fruit. And I thought I'd dive into it. And what I, of course, found is- MARTIN SPLITTINGER. Oh, yeah. OK. Yeah. I just, you know, I haven't seen your slides. So believe me if I say this, I just wanted to say this now makes interacting with an LLM even more human-like, because you do something, you think about it, and you change a little bit of what you do, and then you reiterate on this. And this is your slide here, right there. Absolutely, yeah. And this is the version from, you have the URL there from the lang chain, or what they define, reflection agents. But in the presentation that I was showing before, I don't know if it's readable here, but he kind of describes the same idea, Andrew Ang. So the idea of reflection is you ask the LLM to generate some code, and this is this person to the left. But then you have another actor in your application that acts as a reviewer. It might be the same LLM. It's the same LLM that you used to generate the code in the first place. But if you ask it to act as a critic, as a reviewer, to highlight points of improvement or issues with whatever has been generated, you create this cycle, highlight points of improvement or issues with whatever has been generated, you kind of create this cycle, this iterative process that kind of simulates, like you were saying, the human process of building solutions. And that's exactly the way the Langchain team describe it. And here, the basic example, which is the one that I'll be implementing today, is you have some user requests, in our case it's gonna be generate a model out of this dataset and we will do an initial generation and then we will reflect on it. And we will reflect it, I mean, the LLM will reflect on its creation and it will produce some reflections that will kind of maybe trigger the generation of a new version of the initial model, and so on and so forth. And you can repeat that process many times until you consider that the quality is the one that is expected, or the requirements are met, whatever the termination condition is and then you return the result but it's not like a linear process anymore it's kind of a more more richer more complex and in this case an iterative and cyclic exercise so that's uh that's the idea of a reflective agent it's just it reflects on its own uh production and you use that reflection to kind of iterate and and make it eventually obviously uh better right so that's uh that's the idea so does it make sense does it align with with your thinking as well yeah yeah that makes that makes perfect sense and i think that's that's that's a really interesting journey so to speak where we we came from right i mean we we if you remember going back a couple of months when when uh when everything was about chat GPT and it's going to be the new savior and and of everything and you just put in something there and then you know your answers on any kind are created back to you in in almost magically then people realize oh actually you need contents context you need there's hallucination involved there is there's problems there if you don't if you don't ground your lm that that's like you know first step but now here next step almost is like okay yeah but sure you you need all the context but still you kind of need to iterate a little bit about it you need to think about what what the the first shot will not is usually not not the best you know rally i mean if your genius maybe it is but if you're not even in the first draft or something is it's good okay but then you know you iterate on it you do some thinking you tweaks here and there and then you improve over it and then you at some point you say oh yeah that's not good enough let's let's let's ship it basically absolutely yeah yeah yeah totally and and that's that's exactly you know what uh one of the things that lang chain in general but landGraph helps us with. And again, there's a couple of links there that I invite you to take a look at. And these are the tools that we're going to be using in today's session. And again, the problem that we're going to be experimenting with is the one from episode 25. If you remember it, what we did is we took a data set from Kaggle, I think we used, and that was particularly convenient because Kaggle, as we're going to see, probably remember it, offers kind of an endpoint where you can just look at the metadata of the dataset. So which are the fields, which are the data types, is there a description? So you can very easily query that. And that's all we need for the look at the metadata of the dataset. So which are the fields, which are the data types? Is there a description? So you can very easily query that. And that's all we need for the generation. If you remember what we did is we get the metadata out of any public dataset. And with that metadata, we're gonna prompt our large language model, except that it's not gonna be a linear thing like here, one, two, three, three. So it's gonna be a more iterative one, but the process is pretty much the same. So we asked the LLM to produce a model. And then in that episode, we took the model and we did the data ingestion. In this case, we're going to focus on this initial step of creating a model because that will be not a one hop or one step, but more of an interactive, iterative, if you want. Interactive as well, because we're going to see that this opens the door to introduce a human in the loop. So we can think of entirely automating it, like getting an LLM to talk to itself, like, hey, LLM, produce a model. Hey, LLM, criticize the model that you have produced and suggest improvements. And then it will regenerate and can have a conversation with itself until you come up with a condition for it to terminate or you can uh instead of having the llm uh you know critic commenting or providing feedback on the model you can introduce a human there or you can combine the two and say give me your feedback i'll mark the ones that I approve and the ones that I want to ignore and I introduce my own. So by creating this type of more rich and more sophisticated applications, we're building something that's closer to the reality of how we work. So, yeah, so that's the plan. So I hope that gives enough context. And now we're going to hopefully see it in action. Very cool. Sounds like a very interesting topic. I'm looking forward to it. If you have any questions, any comments, anything while we do it, then please don't be shy. Use chat. We can see your messages so we will try to answer them as we go through. So it's all going to be in a in a notebook that we will share at the end of the session so we'll start with some imports uh the usual i think we need to zoom in a little bit into this one let me make that a bit bigger yeah something like that yes perfect thank you good stuff okay and then the usual, the imports, and then I'm going to be working with OpenAI. So I'm going to use my key, which is already stored in my Colab environment, but you'll use your own. And of course here, in this particular exercise, I'm not using any specific integration with OpenAI. I mean, you could use any, and I have run some basic experiments with open source NLMs running locally through Olamo. And maybe that's something we can explore in another session. But in this version today, we're going to be using OpenAI. So, right. So we said that LandGraph is a way of, is a library for building sort of, they call it stateful and multi-actor applications with LLMs. So I use this analogy and this terminology to call the different parts actors. And in our exercise, we're going to have three actors. We could simplify it to two, but we're going to have three of them. The first one is a modeling expert. And that's where I kind of imported some of what we did in episode 25. So this modeling expert is a system that's capable. I mean, you can see the configuration here that I provide. It's an expert in creating entity relationship models, I mean, which is equivalent to a graph model in our example. And I've given it some kind of basic principles, right? I said, don't over-normalize. I mean, you know, we are going to be starting from a totally denormalized flat table that we get from Kaggle, don't overdo it. Because sometimes in some of the experiments, you might go too granular and break it all down in small entities. So don't over normalize. And some general guidance. So don't use the same relationship name for multiple relationships. Don't make sure that all the features in the data set are used. Don't ignore any one of them. And also have one-to-one mapping. used don't ignore any one of them and also have one-to-one mapping i don't want to have kind of complex ctl process where you combine two fields or transform them so let's keep it simple so it's pretty much you know general guidance it's nothing super sophisticated so we create this uh this first actor which is the modeling expert which is simply uh you know uh uh we're gonna be prompting the the you know gpt4 which is the modeling expert, which is simply, you know, we're going to be prompting the, you know, GPT-4, which is the one that we're using. And all it does is, you know, we're going to get a prompt that I'm going to show in a minute, and we use these LLMs. So that's the first actor. And in order to prompt it, because we're going to be using data coming from Kaggle, I've created this utility function that will, given a dataset name, it's going to go and retrieve the metadata. That's the endpoint that I mentioned before that it exposes. It's called Croissant format. I think we talked about it. So if you're interested, you can go to episode 25 and it's in JSON-LD format. So all we do is, OK, you pass me your data set name and i'm gonna go to this url which is the the metadata description and basically i'll i'll put it in a textual format i'll put you know all the field names in a string so that's my function that's what it does and um and the prompt again is the one that we used in episode 25. So I'm giving you a list of features. I don't expect you to read the whole thing, but I'm giving you a list of features, which is the columns in the data set. And with that, you need to create an entity relationship model, a graph model. And things like attributes don't need to be named exactly after the others, but the usual thing, so you can rename it. I added something interesting that for some of our followers can be of interest because I said, in addition to that, to generating this model and I asked you to serialize it as JSON, find what's the schema.org term, the closest one that you can find for each of the elements in the model. And we'll see what it comes up with. But it's a way of providing an annotated data set. And if they will want to publish it, you know, using schema.org, that can be useful. And I tell it what the shape of the output is. Basically, it's a JSON document that contains the listing of the entities with the attributes, etc, and then the relationship. I say, well, don't give me any text or comments, just the JSON. Pretty straightforward. Then all I'm passing is the dataset as we retrieve it from Kaggle. I'm calling the function that I just show a minute ago and we put it all together. And that creates a very nice prompt that we're gonna be passing to the expert that we've just created. So let's give it a try. So this is what we did last week. This is the first part, the model creation. And I have a couple of datasets. So there's one for supply chain data analysis, which let's find it. So probably Kaggle. Let's see if it finds it. Whoops. What have I done? Yeah, I didn't type it correctly. Let's see if it finds it. Oops. What have I done? Yeah, I didn't type it correctly. Anyway, I can use that, which should be on my history. There we go. So that's that. So if you search for it, I think I went to the trending ones, the most popular ones, and grabbed one or two from that. What i looked at is something that's rich enough and and if you look at this one it has a file which has 53 columns so it's rich enough so it contains potentially a number of entities so i think that would that would be a a good example so this is the one that i'll be using for now i'm not'm not importing the data, just the metadata, just the description of the columns that I get from this. This is what I was mentioning, the metadata as croissant format. So that's what I'm doing. And I pass the name of the data set, and it will go use my function, retrieve the metadata, generate the prompt using the function that we've just seen. We're going to call our model generate actor. I call it model generate that you have it here, remember? That's what I call our experts capable of generating models. Here we go. I'm going to call this and if I run this, it might take a few seconds because what it's doing is first going to interesting. I have something unexpected. So, yeah, we're retrieving the metadata from Kaggle. We are generating the prompt saying, hey, with this data set or this structured data set, generate a graph model, taking into account the guidelines that we described both in the prompt and in the configuration. So it's done it. And if I, you know, I've stored it in, we call it model as text. If I print it, that's the sort of structure that we saw also in episode 25. But remember what it was like, right? So we have a collection of entities. So we come up with a portion of a So we have a collection of entities. So we've come up with a portion of a customer and then in a collection of attributes. What's interesting and what's new is that we are adding also the schema.org term, which is a pretty, I mean, we're not gonna use it today, but it's something that we can use in future episodes to kind of create APIs, because we have a model that's already mapped to schema.org terms. So we have a model that it's come up with. Now, one thing that I did, because at the moment this is just code and it's not very visual, but remember that we had, this was the data set. So we have things like in our data set, we can look at the columns here. We have days of shipping, delivery status category looks like there's customer information uh product information city location there's plenty of stuff in there so there's a number of entities and that's exactly what this has uncovered now if i close this textual view i'm going to try to show it visually and for that i'm going to use graph viz which is this And for that, I'm going to use GraphViz, which is this old applications that you can use to create simple graph representations. And there we go. So that's the model that it came up with. It's a bit large, but we'll try to make it fit. But it said, OK, it looks like there are several entities there. The first one is a customer, the notion of a customer. And it's annotating it with the schema or element. But basically, I see that there's a customer and the attributes of the customer would be the city, the country, the email, the first name, last name. So it's taken a subset of the columns in the in the data set and said there's a notion of a customer. So there's a kind of a node type in Neo. And customers are connected through the makes order relationship to uh to another entity which is an order the notion of an order perfect that makes a lot of sense an order has a city a country and orders or the dates the state status blah blah blah and then the order contains products so pretty cool i think that's a decent model it's a basic one but it's uh it's good enough and like this we have an an idea of what the model has come up with. Perfect. So we have our first kind of actor in our application. And we're going to create a second one, which is the model reviewer. And it's pretty similar in terms of structure is identical. It's the same LLM. It's's the same open ai except that now we're saying hey you are a data modeling expert that can analyze an entity relationship model a graph model and suggest changes so you're not supposed to generate a new model just provide suggestions so i really want you to be the critic of of the model that we've created and look at things for example like other other elements that are under normalized or over normalized like am I still creating like fat nodes or that I should be you know refactoring and extracting entities from so this is the kind of things that I'm asking and expecting from this critic from this reviewer to receive maybe renaming things that might not be great. Well, this is the definition of this second actor, which is the reviewer. Exactly like we did before, we're going to have a function that creates the prompt, and it's very similar to much more simple. In this case, we provide the model that we've just created in the JSON format serialization, and we said analyze this model and provide two or three changes. I mean, don't come up with a list of 25. Let's go small. And as usual, no preamble, no comments. Just give me your recommendations. So that's what I've created. And now we're going to test it like we did before. So are we going to invoke the. And now we're going to test it like we did before. So are we going to invoke the model reflect as we call it, what the reflection actually takes place. And we're going to store the output in a variable that we call critic. So that's the feedback that we get from the expert from our first model. And it's done and i can print it out and it's saying hey that's why don't you extract the address as a new entity from uh both the customer and the order entities and that makes kind of a lot of sense right so it's a good idea yeah exactly because you see hey the the the customer it has the city the country the you know the state the street so maybe that could be in a separate entity, in a separate node. It makes a lot of sense. And the same thing happens, I believe, in the order. The order has some geographical information. So very valid feedback, right? So that's good stuff. Then the attribute segment in customer might be confusing. Why don't you rename it to customer category? Well, great. I mean, it looks like very, very interesting stuff. It's the same LLM, but it's looking at it from a different angle. So really, really valuable feedback. And there's some others. The relationship contains product, could be renamed like this. Fantastic. So we have a second actor that's an expert in analyzing what the first actor produces. And then we have a third one, which is the one that I was saying could be combined with the first actor produces. And then we have a third one, which is the one that I was saying could be combined with the first one, which is the executor, the one that will take the feedback and apply it to the model. So I call it the model changer or the model editor. And basically, it's your data modeling expert that can modify an entity relationship model that's serialized as JSON, and you apply to it the changes that you're given. And you see where this is going, right? So this you apply to it the changes that you're given and you see where this is going right so this is going to be the expert that's going to take the feedback and the first model and it's going to modify it to create a new iteration so similar structure so it's a simple invocation of of chat gpt uh sorry gpt4 in this case and we have a prompt that's applied the following changes to the model and uh i'm going to give a list of changes pretty much in the same way as the critique or the reviewer has produced them so um i'm going to test it here so that's the prompt i forgot to put the comment here but this is the uh let's test it so Test it. And what this is going to do is, like we did before, it's going to call the model change agent or actor, as we call it here, and it's going to pass the message, which is the initial model and the feedback provided by the second actor. So if we run this and we store the output as a new model, it will produce a new version of the model that we had initially, but with hopefully the changes applied as recommended by the reviewer. So once it's done, we can print it as text, which is not going to be very helpful. I mean, it is because we see that it hopefully follows the same structure, but it will be a JSON document, and then we'll be able to visualize it. And I was thinking a bit of a longer time, which is not an issue. This is a modeling expert, which is a task that's meant to be like brain 2, rather than.. Anyway, let's let it finish. And we're going to see how... There you go. It's done. So if we print it out, we have a new version of the model, again in JSON. And if we look at the visual of it, that should look something as before. Yeah. Yeah. Interesting. Interesting. But we see that it has factored out of the person the uh the entity it's connected it through the lives at which also was part of the feedback i didn't i didn't look in in the whole detail but if you look at the first line uh extract any relationship and it's saying uh connected to the customer in order with a lead that from customer and uh delivered at so it's given me you know a very rich description of the chain. So it does a very, very good job at doing this reflection, which is the whole point of our episode today. So it's implemented that. So it's extracted the customer address, and now we have a new type of node. It's done the same with the order. The order is delivered at an address, makes the order stays the same. And this, I suppose, stays unchanged. I mean, I think there was a couple of properties or attributes renamed. But that's pretty much it. So it's done a good job, right? So we've basically shown the whole process. So what we're going to do is, so we have now the building blocks. Now we're going to go and get a land graph to orchestrate the whole thing and that's what we're going to do in this in this final step so i'm going to define the note i did in our graph i was going to draw a diagram but i didn't get to do it in the end so but we have clearly a node per actor right so we have a this node uh the model generation node is the start node. It's the one that will take the input, which Kaggle dataset we want to work on, and it will generate a first model. So we call it model generating. It's run only once in the first iteration. The second one is the model reflection. And again, I'm calling the functions that we've just seen and the actors that we've just seen. I'm just replicating. Once you get access to the code, you'll see that it's exactly the same functions that we used in the test. We're just putting it into these functions that define the nodes. So the reflection node it will do, it will take the model that has been generated by the model generation node, it will analyze it and it will produce some feedback. And I have two versions of this because I don't have time to to to look at it. But my first version is I call it human in the loop falls. So by default, we go with a fully automated version, which is I get a model generated, I get the element to review it and act as the as the the actor that provides the feedback and then I modify and then I ask the element. But at review it and act as the actor that provides the feedback. And then I modify and then I ask the LLM. But at some point I might say, hey, I don't want the LLM to do that job. I want a human to enter some input. And as a very rudimentary form of doing that, I said, well, let's call that human in the loop. And if we go human in the loop, set it to true, instead of going and calling the reviewer actor, Instead of going and calling the reviewer actor, it will just ask me to enter some input via text. So I would do something like, as an expert, go and refactor these or that. So I will enter by just to show how this same infrastructure could be used to combine the two approaches. So this is the second node, the the reflection node where we analyze what we produce and then the third node is the one where we apply the changes and um yeah so that's that's the third one where we basically pass the the original model the changes and it will produce a new version of the model so how do we build a graph so So it's a very simple definition. And this is the LandGraph API. So we create a message graph. And we said, well, add a node, which is the first one, the model generation node that we've created here. And we give them names, the model reflection and the model change. And the starting point is the model generate. So my graph starts at the model generate. So that's my entry point. And uh i have to define the edges so before i show this function the edges is okay from the model generation you go to the model reflection so that's my my flow uh of of uh in the application so generate and then reflect. Then based on the reflection, you either iterate and generate a new version or you terminate. It might sound a bit counterintuitive, but my initial thinking was more the human in the loop version. I wanted to take control and say, if I provide no feedback or an empty string feedback, it means I'm happy with the model and the process terminates. That's why I put the conditional in that direction after the reflection. Now, it might sound a bit, I mean, but as you can see, it's very simple. We can tune that and change that depending on how we want to do it. But basically, we'll take the feedback, we will decide if we want to run another iteration or if we wanna end. And once we apply a change, we go and reflect again. So that's systematic. Every time we have a new version, reflect on it and then decide if I want to iterate or not. And then the condition on whether I should iterate or I should terminate is this function that I'm defining here. So I'm saying if I'm in a human in the loop mode, if I enter an empty string, so if the feedback that I'm providing, the length of feedback is greater than zero, so it's not a non-empty string, then you go to model change, but if not, you terminate. Now if we are in a fully automated mode, which is the one that we're testing now, at the moment I'm setting the limit after a number of iterations. So I say when the state is smaller than six, you keep iterating. But then when you get to six, stop it. Right? Okay. Of course, this could be more sophisticated and maybe we could go in future episodes on how to evaluate the quality of the model. When we decide, rather than saying, you know, after three iterations, you're done. What are the characteristics of the model that says it's good enough? Stop iterating. So that would be, again, you see that this is a very simple function to show here. But you could build in some logic there. I mean, this is just for demo purposes, I say. function to show here, but you could build in some logic there and that would determine. I mean, this is just for demo purposes, I say. And if you just prove a point almost, but if anybody wants to take this on, obviously the code will be available in the repo afterwards and then you can iterate on it and and you know make it more a little bit more sophisticated exactly because you know it's it's an additional step in the in the reflection so you can say hey before you well maybe as as output of the reflection you can either uh maybe uh yeah we could even add to the prompt if you're happy with the model if the model is good enough you know return an empty string and if that returns an empty string then this logic would work i mean but you know you can you can tune that to determine when uh when the model is uh is is finalized and we want to end so that there you go so we we build the graph now we compile it so if i run this code it creates the the application in the form of a graph and Now we just need to test it. All we're going to do is we create an initial prompt. It's not even a prompt, it's just we pass the dataset and the prompt was created by this function that we showed before. We're going to use the supply chain dataset that we were looking at. I'm going with the completely automated process. So I'm setting it by default to no human in the loop. So it's going to iterate up until the state is, I mean, I didn't mention it, but this graph shares is stateful and it shares a chain of messages that are exchanged between the different steps. And basically, when that state gets to length six, it stops iterating. So that's what it's doing. So, yeah, let's run it and let's see what happens. Because basically what this is going to do is kick off the process. So go get that data set and start the process. Generate a model, reflect on it and suggest changes, apply the changes and then go back again to reflection and eventually enter a cycle and get to a termination, which in this case would be after, I believe three iterations so I might take a few a few seconds but I hope the idea is is it's clear and and what gonna be interesting is to analyze what the process where the process to cut yeah these are completely you know delegated version of it. And what's interesting about these is that you can introduce the land graph API, the libraries is really powerful because we're starting to get results. So you can set checkpoints and travel back in time. So for example, if at some point in this iteration, things go wrong, you can always travel back to that point and then iterate from there. Very interesting thing. Because we are in a, so there you go. So it's starting to, you know, it's calling asynchronously, passing an initial message, which is the dataset. And what it's doing is as events come out of these asynchronous invocation, we're just printing them. And the first one, it's generating the model. The model is in the JSON form that we saw before. And then we are getting some feedback. The customer, it's probably similar to what we saw before when we tested. Yeah, I just wanted to say. Yeah, sounds like the same suggestions. So it will go in that process a number of times, a couple of times, and it will terminate there. And we're using the same models, right, as before. So again, like every reflection is like two to three suggestions there and then incorporating those and then starting again. Exactly, exactly. So there you go. So now it's regenerated the second version of the model after applying these changes. Now it's probably going to provide new suggestions. There you go. So under normalized attributes, so in the customer, it's saying customer F name, map to two features, can be extracted to separate attributes. Well, there's a risk element here because we don't know what it is that come up with. But once it I mean, once it terminates, what we're going to do is we're going to try to analyze the results produced. And again, LandGraph has a number of elements in the library that helps analyzing what's being produced and printing it in a nice way. I mean, another thing that we're not exploring today is the use of langsmith, which is a kind of an observability element that also the Langchain team offers. And that gives you a lot of visibility on each individual step and might be another interesting one to explore in another session. There you go. So we have a new version of the model. And with the new version of the model, probably, there you go. So it's got, it produces a third set of suggestions, but because it got to size 1, 2, 3, 4, 5, 6, it decided to terminate. Then the last node in the graph, the termination node, it contains all the information. We can say, if we print the output, we see all the different steps, like initial prompt, first version of the model, feedback, second version of the model, prompt uh first version of the model feedback second version of the model feedback third version of the model and feedback and then we decided to ignore the last step because of the way we defined the the determination conditions but you know we can look at what the the model looked like and and we can uh for example look at the first uh and it's pretty similar to what we saw before, right? So we see the person, the order, and the product. We see that once that was done in the second iteration, it probably factored out slightly different, right? So in this case, wow, look at this. So we had the product. Actually, I print out the feedback that we got and we can see how it implemented it so we see a new version of it slightly different to the one we saw before and there's even a uh um a third one where we got this feedback and then it came up with uh with the final version of the model so it looks like it ended up with an order of containing products the customer, the address. So, you know, it's not miles away, but it looks like it converged into a version of it that it was considering good enough for. In this case, after three steps, that was it. So that's the idea. I mean, what's super powerful about this is that we can, you know, I'm going to, I suppose, pause here. But before pausing it, I'm going to just take another example. And this one is some crime data. I don't know why I came up with that one. I think it was a rich data set rather than something that I'm particularly interested in. But anyway, we have another 28-column data set with obviously information about crime data and uh so if i'm gonna run that because it's exactly the same logic but just apply to a different uh data set and we're going to see how it behaves on on a second model and and it should be pretty straightforward pretty similar but it's um yeah i think this is a simple but but really powerful implementation of of this idea of reflection where where like i say we generate in multiple steps and and providing getting feedback from the llm uh until we converge into uh into uh a model i didn't show the the human in the loop version but it's if we have time after this we can we can show how it goes. I'll pause here because I'm conscious of time and with the minutes that we lost in the problem with the key. Thank you again because I was not expecting that. Yeah, so let's leave that running. We will eventually spit out the results, and we can analyze them. But yeah, let's look at, do we have any questions, any comments, or even maybe you, Alex, anything that you want to? Yeah, I think that's very cool. I am always amazed how this goes and how well that kind of stuff works nowadays. So it's really interesting. I think this iterative process is very, very interesting. I think that gives a lot of opportunities and a lot of possibilities to how to work with data. I mean, you you can like we said automate it a little bit and then you know uh give get some get some already iterated version then you you put in some comments yourself and then let it iterate again to to fine-tune basically your your desired outcomes a little bit more you don't have to do everything manually uh it's very good to to have this this human in the loop sort of option there. I think that gives you a lot of possibilities. Imagine, you are the person who is developing the application in the end, and you have a data modeling expert sitting in your office, and you can talk to them and say, hey, this is what we initially came up with and give me some feedback. And then you pipe that back into it and then you develop it further. It's really good. It's a very powerful combination. Yeah, to add to what you said. I mean, exactly. Because here we're taking kind of a neutral approach as in when we create this initial model, we're kind of applying the general practices and in our training, some modeling trainings, we tend to say, well, normally nodes are our nouns and verbs can be the relationships and it's a way of analyzing your requirements, your domain. And but then it's true that, you know, when you start iterating in because that's the way we work when we come up with graph model, we not only bring this initial approach, but we say, this is the way I'm going to be analyzing or querying my graph. So we can provide additional input a human in the loop version, like, hey, that's great. And you've got to a reasonable version. But what if I'm going to query my graph with this type of workload? That's the kind of analysis. Would that suggest any kind of optimization or any kind of denormalization, normalization? So, yeah, it can really, really replicate the way we work. And ultimately, if you remember, our objective, and this kind of goes even one step further in that direction is, okay, in the simplest case, we have a dataset, any dataset in Kaggle, and after an intuitive process, we have a reasonably good model and I can move the data into a graph database and start doing my analysis without having to type a single line of code or being an expert, and just relying on the expertise that's built into the into the lm here we go so this is this has finished now let's do a quick quick analysis and uh i'll go straight to the model so if we go to the first version oh why is this vibration happening it's very interesting do you see see that or is that just on my screen? MARTIN SPLITTINGER. I think it's just on your screen. I don't see if I've, oh, yeah, I see it. Yes. And I, well, yeah. MARTIN SPLITTINGER. It's very strange. It might be my, anyway, well, now it seems to have calmed down a bit. Anyway, it comes back. But if you look at the crime data, there was information, about area so geographical information information about the victim information about the weapon status location etc etc so the first model it came up with sorry about the vibration is we have a notion of a crime is a a weapon uh and and there's the victim of a crime and then it's a location so you know very decent starting point i don't know what vibration comes from but anyway so what uh in the second second step if i run uh the analysis the feedback was uh the attribute status in the crime seems to be over normalized. It could be merged with status description. The weapon entity could be merged into the crime. So look, in this case, it went in the opposite direction. It said, why don't you denormalize? So the weapon that used to be a separate entity, now it's been brought back into the crime. Well, maybe a good idea, maybe not. I kind of like sort of the previous version more. But anyway, you know, this is, I guess, a full, and it came back to a, yeah, it denormalized it again. So it renormalized. That's an interesting point. And I'm happy that this happened because in this simple version, I'm only taking the current version of the model. Now, if we pass the LLM, this is the current version and these are the previous version. So don't travel back in time, don't suggest changes that undo the changes that suggested in the previous iteration. So it can get more sophisticated and build even more of the way we normally reason about it. But just as a simple example and with three iterations, it's a pretty cool example that I would definitely encourage people to try and to improve and to test in their environment. Because I was impressed with with uh how how good it was and uh so um it's very impressive yeah i think it's pretty pretty cool and we we didn't we didn't really add much in terms of uh of of the the graph true features that come to it so i mean you you you put it in a graph and then you get some additional insights out of it by maybe reuse similarity, maybe use community detection, maybe add vector search to it, all of it. And that could give you some insights that could then be bounced back to it and then you find even some more. Yeah, totally. And you know how we you know we like we like going meta here so this this was kind of my like episode one in this because i want to go deeper in this so here totally delegating to the llm on on how to how to um how to you know modify the model, create the model. But what if I capture in my knowledge graph my preferences in terms of my knowledge about modeling? In that case, I can start interacting with and control the way the LLM behaves. That's where I want to go, but you're right. This initial step, it just shows more the framework, what this kind of reflection type of agent can look like. But yes, we'll bring the graphs into the process as a way to kind of, again, own the way this process kind of happens. Yeah. Oh, that's very cool. So this is a great teaser, a great cliffhanger to end on episode number 27. So you can look forward to the next one then. I think that's going to be very exciting and very, very interesting to see how this develops and how we can, you know, learn and add new features across the coming episodes until we have like a super AI graph modeling, you know, expert. I don't know if we mentioned it on 25, but I mean, the way I see this, I mean, we now produce a kind of a Jason model and I think we show, yeah, we did show how it could automate the creation of the cipher to do the import, but you know, why not? For example, the integrations like, you know, with Google Cloud, what was the name of the framework? It's escaped me now. Gemini? No, but the kind of the sort of ETL, well, it's not really ETL, but the 25, it was Dataflow. Dataflow is, I actually have a slide, but yeah, the idea is, you know, once we have a model, why do we not use it to maybe to generate, let me put that slide up there. How do I show it? Let me window. That's not the one I want to share, is it? That's not the one I want to share, is it? I don't see it on my side. Oh, here we go. Well, yeah, so Dataflow is a framework or a tool from Google Cloud to move data from BigQuery into move database in the file transformations. So why could we not generate these type of mapping documents that describe how these data, because that's what we're doing here. In BigQuery, we have basically a big fat table, you know, that can be pretty much the same that we've seen from Kaggle. So we can use this process to generate the configuration and to automate the ingestion from BigQuery into Neo4j. So that's one example, but the moment we kind of have come up with a model, we can automate and not only come up with a model, but map it. Remember that the model that we generate, it maps to the sources, to the original file, to the original columns in the file. So we can basically translate that into a sort of an ETL process to to to move the data in the same way we did it using just cipher, but in using the data flow framework here. So that was another idea. Too many ideas to... Indeed, too many ideas. All right. But that's cool. That I think should round us up out very well to the end of this episode and gives us a nice like I said earlier, like a nice cliffhanger to what to look forward to in episode 28 and onwards. Next, or do you want to say something? No, I was going to say because I was reading a question from Peter on whether trying it with different models behave in the same way. I've tried only with not the commercial ones, but the open source ones on whether the trying it with different models behave in the same way. I've tried only with not the commercial ones but the open source ones and the difference was substantial. So it was the quality was much better with OpenAI but I have not tried it you know with Anthropic or any of the of these you know Gemini but that's definitely something that I want to give a try. So that OpenAI performed way better than for example which one Mistral is the one that I tested through Lama and Zephyr. I tried a couple of them and the quality was not as good. So that's my kind of data points that I have, but it's not really an exhaustive or a really serious analysis. My first impression is that the quality was much better with OpenAI. But yeah, worth exploring with others. Absolutely. I mean, there are always other options. And if you try it out with something else, let us know. Give us a shout out on the YouTube comments maybe or in the community forum. And let us know if you have some experiences with trying it out with And let us know if you have some experiences with trying it out with different models and what your results are. So before I let you go, next stream is happening on the 18th of April. So a little break next week. No session. But afterwards, I'm coming back with Tomas as a guest. He wrote a book on graph algorithms for data science. And we will introduce this book with a little bit of an in-depth look at the chapters, the contents, and what he wrote there. I think Tomas is a well-known figure in the graph data science, Gen AI world around Neo4j. He writes very, very many blog articles and gives interesting, has a great knowledge on the topic. So I'm looking forward to having him as a guest to talk about his book in more detail in two weeks time. And between now and then, what's next? What can you do? Join the Neo4j community, join our Neo4j Discord server. And for learning, you can check out the Graph Academy with courses. My colleague Martin or our colleague Martin just released something on vector search. So if that's interesting to you, then you can learn there. And obviously, there is a wide range of sessions around Gen AI as well these days, but also beginners courses for Neo4j newbies. So yeah, with that, I think we are at the end of today's episode. Looking forward to next month's edition. Wishing you all a great rest of your day, rest of your week, and hope to see you soon in one of these uh stream streams or at the community sites um you know let us know reach out and uh you know speak uh later take care everybody and thank you jesus for the great demo today it was really interesting yeah thank you see you next time bye next time | Going Meta - Ep 27: Building a Reflection Agent with LangGraph | 3,760 | Neo4j | 20240403 | Episode 27 of Going Meta – a series on graphs, semantics and knowledge
Jesús Barrasa: https://twitter.com/BarrasaDV
Repository: https://github.com/jbarrasa/goingmeta
Knowledge Graph Book: https://bit.ly/3LaqE6b
Previous Episodes: https://neo4j.com/video/going-meta-a-series-on-graphs-semantics-and-knowledge/
Links:
GenAI Ecosystem: ttps://bit.ly/4cpAY6D
Is RAG dead? https://docs.google.com/presentation/d/1mJUiPBdtf58NfuSEQ7pVSEQ2Oqmek7F1i4gBwR6JDss/edit#slide=id.g26c0cb8dc66_0_131
What’s next for AI agentic workflows (Andrew Ng): https://www.youtube.com/watch?v=sal78ACtGTc
Reflection Agents: https://blog.langchain.dev/reflection-agents/
LangGraph: https://python.langchain.com/docs/langgraph
Supply Chain Dataset: https://www.kaggle.com/datasets/shashwatwork/dataco-smart-supply-chain-for-big-data-analysis
Crime Dataset: https://www.kaggle.com/datasets/sahirmaharajj/crime-data-from-2020-to-present-updated-monthly
Dataflow BigQuery: https://neo4j.com/docs/dataflow-bigquery/
0:00 Intro
18:30 Reflection Agents with SupplyChain Data in a Notebook
35:00 Orchestrate with LangGraph
47:24 Same Logic with Crime Data
56:00 Summary and Outlook
1:00:49 Wrap-Up
#neo4j #graphdatabase #knowledgegraph #langchain #llm #genai #rag #graphrag | 2024-09-03T09:30:19.323289 |
https://www.youtube.com/watch?v=Q7E97TSmGyI | Hello, everybody. Good morning. Good evening. Good afternoon. And thank you for joining this Neo4j Live. Today, we talk about a personal knowledge vault or creating of a personal knowledge vault with Neo4j GraphRack and some chat GPT prompting. So I'm looking forward to this episode today. With me today, I have Mike Murley. Hi, Mike. How are you doing today? Greetings. I'm great. How are you? I'm good, I'm good. It's finally summer here in Germany, so I'm looking forward to it. So it's a good day. Cool, yeah. How are things over where you are? Oh great, we finally also got into summer, although I'm still wearing my toque. And yeah, I was just out in the mountains last week. It was a little snowy and cold going in, but coming back out, it was actually, we finally got up into the teens in Celsius. Yeah, yeah. It's up in the mountains, it's still cooler. So that's for sure. But when you do some exercising, I guess you don't want it to be too hot anyway. So it's true. Cool. No. So today we talk about the personal knowledge world before we do that. Maybe Mike, do you want to say a few words about uh who you are uh and and what you do when you don't create knowledge vaults sure um yeah so i i i'm the currently i'm the director of machine learning and artificial intelligence for a company based in calgary canada called r curve and a big part of that role is helping clients kind of go through and understand use cases for advanced technologies and for AI. A lot of the work that we do of course is confidential. So one of the reasons I undertook this project was to create a working example that allows our, myself and our team to explore how we would go about applying some of the latest and very rapidly moving ai technologies including of course knowledge graphs to the domain of helping organizations make better use of their knowledge and their data and in particular we do a lot of very applied work, modeling physical systems, energy, environmental, and mining, to name just a few. Architectural engineering construction is another big one. And those are areas where you have a lot of complexity, a lot of different systems, a lot of contributing systems into making a model of the world that these companies are operating in. And you need to have a lot of flexibility with respect to the data. And one of the challenges that we'll talk about today, of course, is I think there's, it's this impedance mismatch between the lack of computers to understand and be able to model data the way people think of it. So we were forced to kind of put data into a certain form of the machines to be able to model data the way people think of it. So we were forced to kind of put data into a certain form for the machines to be able to work with it. But I think there's a real innovation when you look at graph and when you look at that combined with large language models that unlocks a whole powerful new paradigm for the way we work with data. Yeah, that's a very, very good problem we all have. I think we have faced before and we will face probably for the foreseeable future until it all hopefully can be outsourced to some kind of AI assistance that would just take the documents you provide to them and say, hey, here you go, take this. It's in my format, in my way of organizing things. But you do you and you know what you do with it and then put it in a, in a form that is slightly different every time. Sometimes it's first name last. It just starts with an address, right? I mean, sometimes it's first name, last name, sometimes it's first and last name in one thing, sometimes it's last name and then the first name, you know, it's, it's in, in just very simple examples, it can become complicated. And, you know, God forbid talking about more complicated things like approval In just very simple examples, it can become complicated. And, you know, God forbid talking about more complicated things like approval processes for, you know, you mentioned a couple of things like construction roads or pharma companies process in getting a, you know, Midsom approved, stuff like that. It's a complicated process yeah it was it was interesting i was at a conference recently building sustainability conference in tempe arizona um on an ai panel for that and that was i think one of the biggest things that folks there were talking about was when you're on the construction site things are moving around you know you don't have time necessarily to be putting you know whatever to the things you're trying to observe into these little boxes. Yeah. And I think now to one of the things I'll try and show illustrate today, and the example that we're looking at is this idea of a personal knowledge vault. And again, it's a proxy for other places where we would apply this technology and sort of the business side in that it allows you to much more flexibly collect data. So I'm hoping to illustrate a bit of that. And then, of course, the knowledge graph really has a nice, it's a very human way of modeling the data. It's kind of the way I like to think of it. It provides a lot of flexibility and fluidity. And you can have, I don't have it quite working yet in the knowledge vault, but as the data is coming in or coming in, you can have then downstream agents't have it quite working yet in the knowledge vault, but as the data is coming in or coming in, you can have then downstream agents that are going in and post-processing and extracting entities and doing named entity recognition and whatnot to construct a much richer context around the data. And it's this very fluid schema, schema when you need it, but it doesn't get in the way. That's one of the things I really like about Graph. And it allows you to kind of evolve your understanding of the data as you go. Yeah, yeah. Yeah, that's always a good point to bring up every now and then. Is the schema less possibility with Neo44j you can obviously do it um as as you like it we had a neo4j fundamentals uh course last week with martin and we the question came up as well what what is what do i need to do here and then is is is uh if i have if i don't have a schema is this going to be completely chaotic but yeah it can be, yeah. I mean, if you don't think about it and just dump everything in there, but it gives you also the freedom to do and expand beyond the initial scope of a project and in the initial ideation you had, maybe you realize, oh, this could also be a good addition to this. And then you just add it in and the schema works with you and you can, you can expand it in a way that, uh, that it makes sense. And it, uh, it connects with the rest of your data. That's already there. You don't have to completely start again. Totally. Um, so I've got a couple of slides I could jump into and just to set the stage. If that's. Yeah, that, that sounds good. Um, as as as always i already see a couple questions in uh in in chat so i'd like to encourage that always if you have questions if you have comments then then don't be don't be shy uh let us know and i can read them out um um any um any time so i i see see a couple of questions already. So we do the intro first, and then I'll look up what came in. Just quickly before this scrolls away, but I'll keep my eye open, obviously, and scroll back up. Martin posted on X if this video is going to be available as an archive later. So the answer to that is yes. as an archive later. But so the answer to that is yes. If you check out on YouTube, youtube.com slash at Neo4j, you'll see all the videos that have been made in this series previously and you can watch them at any time. So that's an easy question to answer. One point, Mike, I'd like to ask you to, if you could click the little button that says studio.restream is sharing your screen, if you could click on hide there so it goes away otherwise it's a little bit on the oh this on the middle yeah exactly got it thank you cool and that um floor is yours okay well yeah if somebody has a question you know throw it up there and i can't see the questions at the moment but um okay perfect yeah so as i mentioned this this is kind of a two this this project that i've initiated um is operating at two levels so one of the things with this is um as i mentioned a lot of the work that we do at our curve is confidential we have had a lot of people a lot of executives a lot of companies asking us a lot of questions about large language models, how they could be applied, what the risks are, how they would manage the risks and a myriad of things like that. It is challenging talking about some of these things because you do want to have, it's nice to have an applied example that you can use to show folks because you do wanna have, it's nice to have an applied example that you can use to show folks exactly the impact that this new technology has and how graph kind of factors into that. So I set this project up. This is something I've been interested in for a long time and have been working on for a long time. Prior to R-curve, I was working at an environmental company where we were using, starting to use knowledge graphs to work with complex environmental data and reporting on that. And then that led to my starting a company called Minome Technologies, where we actually developed a system that was using Neo4j to continuously integrate data together for many sources. And then that was acquired by R curve a few years ago. And since then, we've just been expanding the number of organizations that we work with in this domain. Something that came up, so I have kind of on the personal side of this, I have a relative who's experiencing functional memory challenges. And it struck me that when, you know, the large language model thing showed up, if it were possible for me to hook, to create a system, using some of the technology that we developed with Neo4j for kind of constructing the graph side of things, hooked up to an LLM, you could help a person who, you know, they can't use file folders and the complicated ways we use to store data anymore, but they can still write and they can still express and their memories and whatnot. So they're writing their memoirs now. So a couple of weeks ago, I went and set this system up for them to help them start to capture a bunch of the memories. And then as I continue to evolve, the functionality will help them start to generate some of the writing work by allowing them to dynamically retrieve the knowledge without having to know where it's stored. So a bit of context on that piece. Yeah, so back when all this kind of knowledge stuff started the idea was, and if you go back and you read through all this, I won't go through this in detail. I've got a post on this up on Medium that explores this in some detail in terms of how we've ended up where we are with the way knowledge systems work. Prior to the systems that we use now, there was this, these folks, you see on the screen here, imagined the potential of computing, revolutionizing and unleashing, augmenting the human intellect was the term that I think Douglas Engelbart used for it. And the original vision for this was knowledge collaboration without barriers. So being able to move knowledge and knowledge and data and applications, there wasn't a differentiator between those things. So if you look back at the Xerox PARC, the work that was done with Smalltalk and whatnot, it was not really a difference between the applications that you worked with and the data that you had. And you could dynamically share that around across without any boundaries, or at least that was the intent. But what ended up happening was, is over the time, when we started off with this notion, the computing devices and the computing that was available in the 1980s, and you see this machine behind me, that was my first computer, was very, very limited. So while the WYSIWYG paradigm, the desktop metaphor that emerged, you know, that Jobs saw at Xerox and brought across to the Apple and then was eventually migrated over to Windows that we all now work with. So the apps, the way apps work, the way we store files with file folders and whatnot, all that stuff came across in that same era where computing was extremely limited. And what that meant was we were forced to deal with a series of conventions. You had to have an application, the application had to store the data in a certain way, in tabular data forms specifically, or generally speaking, you had to store the data in a certain way in tabular data forms specifically, or generally speaking, you had to store your files in a certain way. And what that has meant, that combined with the aspect of commercialization and the organizations that make these systems want your data to stay with them, is you have this tyranny of apps. And I think there's some stats on this where the average user has somewhere between 80 and 100 apps on their mobile devices, let alone the stuff that goes on the enterprise where there's some stats on this where the average user has somewhere between 80 and 100 apps on their on their mobile devices let alone the stuff that goes on the enterprise where there's hundreds of applications that a person has to interact with on a daily basis it makes it really tough for a normal person to know where did i store this bit of information let alone somebody who may be experiencing functional memory challenges and it's even worse when you talk about this domain of files. There's a whole history behind the file thing in terms of where it came from. And it's actually fascinating to study. And again, I've written about that in some detail. So you can look that up on a Medium post. But very quickly, in order to store a file, you have to know which application you're working with. Is it Excel? Is it Word? Is it the other things that could make files, music programs, composition, whatnot? You have to know which system you're storing the thing in. So if you're in an organization, I work with all of these. And then we have clients who use other systems. And then you've got all the things like Documentum and whatnot. All of them have this file folder structure. There's always some kind of naming scheme that somebody comes up with that's supposed to be the naming scheme to solve all the naming scheme problems, which never works. And then you have to know what name to give the file. So you're putting an awful lot of cognitive load on a user who then has to navigate all this complexity to put something in a little bucket somewhere. And then they have to remember all the things that they did to get that back, which was at Dropbox, was at Google Drive, SharePoint. And I don't know if most folks here probably use the search tools these things all provide. They're all terrible. And you can't search across them. So you can't search across them you know so you can't search across dropbox you can't like there's there's no unification of any of these things it's very very complicated um so the crux of it is working with knowledge on our current computing devices requires users to learn complex metaphors taxonomies and numerous disconnected apps, each that it has its own method for storing and retrieving data. So the idea here is, okay, we've got this new powerful world of LLMs. They do have this little problem though, in that they're not databases. They're a very useful thing in terms of being able to, and they were kind of the missing link, if you will, between the a very useful thing in terms of being able to, and they were kind of the missing link, if you will, between, uh, this is the type of thing that we were trying to accomplish with the Meno paradigm, which was, uh, kind of along the lines that I've been scribing, enabling people just to ask for something and get it back, describe the thing they're looking for and get it back. The problem was, is it's, you're still having to try and translate the person's intent, uh intent into some sort of machine understandable query. That's the piece that LLM start to help out with is that you can have the LLM act as the interface, like a human capable, human level interface, if you will. They can capture the data in a very human way. It can store the data for the person. It can help contextualize the data, but you still need the database and behind the scenes to actually act as the anchor for where these things live. And again, I think it's interesting that Microsoft published that article on GraphRag last week because it's long been my sense that graph is the way to do this sort of thing because it gives you that, again, human-like capability, linked relationships-based method for describing the way you store your data. And then you anchor that context for the LLM into that graph, and it's quite a powerful thing. So the use case I undertook to help, again, this is a multi-tiered project. One is to help provide a method that we can use to talk to clients about this, but it's also to help a person that I know out with this exact challenge of, I can't use file folders anymore, but I can just talk to this thing, and it will capture this information and knowledge for me. And then I can get it back very easily just by asking for it. And the way that I've done this is I decided I wanted to use chat GTP and a GTP as the interaction interpreter. And I'll talk a little bit about why I chose that instead of building a standard user interface. I'm using Langchain and Langraph through a Python fast API to handle the API layer. That's hosted up in DigitalOcean. And of course, I'm using Neo4j as the database, the vector embeddings and all the power that comes along with having a full database as opposed to just a vector database. And there is there is a difference there. And I'm using kind of a simplified form of graph right now. I do hope to get more sophisticated with it as I get into this more. The initial intent with this thing was to really just focus on a few key use cases. So most of those are on the capturing side and then on the basic retrieval side, I haven't done a lot with context yet. That's gonna be the next stage that I go through as I build this thing out more. The idea though is I wanted to be able to capture notes from sources such as books, handwritten notes, or voice transcription. So these are all things that in the GPT-4-0 model, hopefully I'll show you a couple of things. It's pretty amazing. The thing can read my handwriting, and I can't even read my handwriting. It's like, wow, okay. Pretty incredible. and I can't even read my handwriting. It's like, wow, okay. Pretty incredible. So then the other thing I do, I use this thing myself too for all the knowledge and all the stuff that I read. I put everything that I read into it. And somebody I think earlier asked or early on before the session had asked about are you hooking this thing up to things like OneNote? And the answer to that is yes, I will be. I haven't done it yet. I do have it hooked up to SharePoint. So that's actually where my user in question was storing all of the things that they've written over the last 10 or 15 years or so. So I actually do have a thing that goes out and grab the documents from that and process and decompose them into the graph structure, graph document structure. I wanted to also use some of the land graph stuff so i do have a little land graph agent in there where you can direct these agents to go out and research a topic on your behalf and i'll show you a bit of that where the agent can go into the personal your private knowledge or personal knowledge vault pick up stuff from there it also searches out to an external public source. In this case, I've got it looking at Tavoli because that was an easy one to set up. But you can certainly add others to that. And then retrieve word documents and other sorts of things out of the vault. That's enough of that. Let's get some demos going here. Any questions so far? A couple of comments. So you mentioned already the tools like Obsidian or OneNote and stuff like that. So I think they're trying to find a workflow that resonates, trying to find how to connect these things. I guess you can share a little bit what you learned, what your experiences was with connecting different tools. I think you have it a little bit easier if you stay with one ecosystem. I mean, like you mentioned this in the beginning, Microsoft Fabric was mentioned by Creative Sense. It's like makes it, tries, I guess, Microsoft's version of trying to connect at least their initial tools, which are not really connected at all, to make it a little bit easier for users to know, okay, I have something on Teams, I have something on email, on Outlook, I guess, or something on SharePoint, on OneDrive. How do I find all of this in these mixed things? But usually people are not strictly in one bubble, I would say, then they have maybe an iPhone and a different system already. Then they have maybe Slack and work instead of Teams and maybe a Google Mail instead of Microsoft. It's going to be, it makes it a bit more complicated. But I think it's good that the tools are getting there. I think it's great that the vendors themselves realize that this is a problem. I think it's great that the vendors themselves realize that this is a problem. I think it's not going to be solved so very quickly. No, I don't think so. Again, it's this is kind of and again, I get into a larger philosophical discussion about this in one of the medium posts. And it's something I do like talking about. And I actually did. Yeah, I mean, Alan Kay is a really interesting person to look up on that subject. He has a lot of very interesting views on how the commercialization piece of this does start to act as a barrier to the actual, to leveraging the full power of the technology. Because it does create other motivations that are actually counter to the user's benefit. that are actually counter to the user's benefit. And we do see that, I think, in some of the, the way some of the systems are, have been developed in terms of, you know, the desire to pull things back into clouds or into spaces where it's difficult to get the data out of. I think banks are a great example of that, particularly here in Canada. If you've ever tried to pull your data out of a bank to do a budget with it, it's actually quite difficult. So yeah, let's, let's give this thing a go. So let's see if this works now. So whenever I need to zoom in a little bit, or do I need to zoom in more? Okay. No, I just, I think it's over. Yeah, this is, yeah. That should be fine. Yeah. Okay. So you can see what's going on here. So whenever I read something now, what I do is I've now got an engine in here where, and you can see it. Okay. So I'll describe what's going on. So on the right-hand side, you got chat GDP. Inside of chat GDP, I've got a thing, a GDP that I've constructed called Mike's Personal Knowledge Vault. And it's a private GTP, so you can't mess with it unless you happen to have the link. But what this is doing is there's a series of intents that have been mapped in, and I'll show you that too here in a sec, where I basically, and I've got an api and an api hosted in digital ocean you see on the left hand side um and the api itself is constructed with it's got a number of rest it's a rest l endpoint written in python um and i'll show you some of the code with that too and i'm eventually going to make this public once i get it so it's not embarrassing to publish because right now right now it's kind of hacky um but it's got a couple of different bits to it so one is the thing that's grabbing the data and kind of communicates with chat gdp and it uses you know it's got a secured endpoint and stuff like that to do this um and then in behind the scenes there's also uh i'm using python celery asynchronous framework to set up the processing side of it so whenever you add something into this thing, so you see this document you had here, let's just grab that. Oops, come here you. Yeah. And we'll go over to our friendly graph database here. Comment, I just read it out because it's great from TWB. Never let great be the enemy of good enough mike good call it works you know for most part i do get you'll notice there'll be some errors and stuff that show up here but that's fine um so and probably what's happened here nature.com yeah okay What's happened here? Nature.com. Yeah, okay. Yeah. So the way this thing works is when you drop stuff into it, what it'll do is it'll go out and start to try and pull the document back. And then it'll start to decompose that down into this graph document structure. Sometimes like it looks like in this case, I hit a robot blocker. So this article, so let's try it with this one and we'll see the processing kick off. So what ends up happening is when I submit that, it grabs, there's an agent that goes out and grabs the initial article bit, pulls it in. When that's contributed to the graph database, you'll see the salary worker kicks off. And what this does is this starts to go through and it's the thing that's doing to the graph database, you'll see the Celery worker kicks off. And what this does is this starts to go through and it's the thing that's doing the Langchain decomposition of that thing down of the incoming HTML down into a graph document structure. In this case, I'm using a structure that is basically, let's pull it up so you can see it. Again, I'll get this one. So you're actually seeing the stream of output there from the linux host um i'll do this one because maybe it can't i think venture beat is typically not blocked so it comes in a bit better yeah there we go um so you can then see that the you know it comes in with the page and then what ends up happening is the page will then have these these little chunks that are generated or sorry chunks that are generated out of it and then there's questions associated with each of little chunks that are generated out of it, and then there's questions associated with each of the chunks that are generated in a summary. So that's kind of the general pattern that I'm using essentially to pull apart the documents so that you end up with a bigger document like this, you'd end up with something that has quite a number of different uh elements or pages to it and then each page has its you know the various questions and i'm doing this again this is part of the graph right pattern where you can um and i've got a thing in the code actually where one of the things you want to be able to do with this it's a little different than vector databases is if somebody asks a question you want to have multiple retrieval strategies so one retrieval strategy that's actually pretty straightforward is you know somebody asks a question, you want to have multiple retrieval strategies. So one retrieval strategy that's actually pretty straightforward is, you know, somebody types a question in. And one of the things that I'm going to do, and I'll show you the chat thing in a sec here, but is we basically go in. So here's the endpoint chat with sources. One of the things that's really important for the when we're working with clients on the side of this is you want to give back the sources when somebody asks a question. Same thing with the user that I'm working with. They need to have the sources come back so you know where the data are coming from. The retrieval strategy in this case, and again, it's fairly simple still, but what I'm doing is when the question comes in, you want to determine whether the question is specific or general. So we fire this thing over to the LLM to get an assessment of that. And this is this is something, you know, this is really cool stuff. Even if it's simple, it's still pretty cool. So then what I do is I just set that based on that, I can set the index to target either the child level. So the small chunks and I'm using the langchain semantic decomposition utility which um goes through and it actually is pretty nice at being able to break the document up based on semantically relevant uh components so it does a great job of kind of finding sentences and stuff so where this becomes also important is if you know when we're doing this for companies that are doing the environmental, taking large environmental documents and making them conversational. So if somebody, you know, environmental impact regulation or environmental impact assessment documents are a great example of this, where if you've got a big project that's going in, you know, like a mining project where there's a bunch of stakeholders that are going to be impacted by this, the stakeholders are typically non-technical. The people who write these reports are very, very technical. So you have this impedance mismatch between the level of the documentation that's being provided and the, you know, the nature of the stuff that people are able to understand. LLMs and again, graph databases are really good for this because if somebody asks a question, I live in the, you know, that's a pretty general question. So they level objects to get sort of a more general response back from, you know, the intent that the person's giving. If the person happens to be really into fishing and they happen to fish bull trout in a certain river, they ask a question very specific okay um i'm talking about the south saskatchewan river how's that how's the uh what's the potential for light contamination to affect the you know the the bull trout and then the thing will be able to target the child chunks the small chunks to actually get very specific information back about that thing the person has asked, which is pretty neat and extremely useful. So, and then the other thing that this does, it's quite impressive, is you can do stuff like this. So usually I do this on my phone. So I've got the app, you know, one of the advantages of having the if you see that. But I could make one of the bigger. Do you want to show it again? Sure. What are the advantages having a lot of reflections? There you go. Yeah. Of doing is this is a GDP is the as the interface is you can actually have the GDP resident on all the person's mobile devices. So, for example, if I want to capture something out of a book that I'm reading, or again, this is for more targeted of my user, because they're typically a little bit more analog. So I want to capture a page out of the Richard Dawkins book, The Blind Watchmaker. So stick this in here. Need to do. And I can go read the text on this page and store it as a note called my file folders of work and let's submit that to this does if life is, we will see all sorts of things happen. I found the GTP4 old model is really good at being able to read stuff. And it can do quite a good job of reading things out of page-based text like this. And out of things like handwriting and whatnot. Confirm. No, we got an error. Okay. Bummer. Let's see if it gets it this time. Sometimes it'll try a couple times and get it. Again, this is one I've been working on recently. So what it's trying to do is it's trying to upload the image to my server. And that's one area where the OpenAI API sometimes is a little messy. We can, while we have, maybe we answer a couple of questions. Sure. maybe we answer a couple of questions. Sure. There was one question from Shang who asks, how does the document get ingested into the graph? And if this is the Minumi processor or how, how would you explain the process of getting the documents in the graph? Yeah. How do you, yeah. So that basically the way that works and you'll see Oh, how do you? Yeah. So that basically the way that works and you'll see it go here once I get this. So what will happen here is this should go. I see what's happening, right? OK, let's try this. What is happening is the the I'm using the openAI front end, like the ChatGDP side of it, this bit. There's a couple of different ways. So if I'm coming in this side, I'm actually using the, as you can see here, the ChatGDP itself basically has read the text in that page. Then I can actually take that text and store it into the, I store that as a node in the database here. Okay. Then there's a post processor that gets kicked off, that goes through and actually takes the text out of this full text here, which you probably can't see all that well on the screen, but you can see all the stuff, all the text that kind of comes out of whatever the input is. Okay, yeah, yeah, yeah. And then what ends up happening is if I go to the code here and bring that up, we basically then get into this processing function down here. Let's see, where's the tasks processing, here we go. I'll make this bigger so you can see it. And this process is exactly the same regarding of the actual type of document coming in. So if we've got say a file coming in from tasks, here we go. If we've got a file coming in from a SharePoint library, if you've got a file coming in from this one here. So this is the SharePoint document processing task. It's exactly the same pattern, regardless of what's coming in. In this case, it's coming in from SharePoint. So I basically go in, I grab the document from the SharePoint side of it, the file name and then I pass this off I do basically commit here to get the to get the base node in and what I do is I generate a uid name source of it text all that kind of stuff and then it throws it over to a processing engine okay and basically that iterates through and generates the pages um and then the child documents by using the uh you'll you'll see it kind of in here somewhere where the um uh there's a the actual processing function which basically there's the hypothetical question processor which generates the questions uh and then stores those summary generator yeah this is another part of the chain uh and then stores those summary generator. Yeah, this is another part of the chain and then the actual. So, yeah, and for for chunking, what what are you using again? So I'm using the Langchain semantic chunker. Yeah. Yeah. So that thing basically allows me to go through and take the documents and breaks them down into semantically relevant chunks. So originally I was just doing naive chunking, which basically was I was saying, OK, just break these things up into 540 token bits. But I found that was kind of messy. Yeah. So or not very accurate, accurate i guess would be a better way of describing it um the semantic chunker does a better job and it does actually make a difference in terms of like if you're breaking things in the middle of a sentence um that sometimes then means you're you're answering part of the equation doesn't work as well as you might like it to cool so yeah uh where's my, I've got too many browsers running. Okay, there we go. Let's try this again. There we go. So now that that worked. So that now you can actually see what's going on is let's get this, let's get this guy here. Grab that node. So again, getting back to that question, there's your document process coming in off this note. Then what ends up having this, that triggers the Celery worker, which then you can see it processing the chunks. So it breaks the document down into chunks, processes the chunk, breaks the document down into chunks, and that keeps going in the background. So that's a background thread process. The nice thing about that then is, you know, those are long, if you're dealing with a document that's hundreds of pages, that can be a very long running operation. So I didn't want those operations to be blocking. So that's why I put that on the async background thread. It uses So that's why I put that on async background thread. It uses actually, yeah. So that then what ends up happening is you can continue on doing whatever the user can keep doing their thing. And then in behind the scenes, you get this really nice little graph pattern kind of coming out the other side of it with the page text here and then the chunks. And let's see if we look at the chunk here. You'll notice that that chunk is really nicely bounded around the sentence structure. Yeah, that's cool. And that OCR capability, because there was another question from chat from Veronique, that is built-in default capabilities from ChatGPT. This is literally just OpenAI's 4.0 model. Check this out. Add this as a note called fieldObserve. Okay, let's do this even better. Okay, read, if I get this to work, this is really cool. I did this this morning, I'm like, oh man, read this note, correct any spelling mistakes and add the notice thing called field observations. and add the notice thing called field observations. So this is revolutionary. So somebody now, you can actually set up things now. Can you show the note again in a bigger, so that it is- Yeah, I'll pull it up here. So people can see. Let's do this thing. Cause this blew my mind when I got this working, I was like, what? Yeah, so here's my really lovely handwriting dr mike is in office like it's in the office you look at this and it's like yeah there's no way this is gonna work um but the wild thing about this so imagine this like all these years we've been forcing people to oh you need to have a form and you got to put this thing in this place in this form and yet the form form has to be done in a certain way. And you have to write the, all this stuff goes out the window because what can you do? You just have the person go with their notebook. They write a bunch of stuff down, um, in whatever the, you know, whatever order they see fit. Cause these things like the, the order of these don't matter. As long as you've got some basic structure here, the LLM is now smart enough, confirm this, which I mean, look at this. There's the site name, the date, observations, the spelling's all corrected, and it even breaks it out. Like I can go in and post-process this now with Neo4j and go, okay, give me an observations node. Give me a landowner node. What does the landowner care about? This is revolutionary stuff. And I can go list. So get the node, display the text, and explain why the landowner is concerned what that should do is that should call back into my library it should get that note so you So you see it coming through here. If life is good, we'll get that note back. Wow. You can imagine now this for field people, for people who are working in the field and they're having to deal with all these horrible, you know, the paradigms that we've imposed on them over all these years. You don't have to do that anymore. So I go list. So, I mean, I got so many examples of this. It's pretty wild. Forms data capture thing. Oh, yeah. So then if you get into stuff like this, so let's ask it a question. So we'll jump a little bit here sideways. So as you know, I basically inside of this thing, I've got all the articles that I read and stuff. So there's, if I bounce this back out, almost 200,000, I'm almost at my aura limit. Yeah, well for the free one, but I mean, it's scalable to whatever, so it doesn't matter. But yeah, so I've almost got 200,000 nodes in there of basically documents. I think it's about 2000 documents that I've got stored in this thing at this juncture of stuff that I've read. Most of it's on AI, but there's other subjects in there as well. So if I fire this interaction off, what we should see is it should, in a second here, it's a little slow this morning. Oh, it's probably because I'm streaming. Yeah. So it knows it's a question. So it should chat sources. Here we go. So this should explain why we, you know, we want to use graph databases for this stuff. And of course, this is now only taking from the documents you have stored in your. Yeah, here come the sources now. Oh, okay. I see. Yeah. Yep. So then we can should be able to go. Okay. Using source three, let's go provide a topic to research further on this subject graph and LLM. Source three is Langley, blah, blah, blah, that one. Actually, let's try two. Two might be a little better, but it doesn't really matter. So this is where this also gets really interesting is, you know, now I've got this base of knowledge out there that's got a bunch of stuff in it. So I can then start having these conversations and asking a question. So when I'm writing articles now, this is how I'm actually doing it. So now what it's done is it's actually using this source. So in behind the scenes, it's actually got all the texts from that article, kind of captured in context. So then you can get this down. So research focus, right. And then what I'm going to try and do here. Okay. So now generate research on knowledge graphs and private data sets with LLIMES. Now, if this intent works, what this should do is, so I've got inside of this thing, the only other thing I did um was uh good um what i did is i set up i used a land graph and this is really mind-blowing this this as well so land graph for folks i don't know how familiar folks are with it there's a couple of these things out there the next the next phase of this evolution of this this domain is this idea of autonomous ai agents and back when we were doing the mean ohm thing we had built that entire system the data link as it was called was built on the premise of uh what they call multi-agent system design and this is the idea behind that is you have a whole bunch of agents that are acting in some kind of a pattern uh oh um oh bad request for your own huh oh it's on the tab release side. Okay. So that's an error outside of my control. Good. Anyway, it'll still complete, but that's all right. Yeah, the agents basically then can go out. You set up a bunch of agents to accomplish an objective, and they go do their thing. And then they communicate back. And then you use the graph basically as an anchor for the, for managing the agent behavior and the agent interactions. So there's a couple of them out there now, but the problem with the agents in the past was the, you had to use deterministic decision logic to try and control how they would respond to situations like the one that just came up on the left there. So if something goes wrong, if you have some bad data or if the data needs to have more subtle handling, all these sorts of things were pretty complicated to do. LandGraph, Autogen is another one. I think Lama Index has an agent paradigm now. If you inject AI into these things, you now have a really nice, if you will, probabilistic method for handling fairly complicated situations so that the agents can actually make their own decisions about stuff. And it's pretty powerful. So this is again, where the, you know, robotic process automation and, and these, all these sorts of workflows you see in the enterprise will become revolutionized by this. So a good example is health and safety, like when we were doing health and safety stuff for capturing hazard IDs. So there's this whole pattern around hazard identification, near misses and incidents. And the handling of these types of things, you know, go up exponentially based on the level of urgency associated with it. So a hazard is, you can identify hazards like this is a potential problem, needs to be reported, but it can be mitigated and your miss is something where something has happened and it almost caused an injury or a problem but it didn't you know luckily nothing or fortunately nothing happened and then it you know a full incident is obviously something bad happened um for example when we're out wandering around in the woods last week one of the things you want to know about is, hey, are there any bears in the area? So if you're out there and you report, you know, you've got your phone here and you say, oh, there's some bear scat. Take a picture of it. You drop it into the thing. There's a bear in the area. We're not in any immediate danger, but other people in the area should be aware of this. You fire that into one of these systems. You know, it can identify it can probably identify the type of bear it is based on looking at the type of scat or the track or whatever it is. It can then, the agent could then go out and make a decision about, okay, which people are in the region, which one, who do I need to notify? Yeah. And off it goes, right? In this case, what I'm doing is something a little simpler where I've got a curation agent that basically can go or that I got too many windows open as usual. Yeah, here we go. So what the thing will do is when you fire it off, it'll it'll go out, these agents will go and I've got a search agent, curation agent, a writer agent, a critique agent and design agent. And then these are put in these are nodes in a workflow and then they're joined together with edges so it's a graph structure so one of the reasons i picked land graph for this is because i thought oh great i can actually then take this and um actually project the the thing that the the workflow is doing the graph back into neo4j um so that research that kind of came out the other side of this, and it's pulling from two sources right now, as I mentioned, it's pulling from inside. So pulling in from the private knowledge vault, or personal knowledge vault, and it's also searching to an external source, Tavoli, or you can basically any source could be a corporate source, could be whatever. And then its objective is to generate a cohesive output based on curating the information that it finds by having a writer agent that basically is prompted to take the sources that the curator gives it. So it picks five of the things that come out of the search. The search grabs 10 from each source. that come out of the search. The search grabs 10 from each source. The curator basically then renders that down, basically picking the top five items that are most in its opinion, relevant to the topic at hand. The writer then writes the article up. In this case, I'm coaching it to be a newspaper writer, but you could also say scientific journal, whatever. Or you could have a take on some other persona and you have an editor that then is challenging the curator, like the writer, hey, have you actually done a good job at this? And then it publishes it out into a form that then is in the style of a Wikipedia like article. So this is based on, there was a publication that actually described this process. So the neat thing about this, as I'd mentioned, is it comes out as a graph structure. So here's the graph here and here's the actual output here, which will pull up into the into the action and pull it back another way, too. But so the first step is it does a search on those things that I mentioned. Now, the really cool thing with this is you can actually see, and I haven't got the surface in the interactions yet, but we know exactly where this stuff came from. So the thing that comes out the other side, this article that it produces, comes from these external sources that were pulled in and from these pages of these internal sources. And you can trace that back to the exact document that it came from. So when you're talking about an enterprise and you're talking about, you know, important critical knowledge that people are making decisions on, you have to have traceability. And this is something that really the graph makes possible because I can track exactly what the agents are doing, the steps that they took. I can tell, you know, the different actions that were, that happened and whatnot. And I know exactly which sources and which chunks even that this, this output was derived from. And then you can create feedback loops with this, where you can then push this back into the system to improve the responses and everything else. So it, you know, the potential here, again, is to provide tremendous traceability and grounding of the of the responses that the LLM is giving. So questions about any of that, I guess. Yeah, that's really cool. We had a bunch of questions. So I can read a couple of these and then we can discuss some more. I think one question that maybe particularly to this, what you showed here last is probably interesting, came in from Jumbo and he asks, how do you delete or update a document from your graph? So if you have something in your database already and how do you update that? Is that currently not a thing? It's on the roadmap. Yes, you're absolutely right. So one of the, a couple of things I really wanna do is detecting duplicates and versioning, that're absolutely right. So one of the couple of things I really want to do is, is detecting duplicates, and versioning that sort of thing. So if something comes in twice, or even entity recognition. So again, this is where this, this is again, where the graph is quite useful. Because all of the utilities for doing that sort of thing are built into Neo4j. Similarity detection, you know, the embedding side of it, all that stuff into Neo4j, similarity detection, the embedding side of it, all that stuff makes it possible for you to basically go, hey, does this thing already exist? Is it similar to the other things? You can then make decisions about, do I actually bring it in again and link it as being similar or do I just say, no, it's already there and inform the user? There's a bunch of different patterns that could come into play. My thought is that even with the user that I'm kind of working with, they have a lot of duplicates, different versions of things. So ideally what would happen is you keep all the versions and then you'd use a method to kind of triage the things like, oh, you've got five versions of this. This looks like the newest one based on these criteria. What do you want to do with the other versions? You know, this sort of, again, getting it into the interaction model so that that information is actually showing up or these actions can then be taken in a conversational fashion inside of here rather than doing it in a standard UX. Yeah. Cool. A couple of, there were a couple other questions on entity resolution and duplicates. So I think that's, the more data you have, the more of a problem this is going to be. But we can, yeah, if this is a future development, then maybe we can talk about this later. There was a question on the choice of LLMs. So I know you have probably worked with a couple. So I don't know if you wanna give a recommendation. I mean, what are you using? What are good LLMs for, you know, text to cipher is something that comes up. Stuff like that. I mean, is there, is there a, do you have some recommendations or you have some kind of, you know, yeah, experience from the past from where you, you decided to deploy this with GPT, um, you know, and anything you can share here? Yeah. I mean, this is one of the great things that's happened. So, you know, I do a lot of presentations on this subject. And, you know, there's some comparisons that are sometimes made with this technology to other sorts of technology paradigm shifts that have happened. One of the great things in the case of this one compared to other situations, I guess the one that's often used is the Manhattan Project. The nice thing about this particular case, though, is that while there are certainly risks and challenges, and we won't get into that in this conversation, but the great thing is that a lot of the stuff, the work has been open sourced and is available in that way. So there are a lot of models out there and they're proliferating rapidly. In the context of the work I do at R-Curve, what we do is we're agnostic in terms of specific platforms. We will use the best available technology for the thing the client wants to accomplish. In this case, I chose GDP 4.0 and OpenAI's kind of method. Microsoft does have quite a good story on this front as well. I think for most enterprises who are, if they happen to live in that world, that's a really great option. AWS has got some stuff they're working on. Of course, Google's got some great technology. We have done a number, or Databricks is a huge one. they've got a nice engine for the hosting LLMs and stuff. So there are a number out there. And again, I think it comes down to picking the one that works best for the context that you're operating in. And, and then kind of keeping track of it. Like that's one of the hard things right now is it's such an arms race in terms of the capabilities of these things. With respect to the text cipher thing. I think the work that Tomas Britannic is doing, if you look up his stuff, he's got some really great, I think the text decipher model he's been fine tuning looks really promising and it's really evolved quite a bit. So that would be the one I'd check out there. And of course, then I think Neo4j has got a really nice story around the Olami side of the equation, which is also quite a nice model because it's pretty lightweight. So another version I was thinking of doing of this thing was actually using a local LLMm which i think you can do with olama um because it's a bit lighter than you know then you're not having to use the full chat gdp thing or gdp 4o thing and token you know paying for tokens and all that kind of stuff um yeah so um maybe maybe one more question, because we are approaching the end of the hour already. Time flies. So you have some local data and you have something from the internet. How do you avoid kind of, or is it maybe if you wanted to avoid, maybe in your case, you don't mind so much, but how would you say walling off or fencing off your personal knowledge vault that it actually becomes a vault so that an external system cannot access that. And how do you fence that off? How can basically, in this case, it looks like GPT could access your local data as well as look in the internet. And how is that from your point of view? Is that a problem? Is that something you, how do you circumvent that private data that should not be uploaded to the internet becomes in the internet? Yeah, it's kind of part of what I'm trying to test out with this, actually. That's quite a good question because it is a big concern. So certainly inside of this thing, one of the things I did do, and you'll notice it here, is I'm paying for the, you know, I'm using the paid-for version of opening eyes in Gizmo and also paying for their API. Now, according to their documentation, when you do that, they are not supposed to be using the information to train their models. So I'm relying on that guarantee that that is indeed the case. The other thing I'm doing, of course, as well as the, you know, all of the data itself reside inside of the graph. So this is isolated and you can kind of see as I've been going through these interactions, this thing will ask me for permission to communicate with the link.benome.com. And it's passing parameters back and forth. Now, of course, when the data comes back into this side of the fence and to JackGDP itself, it is inside of this environment. So I am trusting the fact that these guys are not harvesting this stuff that I'm sending in here. With respect to the internet side of it, this thing is a closed system. I mean, you can go to the URL, but it's using ssl and the usual sorts of a lot of stuff so unless you have an authentication uh password blah blah to get into it you won't be able to hopefully get into my my knowledge vault um assuming i've done that correctly which yeah um yeah so that's kind of the way it's set up um Yeah, so that's kind of the way it's set up. I guess you could. I mean, you're using a public LLM, but you could as well decide, I want to keep it completely private, locked up, and I'm not going to let it talk to an outside LLM, but I have my own. Absolutely. And then the other thing, too, is, again, the nice thing with the Neo4j side of it, of course, is there is security. You can apply security down to the node and relationship level inside of this environment, and it can be wired into the various auth providers. So, you know, one of the things I can do with this, I haven't done it yet, but it is also on my roadmap, is to, when I've got the user user node here is to then basically and you say i've got another user in here pete which you may remember pete from other neo4j lectures in the past um yeah the idea here is that these are all these nodes are all connected back to me um and there's and the other thing that's on here is there's a user action node. So I do have contributor. So I have basic roles that I've set up inside of this thing. And then there's also this user action node that then tracks the actions that I think it's a bit messy there. But the various actions that I've taken, you know, Mike has added stuff. And these are the sorts of actions they've taken on the content as the system is running. So you have traceability on that. Eventually what'll happen is, you know, once we get into sort of the multi-tenant aspect of this, I can have a public and a private side. So, okay, this information I want to make public, this information I want to keep to myself, I want to share it with this person, and so on. And again, graph enables you to do that. It's a really nice way of being able to make sections of the graph available by having just a simple link through a node that's public. Anything that's kind of hooked up to that node then becomes available to open other things, maybe connect to certain people, this kind of an idea. Super. That was a great demo. Thank you to you, Mike. I cannot just repeat. Bernd, this is a great job. Jaffa has a wonderful demo. So yeah, absolutely. It was really cool, really, really interesting. Thank you very much, Mike. The GitHub repo is kind of available. I put that on in the chat and in the video description. Yeah, so that bit basically, so I have published the processing bit of it here. Where did it go? Repositories and processor, this one. So this is, it's a bit older now, but this basically gives you the backend pattern. I don't think the semantic chunkers in this one, but it gives you the salary structure and it gives you the decomp structure for, you know, basically the stuff that the taking an input and then breaking it down into a simple null graph document kind of structure is what this does. I am planning to open source the rest of the code at some point here, but again, I want to get it. It's kind of like performing, when I perform with music and stuff, I want to practice it enough so that it's not wrong sounding, but this information that's here is essentially, music and stuff, I want to practice it enough so that it's not wrong sounding. Yeah. But this information that's here is essentially this is still relevant in terms of the way the structure like I haven't changed the structure very much since I first wrote this bit a little while ago. So this is all still basically the same. Yeah. And Peter writes, he's officially immortalized in Mike Morley's world. Yeah. Yeah, that's true. Cool. If people would like to follow you, where's the best place to do that, Mike? I publish a bunch of stuff on LinkedIn. So our curve is, we're pretty active in terms of the company really has a great ethos around, you know, putting useful knowledge out there. So a number of my team members, we've got a really great series coming up on from my colleague, Farhad is actually going to be publishing a web series on a bunch of LLMs and AI related subjects. So all of that LinkedIn is probably the best place to look. I do have a series also running on the medium side of things, and those tend to be very long, but, you know, I try and do a lot of research and on the subjects that I post up there and try and put quite a bit of thought into those things. So, you know, I'm not a big person in terms of wanting to publish snippets i like kind of getting in depth with stuff so i encourage you to check that out yeah if you have questions um yeah you can i think linkedin's probably the best to try and get me in terms of answering stuff and i try and answer as many as i can cool i'll i'll share the link to your personal LinkedIn also in the video description. So if you would like to follow Mike, then that's best to do that. For now, I think we are at the end of today's episode. I have one more thing I'd like to mention on the 12th. So this Friday, we have a special Euro Championship soccer football championship special exploring World Cup history with Neo4j. So my colleague Pierre looks at the FIFA data set from almost from the beginning of the World Cups, I think. And we have put that in NeoDash to visualize the data. And also we use NeoConverse to do a virtual chat interaction with a chat assistant basically on that data set. So that's something a little bit similar to this, what we showed today, more public data, obviously, but similar concepts. So if you're interested in that, let us know and obviously follow the channel to tune in on Friday. Notes 2024 registration page is open. So I'll encourage you all to register for that. That's going to be a fun and a very interesting event later in November. And yeah, if you want to know more about Neo4j, know more about the community, more about the interesting stuff to use and to work with graphs, then look at the Graph Academy for interesting courses, free online courses. Check out Aura Free, that same tool that Mike also uses for his vault to play around with a graph database with no installation needed, or just hang out in the community forum or the Discord server and talk to people who are as much a fan of graphs as we are. Yeah. And with that, thank you all for watching. Thank you very much, Mike, again, for the great demo, the cool showcase, and the amazing demo, really. I can only tell this has been really, really cool stuff. Thanks. Looks like I got a couple bugs to fix, but that just means I'm on the right track, I guess. Yeah. Sure. Yeah, with that, thank you all for watching. Thank you for participating in a lively discussion today. That was fun as well. And I hope to see you soon at another live stream or in the future somewhere else. Take care, everybody, and see you soon. Thank you very much for hosting me and thanks to everybody. | Neo4j Live: Personal Knowledge Vault with Neo4j GraphRAG | 4,001 | Neo4j | 20240710 | Explore how to build a Personal Knowledge Vault using Neo4j's GraphRAG pattern. Discover how to transform website URLs into structured graph documents, enabling advanced Retrieval Augmented Generation (RAG). This session will demonstrate the integration of Neo4j, FASTapi, Celery, and RabbitMQ to create an efficient, scalable knowledge management system.
Guest: Mike Morley
Github Repo: https://github.com/clockworknowledge/menome_processor
Mike on Medium: https://clockworknowledge.medium.com/
Mike Morley on LinkedIn: https://www.linkedin.com/in/mikemorley/
0:00 - Introduction and Greetings
1:00 - Overview of Personal Knowledge Vault
1:53 - Mike Murly Introduction and Role
3:33 - Challenges in Managing Complex Data
5:56 - Schema Flexibility in Neo4j
7:29 - Demo Introduction and Community Interaction
13:04 - Combining LLMs and Graph Databases
17:23 - Capturing and Processing Documents
23:42 - Using Celery for Background Processing
30:42 - Demonstration of AI-Powered Data Entry
57:13 - Future Development and Enhancements
59:02 - Community Questions and Closing Remarks
1:03:17 - Additional Resources and Upcoming Events
#neo4j #graphdatabase #graphrag #rag #genai #llm #knowledgegraph #knowledge | 2024-09-03T09:45:37.933072 |
https://www.youtube.com/watch?v=7NgPGh0E0XE | All right, so hi, my name is Mejdi. I'm a first year master's student at Mila doing part of the computer science program. My research focuses around machine learning, like more specifically generative modeling and geometric deep learning around like ligand protein binding, molecular dynamics, sampling, equilibrium states of molecules and all that kind of stuff. But prior to coming to Mila, I worked in industry in a company called Advi for four years, doing machine learning on chemistry, more specifically like property prediction and for Admi, and then a little bit of generative modeling. So I'm the point of this talk. I've been had the absolute honor to be invited by Nucleate to give this talk on an intro to AI or machine learning in drug discovery. And I'm excited to give it. And I would like to thank all the people who have helped me refine my slides, first and foremost. But let's get right into it. I'm going to be sharing my slides. Hopefully, everything's all good. All right. So this is an intro to AI and drug discovery. This is a talk I'm getting very clear. Hopefully everything's all good. All right. So this is an intro to AI and drug discovery. This is a talk I'm getting very clear. So let's jump right into it. So I'll be starting off. The frame of the talk will be, I'll give a brief background on drug discovery, like the entire pipeline and how ML might be able to help out in certain areas and then i'll dive into a background in two popular areas of ml for molecules and then briefly talk about open problems and challenges and i'll kind of conclude the entire talk. All right, so the chemical space, it is massive, it is huge. The number of molecules and number of compounds that can be created or possibly created is I believe 10 to the 60 compounds. And so I think I'd like to think about why it's called drug discovery is because there's this massive pool, this massive space of compounds that we can discover or sample from rather than, you know, I think some people call it drug design, but I think the more common term is like drug discovery right and so we're discovering these compounds that might become potential candidates to be effective drugs towards certain kind certain kinds of diseases and uh the compound to drug uh journey is usually follows this pipeline. So the drug pipeline starts off with this first component, which is drug discovery. And there's about four components into this. And so it starts off with target discovery, which and then identification, hit to lead, and lead optimization. We will dissect a little bit each of these categories in the drug discovery pipeline. But usually if we have a compound that has made it through these four stages and we have a drug candidate at the end of the lead optimization phase, this will then progress forward to another set of trials called the clinical trials, which is broken up to three phases. We won't be really talking about clinical trials in this talk. This is sort of out of scope for this talk. We're mostly focusing on drug discovery. But you just have to think of the clinical trials. We go from essentially wet labs and virtual labs and all this kind of stuff to testing on uh bigger and bigger batches of people trying to see if this drug uh will not have any harmful effects if it does the thing that it was designed to do and documenting all this very very rigorously and if it we have a drug that actually makes it all the way through all this, it gets submitted for FDA review and possibly approved to be sold in the market. And so, yeah, it's a long, arduous journey. I think it's quoted to be around 12 to 15 years for this, for the drug discovery pipeline, the drug discovery, sorry, like the whole drug pipeline. And really for the clinical trials and FDA review and approval, you can't really speed up that process. That process is like, it will take its time. It needs to take its time. There's like a set strict rules for it. But the area where we can save time and accelerate is this drug discovery process. If we can find drug candidates much faster thanks to computational tools and faster or higher throughput screens and we'll talk all about this in the talk we can we can hopefully find drugs much faster to to cure certain diseases and I believe so we're going to kind of go down and we're going to dissect each of these categories, but I think it's helpful to think of this whole pipeline as essentially being a funnel where in the early stages, you might have a lot of candidates. And as we go through each and every stage, you're filtering out ill-suited candidates and only keeping the ones that meet certain criteria. So just keep that image of a funnel in mind while we are talking about each of these categories. So as I said before, so drug discovery starts with a target discovery. Target discovery is essentially saying that what is the target that I need to be targeting, right? Whether it's a protein, let's say for simplicity, we'll refer to a protein, a protein that we need to target that would have some downstream effect on some disease, right? some downstream effect on some disease, right? So trying to find that protein that is like a long and painstakingly like heavy research process. But let's say if you found a protein that you know has some relation to a disease, is associated with a disease, we move on to the identification phase where here we are trying to essentially throw a bunch of molecules onto the target and seeing which one sticks. We're just trying to filter from that huge chemical space that we know of, trying to reduce the size of it to see just passable candidates. And once we have some passable candidates, we go further on to reduce that volume to find potential leads. So this goes from the hit to lead phase. And then from the hit to lead phase, let's say I think this goes from about 1,000 to 10,000 candidates down to like about a hundred candidates. And then the hit to lead phase will filter out those hundred candidates down to about like 10, like a dozen or so candidates, which then we go through a lead optimization phase where a program is started and that revolves around these molecules trying to optimize their properties. And we're gonna actually even go even deeper further to each of these categories. And then this is the funnel that I was talking about earlier. Just keep this in mind. We're going from thousands and thousands of compounds in the hit identification phase to hundreds of compounds in hit-to-lead to about a dozen in lead optimization. And out of the lead optimization, maybe one or three candidates will come out of it. They'll go through the clinical trials. Okay, so starting off with target discovery, I've mentioned how, so part of target discovery kind of boils down to two things two things first is identifying uh what is the relevant entity to disease of interest this doesn't necessarily have to be protein but for the sake of the rest of this talk we'll be referring to proteins but finding these targets and their associations with the disease uh requires a lot, a lot of research. There's a lot of data mining, like data mining literature and trying to study like omic databases, other reports from public research, experiments of various kinds. You're trying to put this all together in order to determine that if I target this protein, I will have some, hopefully, downstream effect, a likely downstream effect on this disease. That's what we're looking for. But the other criteria to the target is that it needs to be druggable. And what do I mean by druggable? So this is kind of like a sort of inaccurate analogy, but I think it kind of gets the point across. If you think of a lock and key mechanism, and if you think of the lock, what's important for a protein, for example, is that it has these binding pockets, as you see here on the the screen where this compound is sort of resting these pockets where compounds could nestle themselves in if you think of this pocket as like a lock and you think of the compound itself as a key you're trying to design or discover a key that best fits this lock in order to do the certain thing that you want to do, for example, open the door. But let's say that, but how can you design a key to open a door if that door doesn't even have a lock, right? So essentially what I'm saying is that if there is no suitable pockets for a compound to bind itself to the protein, then it is unable to drug the protein. It's unable to elicit a response or be bioactive against that protein. So these are the kind of two, I think, more important criterions essentially is that, is it associated with a disease of interest and can we drug it? And there's been a a lot of there's been a growing field of ml methods and even computational methods um around this two ones recently came out i think in the last year append and grasp um but for example pinned i think they just calculate a lot of uh different chemical physical chemical and uh features of the protein. They pass it in through some neural networks, and this neural network will output a score of how ligninable or how druggable this protein is. And GRASP does the same thing, but it uses things called graph attention networks. And I think there's a growing interest in this field. And you can also have ML methods for the other use case of disease target associations, where you apply ML on knowledge graphs, right? You have, you've accumulated all this literature, you form this graph of relations, and then you try to extract new relations, new disease target associations, using either graph methods or ML on graphs, right? So now we found our target. We are at a great like a initial start. We're at the start of the race, right? And so we're going to move on to hit identification. Now, what does it mean to be a hit? If you want to think of the target as being like a like a like a dartboard. And if you want to think of uh compounds as being the darts themselves and if i grab a handful of darts and i throw all of them on the dartboard uh the ones that stick to the dartboard it doesn't matter where it sticks it could be dead bullseye or it could be at the very edge of the dartboard. As long as it hits the board and stays there, these are hits. Essentially, initial compounds that have expressed some form of activity towards a target. And normally, the way these are done are, I think, mainly two methods. You can either do high-throughput screens, which are these large automated wet lab experiments and Essentially what they do is they they Essentially find they throw a bunch of compounds at a target and a solution and see which ones stick and then you can do this via virtual screens for use computational tools instead and you can either lightly dock these molecules against the protein see if it docks predict these properties using ml methods like predict certain bioactivities and or generate or use generative models to generate hits, right? So generate these molecules that might be potential candidates. So we've identified about a thousand compounds that might be possible hits to a protein of choice. And we move on to refining that candidate pool even further and like filtering it down. So we move on to a hit to lead phase where now we have a defined threshold of activity that we would like our compounds to be. Maybe like we'll take the top 100 compounds that have the best activity, but also we want to balance out other profiles. And so bioactivity is essentially how active the compound is towards the protein target. There's other things like ADME properties and we'll have a slide after this that will kind of explain what ADME properties are and selectivity which is essentially is my compound being selective towards this protein target? Is it I want to make sure you know I don't want this compound just binding to any other target because that might have some harmful downstream effects. Am I selectively targeting this protein? And so, and you have to think about like the, this whole process is like a spectrum essentially where some compounds are going to be active, but some compounds are going to be even more active and have better profile, like admin profiles than other compounds. So we're trying to get the best of the best out of this candidate pool. Think of it as if your favorite sports for tryouts. If there's a lot of people, they're going to have multiple rounds of tryouts in order to filter out, to filter for the best candidates or the best possible candidates for the team, right? And we're doing the very same thing for compounds. Example methods of where ML can be involved is like protein ligand docking. An example method is like DIPDOK. There are several other methods now since then that have also do protein ligand docking. Predicting bioactivity to a docked ligand and ADME property prediction. So this is kind of like a main line of work that I did during my work at ADME, where essentially we're trying to accurately predict these different ADME properties for molecules. And so what is ADME? I keep mentioning it, but I haven't defined it. And essentially, it governs four different things, but some people also refer to ADMET, which has a fifth component. But it's essentially is how well does this compound get absorbed? How well does it get distributed? Like, does it get distributed to where I want it to go? Does it get metabolized? How is it broken down? Is it broken down in the liver? Is it broken down in the gut? It depends on where you want your drug to be broken down, like where you're targeting. And then how is it extruded from the body? And then there's a fifth component called toxicity, which is essentially, is my compound toxic where it will cause some harmful side effects, right? And all compounds are in a sense toxic. It's just how toxic are they? If I can take a small dose and I'm okay, that's great. But if I can take a small dose and it leads a huge spike and a huge response in my body that causes ill effects then it's a pretty toxic molecule right um it's sort of like the analogy like all things are edible but how many times is it edible right like i can eat a rock but i can only drop once right so um these all these properties are very very very important for the drug discovery, for like a drug and how well it will behave in certain environments, right? So, we have now filtered out, we have now about a dozen or so compounds, and we're in the lead optimization phase. And usually, so what this has in this phase, you have had strong hits that have high potential to be effective drugs. We are now going to dedicate resources to these dozen candidates and we're going to develop a series of program around these candidates. program around these cannabis essentially that um we're going to more resources more focus is going to be addressed to these and you're going to design analogs of this lead where essentially you iteratively refine the molecule in order to increase its bioactivity um so essentially going from like 10 micromolars of activity down to one micromolars of activity, like closer to one micromolars, meaning like if you go, if you decrease that, the better it is, the more tightly it bounds to the target. I'm going to optimize towards my desired ADME profiles, like if I need it to be permeable, like a certain permeability, certain solubility to be cleared in a certain way. And you're going to try to increase the selectivity and reduce off target attractions. We don't want side effects. We want to minimize the side effects of this drug. And so you can see on the right, we have the starting compound. And if you take note, like you can see that there's minor changes happening to the molecule. Some of these are more major, but some of these are minor. And these changes, however small or however big they are, could either lead to the compound still being active or even being more active than its original starting state. Or even being inactive, like a small change to one of the substituents here. Like we added, I think, a carbon here and removed the chlorine, led the compound to being inactive, like a small change to one of the substituents here, like we added, I think, a carbon here and removed the chlorine, led the compound to being inactive. So you can see how painstaking it is to find, to optimize these compounds. Minor changes could really throw you off the other way, right? You could have had a really active molecule and a minor change nullified all that. So it is a painstaking process and having computational tools that can help you iterate faster or even suggest you ideas to like what to add or explain why this change that you did, that the medicinal chemist has done, made it inactive or made it more active are all like suitable areas of like research or like where ML might be able to help. Okay, so I think I've sort of implied a little bit of why the need for ML or how it might be able to influence the drug discovery pipeline. But I would like to summarize these points in the sense that we have this design-make-test-analyze cycle. And essentially where ML can help or even just computational methods in general is that it helps accelerate the design cycle of the drug. It's this whole idea of like in silico versus in vitro and in vivo. Now, what do these three terms mean? There's essentially the in silico means on the computer, in vitro means in the test tube, and in vivo means like animal testing. And so in vitro will maybe take a week or so, or a couple of days. In vivo might take anywhere between three weeks, essentially. And then in Silico, depending on the resources, the computation that you have and how expensive the method is, it can happen anywhere between seconds to maybe a couple hours. But essentially, it is much, much faster than the other two methods. And so this whole idea of can I design faster, can I iterate faster and get through these ideas much quicker, right? And then you can have things like generative models that will aid medicinal chemists in generating new ideas. Sometimes they just kind of need to be given like 10,000 analogs of molecules. And then you profile all these molecules with other computational tools and then filter down to another 20 analogs of that lead compound, right? Or 20 like minor change, minorly changed molecules for that same molecule. And, you know, test those out to see if they are even, they push in the right direction, right, for a better drug candidate. And then we get to reduce use of costly resources, such as materials and time, right? So a lot of times, like in vitro labs, for example, you have this well plate, it would be uh it might have like i don't know let's say like 96 tubes that you could fit on there um and let's say you're running an experiment for that week uh and you've filled up 48 of uh spots in that well plate well once you use the well plate it's done like you know you've lost out on essentially 48 potential other things right so um if let's say i have a model that i have strong confidence and that predicts really well and um and um uh predicts my property very, very well. Well, maybe for my high confidence stuff, I can just trust in the model's predictions. And then what I'll do is I'll fill the well plate with lower competence tubes. I think I'm gonna change my example real quick. Imagine if you had a well plate and then you filled up a little bit of the other well plate. Well, now you've kind of like wasted a whole well plate. So let me remove the compounds that I have strong competence in with my computational tools, and then put all my lower confidence predictions into that one well plate and then save on a resource. Right? So, and you get to save on time. So these are all like, I think there are many more benefits, but these are things that the three key ones that I can, that come to mind when I'm thinking about how ML can influence the drug discovery pipeline and then like i would like to remind uh like ml isn't magic and it's not going to be the end all be all it's not going to replace at least i don't well maybe not in a short time but maybe in like 100 years or 200 years but not in the near time future anytime any time that I'm alive. I don't think ML is going to completely replace the whole drug discovery pipeline. Right. And so I think it's a tool like any other computational tool and it's used in tandem with these tools. And here I'll point to an example. This is the CASH challenge, the Critical Assessment of Computational Head Finding Experiments. And recently there is a challenge um uh in this challenge um a lot of people they have like a list of all the methods or all the types of tools that people use for each of their hit finding experiments and uh a growing number of these tools, the ones I have highlighted in red, have machine learning involved in it. And in a lot of these cases, you'll see that ML is used in tandem with other computational tools. And it's fascinating. These things working together have helped potentially find a really good hit for a certain target of choice. Whatever, I forgot what the target was for this challenge. There was actually an ex-post, someone that kind of wrote out some of these pipelines for a lot of these methods, but I couldn't find it. So here I am just highlighting them from the webpage. But it's fascinating. It's like really cool. Like we should think of ML as being something that aids the scientists and not replaces them. So I've kind of gone over the whole drug discovery or parts of the drug discovery pipeline and covered a little bit of how ML might be useful in these pipelines. But I've mentioned things like generative models and graphs, neural networks, and all that kind of that sort. And so I kind of want to dissect these a little bit more and dive a little bit deeper into these popular ML methods on molecules. So the first topic will be governing something called geometric deep learning works so well. And it sort of boils down to like, we have different geometries that are represented. Like, for example, images can be represented as grids. Molecules can be represented as graphs. You know, language, like a sentence could be a sequence of nodes. And all these things have some symmetry, or some regularities in the data that we can exploit and take advantage of. And it helps the model generalize better or learn better for these cases. And so I'll be kind of diving a little bit deeper into that and so the first thing I want to like cover is graph node networks I think this is a very natural representation for at least the topology of the molecule so we see that we have here this is penicillin and you can see that they have your atoms that are connected via these bonds and it forms this topological graph structure of the molecule, right? So we can easily form a graph of nodes and edges where these nodes are characterized by certain properties of these atoms and these edges characterize like what kind of bond is it? Is it a single bond, double bond, triple bond? Is it in a ring? All that kind of stuff. And we can then, given this graph-like structure, use things called graph node networks, which help preserve certain things. Like, for example, in graph node networks, if I were to reorder the nodes, I want my output of the network to also match that reordering. And so this is called permutation equivariance in the graph known networks. But and we'll kind of dive a little bit deeper and subsequent slides about what is equivariance and invariance. But I'm just kind of just want to briefly expose you to the topic. And so you can pipe these into the graph neural networks, which leverage this innate and natural representation of the molecules in order to do certain things like property prediction or generation and stuff of that sort. And we'll go even further. We're not only going to be considering the topology of the molecule, but also its geometric representation, like the positions of the atoms, and be using these things together, right, for these neural networks. And so I just kind of want to point out that like before this before graph known networks became more popular recently in the i think since 2016 2017 uh before uh and still now to this day that's a very commonly used method is they used to pre-compute chemical like physical chemical properties of the molecules using computational like tools and methods and you would get essentially like this vector or this tabular data, essentially, of all your molecules as rows and all these different properties. And then you'd feed these into random forests or other methods or even a deep neural network. But really, random forests and XGBoost really did a... were actually really solid baselines for all of these. So just because deep learning is becoming the rage shouldn't mean you discount using simpler methods to at least baseline to see like, why do you need a golden hammer? If just a regular hammer works just as well or even better sometimes in some cases, right? Especially in low data regimes where non-net networks might struggle a bit in certain cases. But I've talked about graph neural networks and I kind of want to now widen it. So that was kind of like a narrow scope. I'm going to widen the view a little bit to talk about geometric deep learning and why is it so nice. So you can see here on the right in the picture the gnns is part of this group you also have other things that you might be more familiar with like cnn's and transformers and deep sets and rnms where it's this kind of like zoo of uh known networks that all kind of try to uh they're kind of framed in a way where uh it's framed it frames deep learning through the perspective of symmetry and variance and i think this is a popular uh perspective in physics and geometry so this is kind of where they take their inspiration from when when they apply this perspective to the deep learning But they have shown why certain design choices and models have performed so well. They're trying to inform future design choices to follow suit. Why should we follow these design choices? Because it generalizes better and reduces the complexity of the learning task. We'll dissect it a little bit more. But let's talk about let's look at an example. So let's try to discern what is invariance and what is equivariance. So let's say you have an image of a cat and let's say you apply some transformation S on it. This transformation essentially is translating this cat from the bottom right corner to the center top of the of that window right and so when let's say you have a classifier that's trying to identify what this thing is it's a cat right and and on the left hand side it's a cat and when i translate it it's still a cat it doesn't the properties of the cat don't change just because it's moved somewhere else in the image so this is something we call invariance essentially that your neural network is going to be invariant to certain transformations based on how you've designed it for like the output of the models give the invariant to certain transformations in the input of the model. And then we have, in the other case, equivariance, where your transformations that are the transformation on the input into the model is going to have a corresponding transformation on the output of the model. So here we translate the cat in a similar way, but now we're outlining this cat and essentially coloring it black, we're segmenting this cat. The segmentation is going to follow the cat when we translate it in the input, and we're going to have that similar translation in the output of the model when we're segmenting it. Now so this was kind of an example to kind of briefly introduce you to this but this talk is about molecules right so what does that look like like what does invariance and equivariance look like for molecules so let's say i have this molecule i believe this is caffeine maybe but um and let's say that we have a property for this molecule uh this is like the aqueous solubility of that of this molecule now if i were to rotate this molecule right does the property change is the were to rotate this molecule, right, does the property change? Does the global property of this molecule change? No, right? Like the solubility is still the same despite rotating the entire system, the entire molecular system, the molecule, right? So no matter how I rotate it or even in the case of when we translate it, this property is going to stay the same. we translate it, this property is going to say the same. Now, in the case of equivariance, let's say you're doing molecular dynamics, and you're trying to see where all the new positions of the molecules are going to be. So you're going to have these vectors that are going to push atoms in certain ways. Well, now, if I rotate the entire molecular system, right, all these vectors should rotate with it, like with the, you know, with respect to the rotation we applied onto the system. So that's kind of what it means to be like equivariant in the sense for molecules. So this is how it plays out when we rotate it all the way through back to its original position. All the vectors are just going to be rotated. And then we eventually end up to our original position when we rotate by 360 degrees. So I hope this kind of gave a clear understanding as to maybe, actually, let me say this. So why do we need this, right? So think about it in the case of property prediction, when we rotate a molecule, if we don't have this invariance kind of baked into the model, well, it's then what we're going to have to do is we're going to have to augment the data by rotating each molecule a set number of times. And then essentially we're going to do this for every molecule. So let's say you have 10,000 molecules and now you need like maybe 20 augmentations per molecule, like a little bit of rotation each. This expands your data set now to 200,000 molecules. Well, you kind of increase now your compute time because now you have to train for much longer. compute time because now you have to train for much longer and you've increased um uh the model has to try to learn this uh property of being invariant to rotations which kind of now puts more um more strain on the model to learn other things as well and um other things as well. And essentially, that had we baked this symmetry in before, we can now train out a 10k molecule, and it would still be able to generalize for any rotation of these molecules. So essentially, baking in these symmetry, preserving these symmetries in the molecule and baking them into the model um makes the training more efficient and will help the model generalize better because what if i rotate the molecule by rotations never seen before in the case where symmetry is not baked in well the model might uh mispredict uh the property of that molecule so just want to emphasize the importance of these things. And it's been pretty cool. Like there was this recent work, actually some folks from Mila and then I believe Cambridge worked on this essentially summary, a hitchhiker's guide to geometric GNMs for 3D atomic systems, where essentially they cover different molecular systems, like small molecules, biomolecules, materials. They have this timeline of all these geometric GNNs and then other applications like property prediction, dynamic simulation, general modeling, and structure prediction. And they've categorized these geometric GNNs into these four categories. It follows like invariant GNNs on the top in the green, Cartesian equivariant GNNs, spherical equivariant GNNs, and unconstrained GNNs. And so what's the difference between all these things, right? So invariant GNNs, you have these, what they do is they compute these properties of the molecule that would be invariant to these rotations and translations. So think of, for example, if I have two atoms, if I have two atoms that are a certain distance away from each other. So like this is maybe their bond length, right? And if I were to translate this molecule across by a certain translation vector, the bond lengths between these two atoms are going to stay the same. They're not going to, you know, all of a sudden grow wider or smaller because we've applied the same transformation on everything in the system. And you can say that you can easily extend this to bond angles and torsion angles, which is, and so all these properties, these invariant genomes take advantage of and incorporate them into their message message passing step which uh which uh essentially um uh preserves the the uh which bakes in the symmetry into the model therefore it becomes like rotationally invariant for the case of property prediction or rotationally equivariant in the case of the dynamic systems dynamical simulations so um and i think I've forgotten to mention this, but I've mentioned this topic of message passing, right? And you could think of in graphical networks where essentially if I have two nodes and some edge that connects them, they're gonna be communicating or sending messages to each other, passing message, passing information about their node state to the neighbors, right? And so it's going to involve both this, like, receiving information and sending information. So just think of message passing. Sorry, I should have clarified that earlier, but I don't know. Because we'll be talking about it a little bit as I dissect each of these different categories. So something to keep in mind. different categories. So something to keep in mind. So in varying GNNs, remember we've kind of pre-computed these bond lengths, bond angles, torsion angles. These are kind of like the differences between these models through time. But Cartesian arc-olivarian GNNs, essentially what they do is instead of computing these properties, what they do is they constrain the message passing in a way where it will preserve the symmetry. So instead of, so it's just kind of like they've designed the message passing in a certain way to preserve it. So it's kind of like a constrained message passing. But this helps because it avoids the need to calculate or pre-compute these like bond lengths, bond angles, and torsion angles, which could get very, very expensive as the molecules grow. Because you could think of like the minimum bond lengths you have to compute as n minus one atoms, if n is the number of atoms, and bond angles would be n minus two and torsion angles would be n minus three. And you can imagine if you have a protein of let's say, if you have anything like a protein, in the case, like if you have a thousand atoms in the system, calculating all these will be very expensive. So let's avoid that and instead just design the message passing in a way that we don't have to compute this, but we'll be able to preserve the symmetry. So that was pretty cool. We don't have to compute this, but we'll be able to preserve the symmetry. So that was pretty cool. And then what we have on top of that is a spherical equivariant genin. So before we were in Cartesian space, and now we're going to spherical space, which is essentially what they do is they work with spherical tensors that are irreducible representations of the rotation group SO3. And we're not going to really dive into deeply on it, but essentially is what they do is they, it allows for really cool things that they could do with essentially that, but they preserve the symmetry by essentially working with this rotation group with spherical tensors. And lastly, we have unconstrained GNNs, which has been pretty recent. It's been pretty interesting, actually. So in the above three methods, they've all kind of in a way tried to either work in a space like for example spherical genomes they work in a space that has rotation submit like uh equivariance and invariance uh or like a rotation symmetry and then cartesians have done this where in the message passing invariant genomes have done this in input features but unconstrained gs and all these prior ones, they are sort of like constraining the model. These are hard constraints on the models, like baking and that sort of thing. What unconstrained GNNs do is they kind of softly constrain it, where instead they go through these things like they do data augmentations on the input data, or they incorporate loss functions that would have the model learn to be rotationally equivariant or invariant and stuff of that sort. And what this does is now you're not constrained to a certain type of model architecture. Now you kind of have freedom to expand and it also allows for freedom in certain optimizations in the model. So just to take note is that the unconstrained genomes kind of enforce the symmetries in a soft way, while everything else does it in a hard way. So I hope you kind of understand like the importance of geometric deep learning and its kind of influence on the field. But now we're gonna move on to generative models and there's a whole slew generative models that we'll be focusing on ones that are kind of state of the art today and have been a very popular method over the last few years during diffusion models. And so we'll kind of start off with images and then I'll give a demonstration on molecules, but essentially, so let's say you have your data distribution X zero on the left and on the right, you have your X of T and in this case, and in diffusion models in general, this is a Gaussian distribution. And so why the need for this, like, you know, these two distributions, right? So the data distribution, we don't know how to, you know, what's the generating function for this distribution. We don't know how to sample from this distribution. If data is complex, there's a lot of things that make it really, really hard to sample from this data. But Gaussian distributions is something that is really, really easy to sample from. And we can do it really fast, essentially. We know the function. It is easy to sample from. And that's amazing, right? know the function, it is easy to sample from and that's amazing, right? So how do we get the benefits of being able to easily sample from a sampleable distribution but get something that is useful like something in the data distribution? So they do this concept called a diffusion which essentially you're slowly transforming the, or smoothly transforming the data from the data distribution to the Gaussian distribution. So you're essentially transforming the probability density so it matches that of a Gaussian. And what you're going to do is you're going to learn essentially the score function, which is the gradient of the log likelihood with respect to your input. So in this case, it would be the pixels of the image. But essentially, it's going to be learning the change in probability density at varying noise scales. And so here, noise scales are just like, you know, capital T, like we have T different, adding a little bits of Gaussian noise to it. So we're gonna essentially try to learn that gradient of the log likelihood and then that gradient of the log likelihood, and then try to reverse it. And what was really cool is that the diffusion, the reverse of a diffusion is just diffusion in reverse time. So this is kind of what they exploit here, where they will transform now sample from the Gaussian distribution, and slowly and iteratively remove noise or denoise the image so you get samples from your data distribution. So and they do this like the whole sampling scheme follows like follows like longivine dynamics which if any of you are familiar with molecular dynamics is a very um key thing in molecular dynamics so it's really interesting to see this like uh these diffusion models kind of uh take inspiration from physical um like how things work actually physically and so i think this is part of the reason why for molecules has been working pretty well. But and we'll see here, like this is an example for the molecules. So you can start on the left. This is like the Gaussian distribution on 3D coordinates. And then we iteratively, like slowly, we'll denoise the molecule till we get something that would be a sample from a distribution, from our learned distribution. we're in images, we do this with like the RGB values on the pixels. For molecules we would do this, like if you're doing conformal generation you just do the diffusion on the positions of the atom coordinates. And if you're trying to generate new molecules, you do the positions and you'll do like the atom features and um if you haven't already i recommend watching last month's talk uh with paul where he actually goes over the diffusion stuff for proteins by hand um and it's pretty informative and interesting. And what, you know, you can easily extend the things that he does in there and what he displays on there to molecules. So, you know, this is in the context of proteins, but it's easily transferable in the context of molecules. So I recommend watching that if you haven't already. And so these generative models have been very, very useful in things like So I recommend watching that if you haven't already. And so these generative models have been very, very useful in things like de novo generation, so generating novel compounds. And you probably don't want to just generate any compounds. So a lot of times that are like property guided generation. So using conditioning the generation process to generate molecules with certain ADME profiles. And so you can meet criterions, you can produce analogs of compounds to help you generate ideas. Or if you're given a protein pocket that you're trying to target things like I don't have a reference in here, but like a structure based drug design, this is work from Ijvom, I believe. I might be wrong, but essentially where they tried to generate a ligand within the protein pockets of a protein. So all fascinating works that involve diffusion. And I think there's a lot more to come. So keep an eye out for that. And then there are some other areas. I'm not gonna dive into them too deeply cause this is a little bit out of my expertise or I'm not even an expert, but just out of my domain of what I know. And so there are things like reaction synthesis planning. So like how to synthesize desired molecules. This is very important are things like reaction synthesis planning, so like how to synthesize desired molecules. This is very important for things like compounds. You can't just like make them. You have to design and follow like constructed from materials that you have and perform the right reactions in order to get your end result. So this whole reaction synthesis planning is very interesting. I think folks at Connor Cooley's lab in MIT have been doing that. Active learning is something like essentially it's this iterative cycle where you're trying to find the next best thing to generate data using labs that will fill in gaps in the model. So I'd say your model is predicting really badly on this chemical, this space of chemical, like molecules. Well, then let's just try to generate more data to fill in the gaps and then train the model to have it perform better. I think there's a recent trend in growing interest in machine learning and multiomics, and then learning on knowledge graphs, which is kind of leveraging all the data and relations accumulated to derive even new relations. And then finally, we're going to go over the open problems and challenges. Essentially, that's a few, I guess, these are mostly centered around generative modeling, but there's other stuff as well. But for generative models, for example, I think we need to rethink a lot of things about how we evaluate these molecules. For example, there's this metric on there called validity but it doesn't really measure if these are valid molecules on these from these generative models. Essentially a lot of times these generative models will produce molecules but these molecules are not synthesizable they're just like really weird or really reactionary. So kind of like these methods of measuring validity is not so accurate. And I think work and being able to better measure this validity and account for it and account for it in generation process. But other challenges are like data, right? We have unfortunately like image domain or natural language, where there's like, you know, lots and lots and lots of data out there for those domains, but for molecules, it's not so much, right? In the public domain, we have okay data sets. And even in an industry, things are things are you know they have a lot more data but um it's kind of scattered and broken at times but uh but i think um uh some collaboration between the two like a lot of pharma companies are benefiting from the research efforts in the public domain for drug discovery. So it'd be great to see more pharma companies or maybe forming like consortiums. There's like things like IQ Consortium where companies can come together and collaborate on something. But it'd be great to see them to, you know, being able to provide great quality data sets that the research community can use so that they may benefit from it, right? And so, because a lot of complaints from industry are like, oh, you know, these models don't transfer well into industry settings. Well, it's part of the reason is data. But even in industry, we don't have the volumes of data that we have for images in natural language then we have auto distribution generalization which is essentially how well can my model generalize uh when i'm really far out from my just my training distribution and biological complexity like uh you know biology is very very complex and you know we've made a lot of progress biology is very, very complex and we've made a lot of progress, but these are all still in isolated settings and biology is like this whole system of interactive things. So being able to scale up to that would be a really, really interesting challenge for the future. So I'm looking forward like developments in this area. And then just kind of like a, I think I gave a brief intro in the beginning, but like my research and kind of focused on like modeling, these things are called stochastic interpolates. This is from like a term point from Michael S. Embargo from NYU. And it's essentially trying to model it's these things of like diffusion and flow matching and all that kind of stuff. But I really like this paradigm where we're trying to have these generative processes that are very much like actual real physical processes. And I'm combining that with geometric deep learning for all these different areas, like protein-magnet interaction, sampling equilibrium states of molecules, and like incorporating physics in an efficient way to kind of reduce wonky molecules, kind of like those molecules I was talking about with the generative models earlier, trying to produce more realistic ones that obey physical laws, but we don't have to pay the price of physical laws. So I'm hoping to push that field a little bit further. And so yeah, we're going to work on some things like trying to incorporate these physics a little bit more in my research. So yeah. yeah, incorporate these physics a little bit more in my research. So yeah. So thank you for listening. And if you have any questions, you can always reach out to me. I probably should have put my contact on these slides, but always happy to discuss about this area of research and help people kind of get into it. I believe Nucleate will be releasing a resource page around this as well, which I think is a great first start to kind of getting into this. And I would like to thank Nucleate again for inviting me to give this talk. It's been an interesting and really fun challenge to kind of construct these PowerPoints in a way to introduce you guys to the school. | Introduction to AI in Drug Discovery | 3,348 | Nucleate | 20240305 | AI in Drug Discovery: Introductory Overview by Majdi Hassan 💊🧬💻
🎙️ Presenter:
Majdi is a Master's student in Machine Learning for Drug Discovery at the Mila Institute in Montréal. He has worked at AbbVie for several years, developing state-of-the-art deep learning, including knowledge graphs, geometric learning and molecular simulations, all within the context of early-stage drug discovery.
https://www.linkedin.com/in/mhassa22/
🔔 See more content of our initiative here - https://linktr.ee/aixbiotech_nucleate/
📧 Contact: [email protected]
#drugdiscovery #ArtificialIntelligence #BioTechAdvancements #ComputationalBiology #BiotechEducation #ScienceAndAI #InnovativeDesign #BiologicalSystems #TechInBiology #FutureOfBiotech #AIInHealthcare #ProteinStructures #TechAndScience #BioinformaticsExplained #SyntheticBiology #ScienceExplained #BiotechRevolution #AIInScience #MolecularInnovation #TechBreakthroughs #biomedicalresearch | 2024-09-03T11:07:13.481852 |
https://www.youtube.com/watch?v=-hl0jpwWbV4 | You're listening to Gradient Dissent, a show about making machine learning work in the real world. And I'm your host, Lucas B. Wald. Today, I'm talking to Max and Sergey. Max Jaderberg is the chief AI officer at Isomorphic Labs, and Sergey Yeknen is the CTO. Isomorphic Labs is a drug discovery company spun out of DeepMind, and so we talk about the drug discovery process itself and how it's been impacted by recent advances in deep learning. I hope you enjoy this conversation. Well, why don't we start by, could you guys introduce yourselves and your company? Yeah, sounds good. My name is Sergey. I'm the CTO here at Isomorphic Labs. I have been with the company for two and a bit years. Actually, me and Max started like maybe a week or two apart. Max first, me second. But, you know, we feel like we've been there since the very beginning. And, you know, I've had a long history in tech, working in a number of different areas, building tech products for different companies and various sectors and financial technology and risk management and geospatial software in sort of e-commerce and then also spent quite a bit of time working in healthcare for the past 10 years plus really focusing on research and the cancer genomics landscape and then also took some of that research into the industry working at a company called Sophia Genetics where I was CTO and then you know now for the last two years plus in the drug discovery space. for the last two years plus in the drug discovery space. Awesome. And yeah, I'm Max Jaderberg, the Chief AI Officer at ISO. Yeah, started with Sergey. I think there was four or five people in the company when we started. Been building out all of the machine learning models, the research, and how we apply those models to drug discovery here. I was at DeepMind beforehand for seven, eight years, working on a lot of core deep learning, generative modeling, reinforcement learning, these big challenge domains that DeepMind loves, with a background in, before that, in computer vision and deep learning, really building out some of these first networks for computer vision with deep neural networks and had a company in that space as well. And so I guess actually neither of you are approaching this directly from a bio background then. Yeah, I mean, I would say we're learners somewhere along the journey. I've been, you know, in the sort of healthcare slash bio space for about 10 years now, but I always feel like this is such a deep field, and there's also so many different disciplines as well. You always feel like you kind of know next to nothing, but you get comfortable with being uncomfortable about things. Yeah, there's a lot of leaning into asking the stupid questions, probably repeatedly, a few months apart, and yeah, just trying to really take on board this new science, which is fascinating. And biology in particular is so deep that I don't think you can ever really feel like you've finished that. But we're quite lucky in the sense that we have some amazing colleagues and fantastic world expert chemists and biologists around us that really help us along the way. Well, so that's great. So you can empathize with me and probably most of the audience here and maybe kind of take us along for the journey. Maybe the place to start is what drug discovery is at a high level and how ML fits into that process today. Yeah, sounds good. Do you want to take it away? Yeah. I mean, you know, very high level. What you're trying to do with drug discovery is effectively modulate the pathway of a disease. So there's a disease and a whole patient population out there that we want to help by designing drugs and these drugs go into the body and modulate on some functional level the process of this disease in the body. So when we think about this at ISO, we're really thinking about designing small molecules, the sort of things that you can take as a pill. And these molecules will be absorbed into the blood, into the cells even, and attach themselves to these proteins, which are the functional building blocks of people. And by attaching themselves to that, they'll either disrupt or modulate the functional behavior of those proteins and so change the disease state of the person and so you know drug design is all about okay what exactly are those molecules that are a going to attach themselves to these specific proteins that are involved in disease or a whole suite of proteins and also be you know good drugs in the sense that you can actually take them as a pill. They'll get into the bloodstream. They'll get to the right part of the body, and they won't cause any toxic side effects, for example. And so what's the typical process of doing drug discovery? What did it look like 20 years ago? Maybe what does it look like today? Sorry, I think it's a great question. I think one thing that, you know, Max picked up on there, that's really important is something that our CSO Miles often tells us that this is a drug design process, less so than a drug discovery process in the sense that we don't find these drugs. We want to be able to design them specifically to solve a problem. And this is actually a bit emblem you know deep understanding of chemistry by human experts a lot of intuition you know some understanding of the disease but it's a very much trial and error process in the sense that you might form a hypothesis based in your experience that well this type of molecule might fit really well into the type of pocket in a protein where I wanted to go. And then you would go and you would make those molecules, you know, you'd synthesize them in a lab and then you would actually test, you know, does it go there? Does it bind? Does it have some functional impact on, you know, the disease of interest for you? And so when you measure it, oftentimes you would discover, no, actually it doesn't. And so, you know, drug design is actually a very frustrating discipline in that there's a high degree of failure. Even currently, when we think about how this process is performed, it takes 13 plus years on average for one drug to reach the market. It costs $3 billion plus per drug on average. And the process is really rife with kind of toil and failure. And one of our hopes is, of course, to be able to build a technology that will be able to reason over the space in a much more rational manner, such that it becomes less about this, you know, trial and error and much more about rational design. And this is one of the things that excites us the most about what we're doing. And so I guess what my impression of ML applied to drug discovery is it kind of had some sort of early hype cycle, maybe five or 10 years ago, where people got excited and then a little bit of, you know, kind of disappointment. And it sort of seems to be re-emerging with a lot of enthusiasm. Do you think my outsider perspective is accurate? And I guess, do you think that there's some technology that's driving that feeling? Yeah, I think there's something to what you're saying. to what you're saying. There was a first wave of companies, I think, that were really taking this approach of using machine learning to aid product discovery. And I think there's sort of two fundamental strains of how you can think about machine learning in this space. One, and it goes towards this notion of are you building local models or global models that generalize? And I think in the earlier days of machine learning for drug discovery, a lot of focus was on building local models. And what I mean by local model, it's where you take a small amount of data on the order of thousands of data points that are very related to the exact problem that you want to solve today or tomorrow. And you train a small model like a small NLP, even random forest SVM, against that data. And then you apply that model around, you know, just in that local region, around the sort of data that you already know about. And so this can be really helpful if, for example, you know the specific part of molecular space that you're designing around. You've got some initial wet lab data, you know, real experimental data about what works, what doesn't. And you train a little model and you help use that to interpolate and extrapolate a little bit around this region of molecular space. And sorry, before I let you go further, what does that data look like? I'm trying to picture, like, well, what are the, you know, what are the rows and columns here? Yeah, you know, think about the problem of does this small molecule bind to a particular protein? Now, in the world of local data, those rows and columns would literally just be particular protein. Now, in the world of local data, those rows and columns would literally just be the chemical formula of that molecule and the activity of that molecule. In the most critical, that could be a one or a zero. And how is that data collected? Like, they put the molecule on a test tube and shake it or something? Yeah, yeah. There's like a few forms of shaking and the whole protocols for how you shake and for how long and you know it depends on the the protein that you're measuring against each each protein is might have a different set of subtleties in terms of how you set it up or how you might measure activity of this molecule against that protein um but ultimately you know we think about this for example in a in a test tube test, an in vitro test, so to speak. Got it. Okay, so that's the past. Sorry, there's like millions of potentially different proteins, and there's many, many more different molecules. if you were building this model for a program, you would literally have the one protein that you're interested in, and then some small number of potentially related molecules. And so the coverage of the space was quite small in that name, right? Yeah. Got it. So in that world, when you train this model on this small amount of data, it's very specific to just this protein that you're targeting, very specific to even the bit of molecular space that you have data or molecules around, which means it's limited in its utility. It's limited to that particular drug design program, and it might be useful, and in many cases it is, but at the end of the day, you can't really walk away with that model and reuse it on the next drug design program because it's too specific. So you would have to do that process again on the next program. And then I think what we've seen since maybe that first wave in 2015, 2016 are actually fundamentally new models. And I think really emblematic of this is AlphaFold and AlphaFold2. These are sort of models which I would call global models in the sense that they train on as wide a variety of data as possible in molecular space. And as a result and of course as a result of amazing deep learning and research, these models after training they actually generalize outside of the distribution that they've been trained on to completely novel spaces in terms of protein sequences, in terms of the molecules that they're able to predict accurately. So it's that type of model, these global models, that we're very much focused on building at ISO because we believe that actually training on the most data gives us the best support for any drug design program that might come our way. Of course, we can specialize these models further if there is specific data, but really it gives us the best starting point and we're able to use this platform again and again on different drug design programs. And so just to make sure I'm following what you're saying, the input here is like a molecule and the output is some behavior. Is that what you're saying? Yeah. So an example model would be the input being a protein and a molecule, a small molecule that could be a drug. And the output would be, okay, do these things interact? I see. And how do you feed a molecule or a protein into a model? Do you actually treat it like a string of chemical structure? Do you put in the name of the protein? How does that work? Yeah, that's the cool thing about these general models is you can actually just specify these inputs as strings, strings of amino acids, or what's called SMILE strings for molecules, for example. There's a lot of knowledge about the actual structure of these objects. So although it's just a string, in the neural network or the way that you process that string, you can start embedding all of the known structure about these objects, the bonds that are there, and even how these might look in 3D to be inputted into the neural network and to be featurized by the neural network. And I mean, how do you feed that in? Do you actually somehow preserve the graph structure or is this literally just like a longer and longer string sort of specifying all the aspects of how the molecule looks? It really depends on the application on the model. Like a lot of this is very empirical. So, you know, sometimes raw string works. Sometimes you want to embed the graph structure. Sometimes it's point clouds in 3D. Yeah, it really depends on the application here. And, you know, a lot of the work here, like, you know, the rest of deep learning is very empirically driven. I guess, like, one thing that is worth also mentioning, Lucas, is that drug design isn't really one machine learning problem. It's actually a whole wide variety of different problems that call for specific representations of these systems. And when we think about even the moment when AlphaFold2, for example, was announced, there's a lot of hype around like, well, maybe AlphaF 2 like has solved drug design and we're done basically and we're not done in fact it's opened up the door for really amazing breakthroughs and it's really fundamentally solved the problem of predicting protein structure but we feel there's probably on the order of ten or so you know alpha-fold like challenges that need to be solved in order to actually be able to resolve this, this really complex set of questions, scientific questions around how do we design a drug that is going to solve, you know, solve the ultimate problem and have a whole bunch of other desirable properties as, as it's administered. But I think what I heard you saying earlier is that, you know, one of the big changes versus in 2015 is that, you know, kind of solving one task in the space informs other tasks. So you have some sense of like, I'm sort of imagining the foundation models that we have in language. Is that a fair analogy? Yeah, I think it's a fair analogy. Yeah, I think it's a fair analogy. And this is the sort of thing that we're starting to observe and really work out as a community is what are those foundational modeling problems in chemistry and biology that really translates into lots and lots of downstream performance and lots and lots of different tasks. It's a really exciting time for this field because I think there are a few hints of this bubbling up with things like sequence modeling on protein, sequence modeling on DNA, on base pairs, on structure modeling with AlphaFold and everything that can be done with that model. If you just take AlphaFold 2, that's a single supervised learning task of predicting the 3D coordinates of proteins and protein interactions. But if you look at the depth of publications that use this model, that hack on this model, that extend this model further, you can see there's so much dark knowledge in this model that allows it to be used for many, many very useful downstream tasks that aren't directly structured prediction, but structured prediction clearly is a precursor to thinking about these other downstream problems. But doesn't it kind of require a consistent representation? I would imagine if the core model had, you know, say the molecular structure encoded in the way you feed in data, it'd probably be hard to feed in data as like a point cloud or something, or are these models sort of multimodal in the same way that like GPT-4 is? A lot of these models operate in sequence space. So whether they're protein sequence models or alpha fold itself, like the input is a sequence. But as we know from, you know, vision space, from NLP, there's lots of opportunity to even use a model that's been trained in a single modality and extend it into other modalities as well, even without keeping these trunks frozen. So, you know, I think there can be a lot of creativity around how we use these models and how we leverage them outside of the particular training domain. I think one thing worth mentioning as well here is that, you know, in biology, there's room for several foundation models, I feel, because there's actually multiple scales and resolutions at which you can look at these different phenomena. And so, you know, at sort of the highest resolution level, you can look at sort of subatomic quantum level effects, and then you can step up to sort of atomic molecular level interactions. But then one level up is modeling what is going on inside cells or how to, you know, how to model the behavior or the function or the fate of a particular cell. But then, of course, these get organized into tissues and these get organized into organs. And then there's whole humans and even how humans interact with their environment. And so I think we have to be quite specific about which level of resolution we're talking about and we can build models that reason over that space. And there's also, we feel, a lot of opportunity to actually then think about how we can integrate, how we can reason over multiple levels of resolution. But, you know, this is very much an open research question. I guess I feel like, you know, the NLP space, which is something I'm more familiar with, you know, people used to talk a lot about, you know, kind of higher order patterns, you know, like, you know, semantics and things, and even the data you would feed in would typically be already chunked into words and sentences. And then there's been sort of a general trend to be, you know, less opinionated in the data you feed in and, you know and maybe operate on a granular level, like just treating information as a set of tokens with less priors. Is there a similar thing happening as the data size increases in the realm of biology? Yeah, we're seeing a very similar trend where more and more of the input representation gets to the lower and lower granular level. I think two examples would be actually AlphaFold itself and moving to this AlphaFold latest model that we've been developing with Google DeepMind, we really think about tokenization of the particular elements and treating things in that sort of homogenous way allows more scalable architectures to be used. And for these neural networks to actually extract that high-level structure themselves. Same things with this recent model like Evo, which is doing sequence modeling on the base pair resolution. Again, putting minimal structure in, but larger amounts of data can hopefully allow a model to internally extract these high-level representations, which could then be quite generic and be used for many downstream tasks. So I think it's a similar trend here. Are there any differences in the way that you model this kind of data? Are you using transformers? Is it functionally just like the LLMs that come out, or are there any kind of key differences that you have to do? Yeah. The big difference, Lucas, is that the scale of data is many orders of magnitude lower than in something like NLP or computer vision. So although we want to move in this direction, there's still not enough data, in my opinion, to have completely flat sequence models and extract, I would say, enough about the world of proteins from just sequence data. You can get far, and there's many protein sequence models that have done very well. But I also think there's a lot of opportunity to actually inject priors, whether these are physical priors, biological priors, chemical priors, inject priors, whether these are physical priors, biological priors, chemical priors, structural priors, into the way we represent the data, the architectures that process that, and also on the loss side, the types of loss functions we use, or theory losses or other modeling tasks that are associated with this. So I think that's where the difference occurs today. But just because of the quantity of data, you don't quite have enough to just say, hey, we'll just stick a super deep flat transformer and let it run for a month or two. At the same time, I want to say that I think in the space, kind of the context length, the input size can be very, very large, actually, depending on, obviously, like how much of the system you want to be able to describe. But if we think about a problem where we want to be able to predict something about, you know, a clinical outcome or something like a disease prognosis for a person, you know, one might envision, you know, wanting to put in the whole genome, which could be, you know, hundreds of gigabytes of data, essentially just a genome, but you may want to be able to also add, you know, other supplementary information around some of the imaging they may have done or different interventions, you know, all of the basically like clinical history. And so when we think about how to model that, we need to be able to, you able to model quite large input sizes. And even though we've seen quite a lot of progress, for example, with the latest version of Gemini on really large context links, I think there's a lot to be still done there in terms of being able to support these kind of large scale inputs. And to your point, though, of course, we use Transformers, an absolute workhorse of our neural network stack today. And then the question comes into how do we transform these inputs to really be best processed by these generic and scalable neural network modules? Just in the same way as for VIT, you have the question of, okay, how do we transform images in a way that's congruent with transformers? And okay, you've got this patch-based mechanism. It's fairly straightforward for images. What's that equivalent for proteins, for structure, for genomes? These are all things waiting to be, we're along that path of discovering that, but I think it's really early days. Interesting. So you don't feel like it's resolved. I mean, I guess I'm imagining with strings, you could do a really similar tokenization strategy, but I guess with a graph structure or something, or point cloud, it's kind of less obvious. Yeah. Lots of these things could be natively thought of as graph structures. There's a lot of people working on graph neural networks for this sort of space. But even think about, okay, how would we then transform a graph into something that could be ingested by a transformer? And does any of this actually make sense when we really think about the biology or think about this thing as a molecule rather than just a generic graph? And there's a lot of subtleties. There are things that work, but I just think as a field, we're still quite early in this and there'll be lots of space to innovate. Even when you think about genomes, right, you could think ostensibly, OK, that's a linear sequence. You can follow like a very simple tokenization method. And indeed, actually, many of the current methods do that. But that may sort of ignore the fact that there's massive amounts of structure that are not linear in the genome. fact that there's massive amounts of structure that are not linear in the genome. There is secondary tertiary structure in actually how those molecules are packed inside a cell, what could be actually the units that divide interpretable units of information in the genome. Oftentimes, even different pieces can swing together for a particular event. And so I think when we think about data representation, it's important to also inform our tokenization strategy by actually some understanding of the underlying biology and sort of, you know, how these molecules actually behave in the real world. Although it's kind of funny as I listen to you, you sound like, you know, NLP researchers 10 years ago. I kind of, you know, I. I feel like the tokenization strategies, from my perspective, get simpler and simpler and more and more painfully losing information and priors that seem really important. I wonder how this will play out. I'd love for us to be in that regime. I'm curious. I remember the Nature paper that came out with ImageNet a long time regime. I'm curious. You know, I remember like the nature paper that came out with ImageNet a long time ago. And I think one of the things that really was just so amazing about it, was taking, you know, CNNs trained on ImageNet and then applying them to detecting, you know, melanoma. And it kind of makes me wonder, like, is there ever a world where you would take, you know, models trained in entirely different, you know, domains where there's tons of data? Would they have any bearing on the domain that you're in? I feel like the, you know, the multitask applications just seem to be expanding beyond, you know, kind of anyone's wildest dreams. applications just seem to be expanding beyond anyone's wildest dreams. Could you imagine a world where GPT or Gemini becomes relevant for what you do? Yeah, I mean, it's really interesting. There are these papers, I'm thinking about an Igor Mordech paper, for example, where you take a pre-trained transformer large language model, and then you can just project arbitrary new problems into token space and just bootstrap off the internal reasoning that happens in a large language model for a completely different modality. I don't think we've ever tried that, you know, going from language into natural language into protein language. But I'm really curious to see if this sort of thing could emerge. Or similarly, if you start training a large language model on DNA, does any of that functionality, for next token prediction, does any of that functionality translate into something when you're thinking about, I don't know, protein structure? Yeah, I mean, I don't know, protein structure. Yeah, I mean, I think, you know, it's possible and we can get some transfer there. But, you know, in my mind, there's also, for example, a difference between solving a problem enough where you can write an interesting paper about it. And I think there's potentially lots of space for that. Or like solving a problem where it's solved in the real world and you know this there's a big difference for me there and you know especially at ISO our biggest focus is to solve problems in the real world and my experience tells me that there isn't necessarily like an easy lift and shift like that you need to spend a lot of time actually with the data and with the domain to still be able to get that real thorough solution. I'll give you as an example, my experience at Sophia Genetics previously where we were doing segmentation of tumors on the basis of MRI and CAT scans. And of course, we tried a whole bunch of fairly sort of established vanilla methods from vision. But, you know, what we learned actually is tumors are not like a lot of the other objects that you might normally train your model on. They're not, you know, they're not cats and dogs and sort of other images on the internet. And so, you know, the morphology of a tumor, the space it takes, it's, you know, it can have a lot of different variability. And it can be, for example, very globular, but it can also be very diffuse as it develops in the organ. And so, you know, we found that we really need to build quite specific models in order to be able to reach that performance where I'd be willing to start using that model to actually, you know, guide clinical decision making, for example, right? And so this is where I'm kind of going with that difference between what's enough to write a paper about versus what's enough to put into a medical device, for example. And some of this really interestingly comes down to that use in, you know, training these models for actual use in novel science. And it's that novel science bit which is quite interesting because with machine learning, what is machine learning? Well, you take your training distribution and you try and fit a statistical model to it. But when you're applying this model to novel science, by definition, you're trying to use this maybe in the very, very tails of that training distribution, maybe even completely outside of that. And that's quite unusual often when we think about the application of machine learning compared to, for example, self-driving cars. And so there's a lot of subtleties in how do you transfer or how do you set up your training distribution, how do you set up your model training? How do you even apply that model downstream to actually do novel science? Do you think that there is overlap with what you do in research in material science? I feel like we've had a lot of folks come on here talking about kind of doing chemistry. Like, is that relevant to you? Are there data sets that you can draw on from that? Or is biology just kind of a completely different world? I think there's a lot of interesting similarities, especially to my previous point on training these networks to then extrapolate and discover new things. Somewhat annoyingly today, it feels like that inorganic chemistry is kind of a different world to the organic chemistry that we're working with. The types of problems, the types of data are different. Saying that, fundamentally, physics is the same for these systems, right? So at some level, there has to be some notion of overlap or something we could transfer. But I haven't seen much of that intersection between those two fields like material science and, for example, organic chemistry. I guess switching gears a little bit, how do you think about open source in your world? I feel like there's sort of this big debate around open source with these language models I feel like there's this big debate around open source with these language models and these multimodal models. And I think there is starting to be a little bit of open source models coming out. How do you think about that at Isomorphic? Yeah, I guess the open source community, I think, is a great benefit, and we've all benefited from that, and I think many also participate in it. So I think it's a great working ecosystem. I think it's important to basically balance the ability to contribute to that ecosystem as well with the ability to develop specific, unique capabilities that actually sort of sustain a business. And so I think, you know, that's what we're doing here at Isomorphic Clamps and so on. Totally. What about data? I mean, I would think that, you know, here the data sets are probably much more proprietary than folks would be used to. Where do you think that goes? than folks would be used to. Where do you think that goes? Well, I think there's actually quite a lot to be said for public data sets. As we've seen, you know, tools like AlphaFold have been able to do amazing by working on, you know, being trained on the protein data bank. And, you know, I hope that statement doesn't trivialize the protein data bank because it's actually an amazing resource, the result of like, you know, thousands of years collectively of research. So, you know, I think there's still a lot to be said. And certainly, you know, I would imagine any organization that goes into this space, you know, seriously would want to look at these public commercial data sets at the first instance. But there's, of course, also a lot of unique data sets. And, you know, one question to be asked is,, how useful might that data be for training the type of models that, you know, me and Max have been talking about. If you're thinking about building a general model that works across all of chemical space, you know, you can see potentially limited usefulness in data points that are very near to each other in that space, actually. And so when we think about data, you know, we think very carefully about data set diversity, being able to cover as much as possible. Of course, some of these spaces, you know, we think about the space of, for example, all of drug-like molecules is something like 10 to the power of 60 molecules large. So sort of any notion of coverage of that space. I think, you know, we'll be a long time synthesizing data for that, but it's still quite important when we think about data set design. And so when one thinks about data, we basically need to go both to data sets that already exist, but also be ready to generate new data sets that will help us actually, you know, go into and get great performance on areas that have never been tried by humans before. You know, it's interesting, I guess, I don't want to take this in a weird direction, but I sort of have to flag, I mean, 10 to the 60 possible drugs actually seems quite small, right? i mean that that actually is much smaller than i was expecting i mean i feel like you could compress that you know until like 30 characters or less but i think maybe now i'm wondering do i actually even understand what the space of possible drugs is here i think it's pretty huge um I'm not sure I can think about rationalizing the 10 to the 60 space when you think about what each one of those objects is. It's this unique 3D arrangement of atoms, and it's not just about the 3D arrangement, but that arrangement in complex with the environment in the cell, so the protein, and that arrangement, you know, these are not just identical atoms, but these are different charges, you know, that precise, both local and global arrangement changes that whole electron density around that 3D structure. Right. So totally. So, but how do you get that down to 10 to the 60? I mean, I'm just picturing, So totally. But how do you get that down to 10 to the 60? I mean, I'm just picturing, you know, if I could string together, you know, 60 atoms of choice, like that would really that humans have developed of drug-likeness that basically constrains the set of molecules. And you can create infinite numbers of molecules, but there are certain constraints around size, for example. So one of the things that needs to happen for your drug to work, especially if it's a small molecule, is it needs to be able to go into the cell. And so the cell is only going to be able to take in molecules of a certain size. And then there's other issues like these molecules need to be soluble. These molecules need to be able to go through membranes. They need to have other behaviors. And so basically you can write a few heuristics that constrain that space to 10 to the power of 60. But, you know, in fact, you may need to search a much larger space in the end. But this is just something that helps us, you know, think a little bit around some boundaries around what the usable sort of useful molecule space might look like. Okay, well, Tim can decide if they want to leave that in. But that was actually kind of enlightening for me. Thank you. Do you, at Isomorphic, have an ambition to take the drugs all the way to commercial viability, or do you kind of stop short of that? How does it work? Yeah, it's a fascinating space, actually, and one that's been really interesting to learn more about as we have worked in this space for a couple of years. But I think there are different stages of that drug design pipeline. And a particular asset has an amount of value that is essentially corresponding to the amount of risk that is inherently left in that asset. So when you think about, you know, the stages, it's sort of like target discovery. And so when you've identified a really cool target, there's already some value in that. And then you go into hit ID, sort of lead optimization stages. At some point, you're going to go through preclinical studies, clinical trials, you know, one, two phases, one, two, three. At each point in time, you will have addressed and managed certain amount of risk. And so the value of your program goes up. And so there isn't actually one single answer as to where to stop. It really depends on your own appetite to take on the risk and on your own ability to actually do something with that. And so when we think about where should one stop, you know, we have created, of course, Isomorphic Labs to bring a unique angle of how machine learning can help transform drug design. And so that's one, you know, lens that we apply. form drug design. And so that's one lens that we apply. And then the other one is basically how much of sort of that process are we willing to go through in order to maximize the value of each molecule. And so as we rationalize our portfolio, we're basically working very actively to say, well, we think this needs to go over here. And that's a sweet spot for that particular asset. And for this one over here, we should be able to partner with somebody who we think will be much better able to actually execute, for example, the clinical trial stage of that. And so that's a great idea to work with them on that molecule. Interesting. Would you do your own clinical trials? Wouldn't that be quite a departure from your core competency? Or is there like ways that ML could play a role there? I see lots of opportunities for ML to play a role in that. And, you know, there are ones that we're thinking about as well. To me, you know, the most obvious is, you know, when you have a, there's kind of two problems. One is you want to identify a disease and then you want to find the right molecule for patients with that disease. But there's also sort of the other side of that problem, which is when you've designed a molecule, you want to find the best patients for that. And so part of that is solved within the clinical trial sort of phase of drug development. But even after that molecule is on the market, we have things like companion diagnostics. How can I find patients with the right, for example, genomic signature to be best fit to benefit from this medication? And so there's plenty of opportunities for us to apply machine learning and thinking about, for example, how does the genetics of someone influence, you know, the type of disease that they have, or how can we better classify diseases based on molecular signatures. And so, you know, we see all of these as great opportunities for us to be able to optimize things like clinical trials. If you think about the numbers, Lucas, basically 90% of molecules that enter clinical trials actually fail. So, you know, these are not great numbers. And these are, you know, some of the ones that are behind what we're seeing as these like really high costs, really long timeframes for bringing these molecules to market. And so they seem like really amazing opportunities to actually be able to go afterwards. And so I think there's, you know, there's a lot to be said for working in that area. And I guess, dumb question from the outside, but why do so many molecules fail? I mean, you described like a pretty simple sounding task, like, okay, like, does it like, you know, bind to a protein or not? And like, that's the plan. Presumably, you'd like a pretty simple sounding task. Like, okay, like, does it like, you know, bind to a protein or not? And like, that's the plan. Presumably, you'd know if it binds to the protein before you ever, you know, put it in any living organism, I would think. I mean, is it because the drugs turn out to have like unanticipated side effects or what happens along the way? Yeah, I mean, there's the side effects, but just back to this point, okay, of course, you test that this drug binds to the protein, but maybe that's just in isolation in a test tube. Now, when it's in a cell, does it actually get into the cell and bind to the protein? Okay, even then, does that cause the functional effect that you hypothesized this would cause, that this binding event would cause on this whole signaling pathway, for example, that this protein might be part of? You might have just had this biology wrong. So even your hypothesis of the signaling pathway might be wrong. You might be targeting completely the wrong protein. So when you start when you start you can measure all this efficacy in cells even and when you go into looking at the effect on the disease you don't see any effect there. And that's before you even begin to look at all the the toxicity as you were talking about just how this this molecule breaks down, what effect that can have on the body. But also, you know, sometimes these molecules, more often than not, they don't just hit one protein target, they'll actually end up hitting a whole range of targets. And so what's the effect of that hitting of completely unintended targets? That can have some very serious, you know, consequences. So yeah, there's quite a lot that can go some very serious you know consequences so yeah there's quite there's quite a lot that can go wrong in this pipeline and you know as we talk about this you can see that there's lots and lots of opportunities to understand and model more and more about this microscopic world about disease biology how all of these things connect together. And as we understand more about the different pieces of this puzzle, that means that when we go to design a molecule and we select the target that we're designing against, we can do that in a way which we have higher and higher confidence in our hypotheses of this will actually produce the desired effect on the disease that we're trying to address. How do you decide what diseases to target? Like, is there like an experimental phase where you just kind of contemplate like anything and sort of, you know, see what looks promising? Or do you kind of pick like one disease where you feel like sure that there's a big market and there's sort of like a known kind of pathway and you just sort of like look to, you know, find something new to do there? Yeah, that's a great question, Lucas. In fact, you know, just like deciding how far to take your molecules, this is another quite complex problem. And, you know, I feel like we're actually very lucky to be able to build, you know, these general models at ISO that allow us to target many different diseases. This is one of the things that excites me the most about what we're building here in terms of technology. And so when we think about that space, actually to begin with, anything is game. And then you need to form some hypotheses indeed around what market opportunities exist there. You need to be able to think about the technology that we have, where does it actually perform best? And so we need to have a set of opinions about what is going to be more or less tractable. Every drug design program is a long mission that takes many years, it takes many millions of dollars. And so the commitment to go and do one is actually a substantial decision. And so you end up building this kind of model that builds in all of these factors around what is the disease burden, what is the patient need there, what is the market opportunity, what is the market opportunity what is the technology fit all of these different factors and then that allows you to go into particular disease areas and then you look deeper and you know you you need to find potentially unique important targets and you have to just basically decide where where are you going to enter you know another another aspect of this is, you know, who else is doing what in that space as well? You know, how far have people already advanced their understanding about a particular disease? And so if you go in there, are you going to be building, for example, a first-in-class medicine? Nobody has a medicine for this particular indication. Or are you going to be building a best-in-class medicine? Somebody has already a drug on the market, and are you going to be building a best-in-class medicine? Somebody has already a drug on the market and are you going to be able to build something better because the economics and all of it is quite different in these two cases. Is there a medicine out in the market today that I might use that was discovered using these machine learning techniques? I don't believe we have anything on the market at the moment that can be sort of straightforwardly linked to having been discovered using these types of techniques at the very least. I mean, I think people have been using, as Max has said, you know, various sort of types of machine learning for probably the past two decades. That said, there's a number of molecules and clinical trials now that would have been a number of molecules in clinical trials now that would have been developed sort of with at least what I would call, you know, the last generation of machine learning methods. I mean, do these new advances in sort of the early part of the funnel mean that there's now lots more candidate medicines that seem likely to work that people would want to try like i would think that these techniques would work much better at kind of the top of the funnel if you even think of it like a funnel i'm imagining a funnel that sort of goes from like you know kind of cheap candidates that seem to work in a test tube all the way to like gone through every um every trial is there sort of like a glut now at the top of the funnel of like now we got really good at that and we need to figure out like which ones to send through like are there going to be like kind of changes downstream coming from all this effort i focus on the top you know i i think i think this is part of the opportunity at least when we think about hit identification or is you know it can look a bit like a funnel, right? You start with more stuff and over time more things drop out because you realize the negative aspects against that particular design or that particular molecule. But of course, if you have techniques that allow you to discover or design more and more initial hits, and then as you go through those stages of further and further experimentation or further design even though some things might drop out you still have tons in your funnel that you can take forward and then by having a wider array of molecules that you're taking forwards by the end of your design process when you're thinking about okay what do we actually lead with and even thinking about, you would hope to have much higher quality molecules there and a lot more to choose from for then subsequent clinical studies. I see. So you think that your rate will go up, like more than 10% will get through the funnel because there's more options and you can actually model what's going to happen through the life of the drug? Yeah. I mean, I think we're going to be much better at being able to tell what's going to happen. And in fact, we already are. And so I very much expect that rate to go up. There's many different inputs into that. But I think it's worth saying, Lucas, like nothing is really easy in this space. It's not really like a low hanging fruit kind of space where, you know, there hadn't been like big technological breakthroughs and then sort of technology comes in and kind of swoops in and just solves it all. because it's really cutting edge science, you know, at the edge of sort of what is known to humanity now. And there's been a massive industry of, you know, the world's smartest, most impassioned people thinking about the space. And so, you know, breakthroughs, I think, are going to come by developing these fundamental capabilities. They're going to advance our knowledge. And then we're going to apply them in very smart ways to be able to solve the hardest problems so i would say it's less of a question of like picking up a whole wide of wide range of like simple problems and just going to town on them it's actually unlocking things that nobody's been able to do before you know we have this disease we have this target we have no idea like where to bind to the target and how to actually make it work. And then by being able to model that space rationally and, you know, rather quickly in silico, we should be able to make inroads into some of these like really fundamentally difficult diseases. Well, why do you say that there's not a lot of low-hanging fruit? I would imagine being able to model this task in Silico versus actually trying it in a lab would just be a massive increase in efficiency. Is it because the modeling isn't reliable? No, it's not that. I mean, I think it is a massive increase in efficiency, but it's a very complex, multifaceted problem, right? And so I think while we're going to be able to massively improve the speed, massively able to improve the accuracy with which we're making predictions, and so I expect things to contract, but actually the whole problem of designing a drug from identifying a target to sort of having something that's been proven to be efficacious and, you know, non-toxic and have all the other properties, I think is going to be a series of very, very hard challenges. And so, you know, I think we need to set our expectations accordingly as well in that sense. How far along are you in that journey? Like, can you talk about like how many steps have happened with your furthest along drugs? Yeah, well, look, the way to think about Isomorphic Labs is we're only two and a little bit of years in existence. you know two and a little bit of years in existence and so you know of course our first port of focus has been to develop our technology platform and so this is something that we have been doing since the very start of course building on top of alpha fold that has been sort of a key enabling technology in the space that gives one a foothold into this you know structure based and silico drug design space and so we have developed a number of these methods and then over the course of the past year we have started doing actual drug design programs using the technology that we have developed and so you know a year in drug discovery is a fairly short period of time and so we have a number of these programs that are both our own set of our own internal targets, as well as the ones that we have partnered for with, you know, the two partnership announcements that we have made with Eli Lilly and Novartis. And so I would say it's quite early days for that portfolio, but we have been making, you know, pretty steady progress through that. And already today, we're seeing how all of that modeling work that we've been doing over the last two years is actually changing the way that the chemists are approaching the day-to-day drug design. And that's super exciting and really speaks to this longer-term inflection point in how chemists approach drug design long term. What is the role that chemists play? I would think in language modeling today, I don't think there's a lot of linguists involved at this point. Yeah, how does that work? It's interesting you say that there's not a lot of linguists involved at this point, but then, you know, the fact of the matter is every single machine learning researcher who works on NLP is an expert at language, right? Just by being a human. It's the same thing with computer vision, that every single person is an expert actually at computer vision or human vision. And so it's very easy to take these internal priors and directly translate them into machine learning models and your workflows and even on the product side how you then take those models and change the way you do you know you write a blog or you write a script with a large language model we all have these intuitions internally. It's very different in chemistry space because me and Sergey, for example, we're not native chemists. So we wouldn't have that native intuition about how chemistry can relate or can be maximally exploited by some of these models. So I think there's two ways that chemists come into this picture. One is on helping us develop this platform, helping us build these models and really attacking this fundamental science in the right way with machine learning and deep learning. And the other place that chemists come in, of course, is on the actual drug design. Maybe there'll be a world, I'm sure there will be, where we can press play and out pops your drug. But before, at that point from a model, we're of course going to have human drug designers, chemists in the loop, working with these models, being very creative together in that process to design molecules together. Okay, we always end with two questions and I want to give some space for them because I'm really interested in your answers. One thing that I would love to hear about is kind of on that journey from, you know, more research paper oriented work to trying to make this like really, you know, productionized and really working for your use case. What have been the been the unexpected challenges? Well, I would say, from my perspective, and sort of like building on the theme that Max has just covered, is the data understanding is a really difficult problem, essentially, right? We cannot just eyeball, you know, a chemical reaction, or we cannot eyeball something that happens inside a cell at a molecular level and be like, oh, yeah, that makes sense to me. Like, that's a good data point. You know, we should keep that one. So I think it's, you know, it's a big perennial challenge in the space to really be able to have great experimental design, to be able to generate really high quality data sets, to be able to actually use and training these methods. And so, you know, basically making progress in the space is, you know, linked a lot to our ability to design these data sets and understanding. And I think it's kind of a challenging scientific pursuit, essentially, to be able to really make heads or tails of a lot of these phenomena. Yeah. And then on the other side of the sort of model development spectrum, I think how these models should be applied to actual drug design, to actual science, I think is always work in progress and ever evolving as these models and the characteristics change. Again, it comes back to this point of, we train these models on a generic distribution of data, and then we try and stretch and apply them to the very tails and the very frontiers of what this model could know. Taking those raw capabilities, a raw capability like AlphaFold, and then applying it to a very particular drug design challenge, a target that no one knows anything about, the very frontier of medicinal chemistry, poses lots of challenges. tier of medicinal chemistry poses lots of challenges. There's of course going to be lots and lots of dark knowledge in this model that we need to work out how to extract for this very particular use case. And I've seen again and again in this context, even in previous contexts at DeepMind that there needs to be a lot of creative work in how you extract that juice, how you extract that dark knowledge for these really particularly challenging applications. And that's really a magic spot as well. Really exciting, but very difficult as well. Can you say anything about how you've done that or how you've approached that? A lot of this would, I'd say, fall under the hacking sort of moniker. And I use that in the best sort of way. I think we see again and again, it doesn't matter what domain actually we're deep learning, that you put these models in front of people who are really passionate about applying them and really extracting juice out of them. And people will find amazing ways to mold and hack and just completely mash up or pipeline these systems together in a way that they were never trained for, no one ever thought about before. But actually you do this and you can find some way to interpret that statistical inference in a completely new way. And it really gives value to people. And we see the same thing day to day you know for chemists for example so you're talking about the chemists you're working with or the machine learning engineers primarily doing this they'll be side by side doing this together um so yeah the chemists might not be doing the native hacking but the machine learning engineer will do that and they'll do it together. I mean, actually, I do want to add that to me, this has been probably, you asked about the hardest things. One of the things that has been the most enjoyable is at being able to actually integrate these disciplines together. You know, we often hear about some horror stories where, you know, company X for, you know, AI for drug discovery goes off, raises money to do this and build technology. And the technology doesn't quite work yet. You know, you need to do drug discovery. And so you go off and you end up having this disconnect between essentially your drug discovery teams and your teams building the tech. And one of the things that we have built into kind of the DNA of the company is we're in this together to solve these problems together. And so this is, you know, reflected into how we've structured projects. You know, we have chemists deeply involved in all of our machine learning projects as subject matter experts, as folks that are, you know, team members. We sit together as teams sort of interspersed with each other. And, you know, similarly, when we think about doing the drug discovery programs, our machine learning engineers and researchers are deeply embedded in those programs to make sure that we're getting the best of the tech. And so to me, this has been like a really integral part of actually how to crack this for real. It's funny. I really try not to inject weights and biases into these conversations at all, but I kind of can't help myself because I always wonder if people are using our visualization stuff for bio examples. Like, you know, we have a molecule viewer that always, it demos really well because it just sort of seems like science. I think it probably demos better to like the non-bio customers because everyone fantasizes about being a scientist that can look at structure and see that. But is that actually like a useful way to look at data to literally like see the molecule or how do you actually look at the data? I mean, I guess it's so much harder than with texture images. What's the process? Yeah. And it'll depend on who wants to view the data. If you're a medicinal chemist, you will happily sit there and look at a page of hundreds of molecules. Literally like a picture of them or like the scientific like notation of them from my chemistry class the 2d picture graph notation of them roll through that and you know these amazing people you know i think about like someone used to sit next to eric like just looks the molecule be like because real off characteristics about that just intuitively um but then, of course, like, as we scale up the number of molecules that we could be, you know, assessing or think about generating, then this could go into thousands or hundreds of thousands. And you could think about, you know, very unique visualizations or landscapes of these things, as we're able to measure more and more properties, or predict more and more properties with our models, we can then think about how to surface those properties, that whole multidimensional characterization of each individual molecule and then project that in a space that a chemist can browse through and really start to intuit about. So there's lots of opportunities. Actually, this raises a great point as well, Lucas, even about the wider, like about the wider field, which is product, like how do we actually, we can develop really amazing tech, we can build really great models, but we want those models to solve real world problems. And, you know, product plays a really crucial role in that. And I feel like that is a muscle in sort of the wider tech space that we've been able to grow really well like how to obsess over what that problem is like who are the users what do we need to build to really move the needle and so we've like integrated that as well you know we have a product team that are working really closely with the machine learning side to really understand these capabilities and both learn and give feedback into that research roadmap. But then on the other side, also working very closely with chemists and biologists to understand those workflows. And so we don't just have a collection of models. You're not just running a bunch of collabs as you use this. You have sort of an interface, you have a fully formed product that you can log on to, and you can run your drug design project inside. And that puts all of those models into the context of what you're doing as a chemist, and, you know, helps with your workflow, and you need to be able to visualize all of this. And so, you And so I think it's a key actually aspect of how we think the translation of some of this technology into real world impact really, really works. And I feel like that's an important piece even for the wider industry as we have a lot of cool technology that is looking for that real product insight and product application. Totally. Well, if you have any suggestions of visualizations or queries you'd like to push into us, we would be overjoyed to get that form of feedback. We'll start featuring. Please. I would love to hear them. We could take it out of this conversation, but I'm dying to hear it. And my last question, to take things in a totally different direction, if you don't mind, but this has often been interesting is like, when you think about, you know, machine learning in general could be inside your field or outside of your field. Is there any kind of research topic you think is under explored or like a research result that really excited you that you think didn't get, you know, the attention it served? Or maybe another way of thinking about it is if you didn't have your current job, is there something interesting that you would want to look into? Well, on my side, maybe is, you know, I'm really passionate about healthcare. So I think my topics are, you know, quite often about healthcare so i think my topics are are are you know quite often about healthcare but i you know i see i see sort of neuroscience as like one of the biggest mysteries that is like left in our overall understanding of actually how biology works we you know we still have quite basic ideas about how brains work and how we form like these larger you know larger sort of concepts around our world models and our memories and so on and so to me you know if I was to think about a domain where you know we haven't really had that yet like that aha moment yet that would be the domain and I think there's lots to be said for machine learning in that space, but also for, you know, other forms of modeling, simulation, and so on. But, you know, I would look for that as a key, you know, key space to really make impact in. On my side, I think at the very core, you know, I love deep learning. I've sort of been at the core, always developing new neural network modules and ways that these pieces could be put together or how you add new functionality into neural networks. you know um ways of conditioning ways of modulating attention mechanisms and and playing with these things and putting everything together in new ways i think it's great fun i said because we get to you know look at all of this develop new stuff and leverage all of it for completely new data types so mashing up all these things in in ways that aren't defined in the original research. I think if it wasn't in ISO space, you know, I think back to some of the later research I was doing at DeepMind on open-ended learning, really thinking about, okay, how do we use then these fundamental building blocks of deep learning and create these scalable learning systems that can just learn even without human-labeled data or just bootstrapping perhaps off the initial bit from human-labeled data against environments, whether that's simulations or the real world, to learn more and more and more about the world, ultimately so that you can come along and give this agent any task down the line. So I love that space as well. Awesome. Yeah, me too. Any research paper you want to point us to there that we could put in the show notes? I'll plug the X-Land work, which was some of the work from us from DeepMind, which was baby steps along that way of, you know, how do you just create these environment universes and train agents and populations of agents to get into space? So at the end of the day, you start getting these agents which can exhibit, you know, start to exhibit general capabilities to zero shot to any new task. Awesome. Well, that topic seems more and more relevant these days. Yeah. How to supply that in the language space. Or maybe in bio, you know, what would happen. Yeah. I think there's lots of analogs to how we can do drug design here to be able to. Very cool. Well, thank you so much for your time. It was a lot of fun. It's been a pleasure. Yeah. Great to see you. Yeah. Good to see you. And yeah and great to work with you guys. I really mean it on future requests. I also think healthcare is the coolest ML applications and I personally would just love to give you guys better tools. It seems like the one where it's the hardest to actually look at your data and I do feel like actually looking at your data is like this the key to success broadly yeah i mean as we ramp up the usage of the of weights and biases i'm sure there'll be lots of ideas coming i really don't mind either like really specific tiny stuff like i feel like that's the stuff we never get but if anything is just like irritating you just don't be shy like you know tell you tell your team just send it to me yeah yeah yeah i mean it's a it's a key piece of our sort of technology infrastructure and i think you know people have loved working with it but there's always more things there's always more scaling and so i'm sure there'll be no shortage of requests we'll definitely take care of your word lucas. Okay, great. Really appreciate it. Have a great day. Cheers, man. Thanks so much for listening to this episode of Grading Descent. Please stay tuned for future episodes. | Accelerating drug discovery with AI: Insights from Isomorphic Labs | 4,223 | Weights & Biases | 20240425 | In this episode of Gradient Dissent, Isomorphic Labs Chief AI Officer Max Jaderberg, and Chief Technology Officer Sergei Yakneen join our host Lukas Biewald to discuss the advancements in biotech and drug discovery being unlocked with machine learning.
🎙 Listen on Apple Podcasts: http://wandb.me/apple-podcasts
🎙 Listen on Spotify: http://wandb.me/spotify
With backgrounds in advanced AI research at DeepMind, Max and Sergei offer their unique insights into the challenges and successes of applying AI in a complex field like biotechnology. They share their journey at Isomorphic Labs, a company dedicated to revolutionizing drug discovery with AI. In this episode, they discuss the transformative impact of deep learning on the drug development process and Isomorphic Labs' strategy to innovate from molecular design to clinical trials.
You’ll come away with valuable insights into the challenges of applying AI in biotech, the role of AI in streamlining the drug discovery pipeline, and peer into the future of AI-driven solutions in healthcare.
✅ Subscribe to Weights & Biases → https://bit.ly/45BCkYz
⏳Timestamps:
00:00 Episode introduction and guest overview
05:42 Max's transition from DeepMind to Isomorphic Labs
12:37 Sergei's tech background and move to healthcare
18:54 Early challenges at Isomorphic Labs
25:58 Integrating AI into drug discovery
32:16 Impact of machine learning on drug design
39:07 Introducing AI to drug discovery teams
47:29 AI's role in predicting drug effects
54:55 Future prospects of AI in biotech
01:01:00 AI ethics in healthcare
01:05:00 Broader AI applications in healthcare
01:09:00 Reflections on AI's impact on medicine & wrap-up
🎙 Get our podcasts on these platforms:
Apple Podcasts: http://wandb.me/apple-podcasts
Spotify: http://wandb.me/spotify
Google: http://wandb.me/gd_google
YouTube: http://wandb.me/youtube
Connect with Sergei Yakneen & Max Jaderberg:
https://www.linkedin.com/in/maxjaderberg/
https://www.linkedin.com/in/yakneensergei/
https://twitter.com/SergeiIakhnin
https://twitter.com/maxjaderberg
Follow Weights & Biases:
https://twitter.com/weights_biases
https://www.linkedin.com/company/wandb | 2024-09-03T11:25:58.420006 |
https://www.youtube.com/watch?v=DWINSvRxIbw | Hello and welcome everybody. Good morning, good evening and afternoon. Hi Jesus, how's it going? Hi Alex, happy new year. Happy new year. I'm very well, thank you. Happy to be here one more time. A bit later than usual, but lots of things going on. Yeah, so we had to shift the week one week and we had to change the date, the day from our regular Tuesday to a Monday because otherwise we would have to shift it even more and so we are sorry that this is not the usual time and well, it's usual time but not usual day of the week in what was going meta for the previous 11 episodes but so episode 12, the one year anniversary episode, I guess, is a little bit different. But we try to be back on maybe a more, our usual slot for the next month, if possible. But I hope that's okay and you tune in nevertheless. If you tune in, it would be nice to hear from you now that we've completed the 12-year episode. Any thoughts, any feedback, anything is always welcome. And maybe this 12th episode sparks you to give us some of your thoughts. Either if you watch this afterwards in the comments on the YouTube video, or if you watch it live, obviously always open to hear you in chat. So please don't hesitate to type in anything really you'd like to get out to us. So I'm having an eye on two eyes uh on the chat um that's that's cool i think we we had a we had a great great 12 episodes and i think you you posted recently i need to find this this this article you you recently posted on linkedin this was maybe i'll i can find this quickly and link it and i'll say a couple of words about it because you know I guess that's the first of all credit to Michael Hunger who's been insisting me to do a session on this one because yeah I mean most of us in this series are familiar with new semantics but that we have been working on a kind of alternative. Well, not really an alternative, but it's kind of a different implementation of the same principle. And we thought it would be interesting to look at it, to share it, first of all, but also to look at how it's been implemented as a way to explain another way to interact with Neo4j, and also to you know to welcome uh any contributions to to the code base so that's that's the idea and now we'll uh yeah give a bit more context on where that comes from and and what what is it for exactly yeah i found the link in the meantime, so I shared it. So perfect. Yeah. Good. So I mean, if I don't know if you need to give any additional updates or anything coming up, Alex, or maybe at the end, or if you want me to get started, I'm happy to share my screen and we can kick it off. I think we can. We can get started. I can mention the other upcoming things this week or next week when we're finished. So yeah. Perfect. So we can get going. So today's topic, while you get your screen ready, is importing RDF data into Neo4j Aura with Python and RDF lib. So sounds to me and as usual, I got this topic like, well, 10 minutes ago. But it sounds more like a more like a hands on. Well, all of these episodes have been very hands-on and we explored some technical background and some theoretical background rather and some explanation and then went into the technicalities and into practice and hands-on. But reporting RAF data sounds very, we don't explore, I mean, maybe you, I mean, I guess you can tell. To me it sounds like we have some data, we're gonna get it in a graph database. Exactly, exactly. And you know, can you see my screen first thing? Yes. Just make sure it looks so good. Perfect. So as usual I'm going to give a little bit of an introduction on slides and then like you say we'll jump into the code and we'll get our hands dirty as usual but uh yeah the idea is you know the title i mean maybe some of you who are in this space i mean work with rdf and python are familiar with this rdf lib package we'll work with it and and we'll show how we have extended to be able to interact with with aura and let me tell you why where this all comes from so i was saying that what we know what we're all familiar with, and I assume, you know, it's not crazy to assume after 11 episodes that most of you are familiar with NeoSemantics, you know, N2S. So this is a library that has been, you know, a Neo4j extension that has been around for years now. And that helps us precisely with this problem. I mean, it's importing RDF into Neo4j. And in the process of importing Neo4j, sorry, RDF into Neo4j, solving the problem of translating this triple-based representation in RDF into a property graph representation. So that's what we all know know and love of course don't we Alex so but uh it um I would say that it has uh a limitation and let me explain what I mean by that so Neo semantics is implemented as a uh an ifj extension is a set of store procedures so in other words it needs to be uh deployed on the server side so that we can then, and I've literally copied a couple of statements from the manual, where you can see that we can, I mean, basically we use Neo semantics by calling store procedures. And the syntax in Cypher is called by the name of the store procedure. And we do things like, you know, configuring the graph and telling it how to behave when it imports RDF. And then the actual import works in two ways. We can either pass some RDF payload, or we can pass a reference to an external document or RDF generating an endpoint. It can be a Sparkle endpoint, whatever. But the idea is that all these calls require this NeoSemantics module installed on the server side, which is what this diagram kind of indicates. So the client, the Cypher client, expects some elements to be available on the server side in the form of stored procedures. Now, this is an assumption that it's not always valid. And maybe some of you who have requested this kind of alternative are joining us today, because you know that there are deployments of Neo4j where maybe you don't have control over what kind of extensions are available. And one of them, one prime example of that is is aura right aura in its current version does not allow the the deployment of uh custom uh extensions right so you can do that in your self-hosted uh install but you cannot do it in aura but people still want to be able to input rdf data in aura so the question is what do we do so does that give the the right context? It does. Yeah, that's a question we got a lot, I guess, since the beginning of this episode or this series. And I guess in combination with there being Aura Free, an easy and a very accessible way of getting into Neo4j with a graph database as a service, the start is made super easy. But then the limitation, considering RDF data, is that it's not compatible at the moment or it doesn't used to be compatible. And therefore, people were getting ready, getting excited and realizing, oh, I can do this. Exactly. We've been doing it. I mean, in most of the series that we've, in the episodes that we've run here, we've used Sandbox. Sandbox is a completely different environment and that happens to be, you know, it's a temporary instance that you don't really, I mean, you just own for a number of days. It's for playing around a little bit. But it's very different from Aura, right? And Sandbox does come with these extensions installed, but that's not a general assumption. And that's the problem that we wanted to address today. So, yeah, just to complete the title that we came up with, the solution is based around RDF lib. And in particular, an extension to RDFlib and I'll double click on that in a minute. But it's a Python-based solution. So RDFlib, for those of you who are not familiar with it, is a Python package that deals with RDF in general. So we'll make it possible for you to parse RDF, to generate RDF. It's an RDF management library in Python. So one of the possibilities that RDF lib opens is the extension in a number of ways. And one of them is the type of backend storage. We'll get back to that in a minute because in the second half of this episode, I want to have just a quick look at the code and the implementation of how we build that. But first, I guess we're going to look at how to use it, which is the most immediate, I would say, need. But you can find it in the package index, as usual. So if you do a search by RDF lib, you will find it as the first match. It's still in an early version, but it already supports the ingestion of RDF in any of this validation formats. And it's as simple as import, I mean, doing a install of the package itself, RDF lib Neo4j, which has a dependency on RDF lib. So it will import RDF lib with it. And then you're off to the races. I mean, you just need to interact on RDF lib, so it will import RDF lib with it. And then you're off to the races. I mean, you just need to interact with RDF lib as you would if there was not Neo4j behind, but in any RDF lib type of example, and you just have to specify that you want your graph stored in Neo4j. And we'll see that in a minute in a bit more detail, but the idea is that it's entirely seamless if you are used to work with RDF. So you will be working with your library without necessarily knowing that behind the scenes, you know, the data is being stored in Neo4j. That's what this library takes care of. So that's the idea. That's how we've done it. And we want to see it in action. And for that, we built a notebook. So let me stop the presentation and go into my notebook. But before that, we said we're going to be using Aura. So I should have actually kicked it off earlier. So I'm going to start an empty instance just to show that we're working alive i'll copy the password and continue i'll save the password here on my desktop just to be able to use it but yeah i mean now you skipped it over but it's always there's now with with aura uh this option to download the the the file the the you know login file so you get the the url you get the password uh as well and you can download it there uh from from the screen you just saw before and with that you have it on your on your local machine and you you you don't have to worry about forgetting it uh or losing it yeah i'll i'll you know i'll that's that's the kind of the right way the way people should be should be doing it i'll i'll probably do it in a in a quick and dirty way just copy the password use it for now and at the end of course i'll destroy this instance exactly this is this is just for demo purposes not not the problem but if you are planning to use this uh more often than uh you know it's better better not to do to save it in some text file you you might lose again exactly exactly so well let's let's give it a few a few seconds because that's the the time it takes to to get started but um while it does let's let's have a look at them of this of the notebook let me make it a bit a bit bigger to hope it's readable you tell me alex if it's uh oops that's not you yeah am i doing it right yeah here you go yeah i think it's great yeah right so well like i was saying before all we have to do to begin is just do a pip install. So I'll kick this off. And because there's dependencies defined, it will bring RDF lib and Neo4j. Both drivers will come as a consequence of these installs. So no need to worry about anything more than RDF lib Neo4j. And once this package is installed installed all I have to do like I was saying before is say well I'm going to and maybe it's worth just for let me do a quick well not even a search I can even copy the this manual on RDF lib so this is the documentation on on RDF lib if you want want to, you can Google that, of course. But basically, I'm doing exactly the same. So I'm doing the installation, and I'm going to take you through some examples. So like I say, if you're familiar with RDF lib, you know, the way you work with RDF lib is you instantiate a graph, and then you parse data into it. So that's exactly the way in which we're going to be working, right? So the only difference, like I was saying before, is that when I instantiate my graph, I'm going to be saying, hey, I'm setting the store parameter to Neo4j cipher. What I'm saying by that is that I want the back-end of my store to be Neo4j and to use the specific implementation that I mentioned that we're going to be seeing later on. But it's the one that we built with RDF lib Neo4j and to use the specific implementation that I mentioned that we're going to be seeing later on. But it's the one that we built with RDF lib Neo4j. So if I do that, I'm creating a new graph, an empty graph, that's going to be backed by Neo4j. Now, I need to provide some configuration, and in particular, where is that graph located? And in this case i need uh the detail from aura which hopefully should be running by now exactly so we have aura here and we can uh we can copy the uh the connection uri again this is probably not the most elegant way to do it but uh for this simple example i'm going to just paste it and i'll i can say i will uh destroy the the instance hang on is this that's instance and i need to paste uh the password that i copied earlier so listen to alex not the right way to do it but for now let's do that you know quick and dirty here i'll paste it and I'm going to install, I mean, this is my configuration with basically the URI, the database I want to connect to, my user and my password. And I'm setting, I'm calling the open method, passing the configuration. And if everything looks good, basically it's testing that there's connectivity to that database and that we are ready to go. So we now have a graph, right? And the way we interact with the graph, I'm going to use you as an example, Alex. But I'm following, especially for those of you who are familiar with RDF lib already, I'm going to use very, very similar examples to the ones that you had in the documentation. So RDF lib deals with RDF data, right? So because it deals with RDF data, it expects interactions in the forms of triples, right? So that's what I'm doing in this next section. So I won't go into all those details because we've talked too much about that. But, you know, triples, all the elements in the triple subject, predicate, object are uniquely identified by URIs. URI has a namespace component plus a local name. What I'm doing here basically is I'm defining two namespaces in my vocabulary. I'm going to have individuals, Alex and me afterwards, and I'm going to have a vocabulary as a schema, an ontology, if you want, that adds structural elements to it. So I define an individual's namespace by giving it a base URI and then a schema namespace, a vocabulary namespace or ontology namespace. And once I have them, I can create resources, which is the term used to identify URI in RDF. For example, I call one Alex, which is an individual called Alex, identified by this string Alex Erdo. So it's basically a concatenation of the individual's namespace, followed by this name. And the same with me, JB, is another individual followed by this name, and the same with me. JV is another individual identified by Jesus Barraza. So once I have these individuals, I can generate triples out of them, and that's the fun part. So I'm going to say, okay, Alex is of type person. So Alex is, of course, I'm referring to this individual that has been created here, and RDF type is a standard rdf is a vocabulary type is kind of a in our property graph world is a way of giving a resource a type which would be the equivalent of a of a label in a property graph right we're tagging it we're saying that alex is a person right we could add more of them actually we could say hey alex is a person but he's also a neo4j employee and so on and so forth so as we know things can have multiple types or in the property graph we would say that nodes can have multiple labels that's perfectly fine let's add more triples so i'm i'm saying also that alex has a name which is a Erdl, which is basically the same as the URI, but I'm giving it a nice spaced human readable version of the name. And I don't know, just an additional property. I said Alex has a Twitter handle that I have taken from actually your real one. So these are one, two, three, four statements, four triples about you that I want to write to Neo4j. I mean, that I want to write to my RDF graph, which is what RDF lib does. But I expect the magic to happen behind the scenes. And these writes, which I am effectively doing when I say add a triple to my graph, G is the graph that I created up here. add a triple to my graph g is the graph that i created up here right so when i run that and i hope everything goes well here so uh and i go i haven't opened it yet let's go back to aura and uh go to my uh let's go to query let's open a browser on my aura instance i'm gonna find already some information in it i need to copy once more my password. That's why it was not a good idea. But hey, that's me being a bit... So if we come here, let's give it a minute to refresh. Hey, we have already some notes. And interestingly, one is a new employee and a person. So let's see what it is. Hey, it's Alex. Not surprising. So we have a property graph. I mean, we have the RDF data that we've just created, that we've just written to our RDF lib graph, being persisted on the Neo4j side. And we see the translation that we're familiar with. So we see how all literal properties are translated into attributes of the node. So we have the name, Alex, the figure handle. And we have an additional one, which is the URI, which is the unique identifier. And we have seen also how the types have been translated into labels, which is the way Neo4j, in the property graph, to set the types of individuals, right? So, well, so far, so good. Let's add a bit more of information because, you know, the interesting thing, of course, in a graph is when we create connections. So, I'm going to just quickly copy it and adapt it to this other individual that I've called JB and let's do the same with me so I'm going to have another person I'm also an info J employee I have a different name and I have a different Twitter handle and and finally why not let's add another connection. Let's, I'm very lazy and like the copy paste, but I'm going to say that, that Jesus is friend of Alex, right? So these, as you can see, I mean, it's easy to interpret that. This is going to create another node with a set of properties, pretty much like what happened with Alex, but now with Jesus. And ultimately what we're going to do is create a connection between the two, a triple that connects two objects, JV and Alex, using the predicate, the vocabulary element friend. So if I rerun this block, it will hopefully not recreate Alex because it's already there. And this will not happen because the translator is aware of the fact that subjects URIs are unique. So if I'm using the same URI and that already existed in the database, it's not going to recreate. It will only reuse it it so if we go this has successfully finished so if we go back to our instance we now have uh see two notes and a friend of relationship so if i click on that we see that we have the two nodes uh you know with very similar structure the uri the twitter handle and the name and a connection between the two. So great, right? So pretty cool. What's interesting and probably, you know, this is more visible to the RDF-Lit practitioners is that, you know, if you look at this code, that is, there's nothing RDF related here, sorry, nothing Neo4j related here. And that's why I was bringing the manual. So you have an example, and that's exactly what they provide in the manual. So they create an individual that's done in this case, we're saying there's a person, he has a name, blah, blah, blah, nickname, a mailbox, blah, blah, blah. So exactly the same way in which you would interact with RDF lib, I mean, that doesn't change at all, is what you do, because effectively you're working with RD outlet, except that, you know, transparently and by setting these initial configuration parameters, saying that the store is an Neo4j cipher, every change to the graph, every interaction to the graph will be ported and persisted in Neo4j. Two important things here. Well, first one, the interaction in this example has happened at the triple level. What does that mean? It means that every time I run, and we're going to see the code in a minute, but every time I run a graph add, every time I add a triple to the graph, behind the scenes, I'm starting a transaction to the graph, and I'm writing that triple. So that is okay if we're in a kind of transactional situation, we're writing individual triples one after the other. But it probably is not the best idea if we want to import a few thousand or a few hundred thousand triples. Because I don't want to create a transaction for every triple. I probably want to write a bunch of triples in a single transaction and then commit it at the end. So that's what we're going to see in the next example. So here, instead of me manually adding triples one by one and manually creating them, what I'm going to do is using another feature in RDF lib, which is the parse function, where I only need to set the location of the RDF data that I want to import in the serialization format, and it will go fetch it from there and import it. And this in particular is this one here. So this is a dataset that we've used in other episodes in the past, but basically it's a SCoS-based. SCoS is a vocabulary, an RDF-based vocabulary, with a large taxonomy about software systems that we extracted from Wikidata. You probably remember, I think we used it in the, I think it was in the semantic search episode in the two or three. So I persisted that as RDF, turtle serialization, and we have loads of elements in here, right? For example, I'm sure this Neo4j here. Also there's an instance representing Neo4j and probably it has a unique identifier. Remember how Wikidata used these numeric identifiers? We're saying that Neo4j is broader than, I mean, has a broader element in this concept, which probably will be something like, let's, oops, I clicked too many times. So it's a label property graph. So you see how it's a taxonomy, right? So it's a bit hard to follow because this is in theory machine readable data. But the idea is that this large RDF set contains a taxonomy of concepts, as we call them here in SCOS, but basically represents software systems or software technologies. So yeah, this is a large file that I can access through this URL. And what I wanted to show is what happens if in my notebook, I want to import a large data set like this one. So I don't want this interaction to happen at the triple level. So I have this extra element, and this is Neo4j, RDF Neo4j specific. I can say, hey, and this is a command or a method on the back-end store. I can tell the store, hey, back-end store, go into batched write mode. And like this, basically what I'm saying is open a transaction, right? All the writes that I'm going to send you now, and these will be thousands of triples coming from this file, keep executing them in the context of the same transaction. And only when I end the batched write, we will commit the transaction. That will also create not only a single transaction, but also it will chunk the triples in batches and it will make the exchange with the server way more efficient. So if I do that and we can run that and that might take a few seconds, I'll touch in the indexes in a minute, but this is running the input of this large, I believe it contains, well, you can see it here, large data set. I believe it contains, well, you can see it here, 15,000, maybe a few more triples, something like that, right? 15,000 triples. So still a tiny data set, but these, instead of running it one by one, it does it in a kind of batched mode. So again, when this is done and should be done anytime, we can go to our browser. We see that the data is already here. So we have now a concept I have not deleted, you and me, Alex, doesn't matter. That's all good. We now see that we have we have a concept related through the broader relationship which is the one that we were using in scos let me zoom in and we see things like well there's uh programming languages and we have uh i don't know i have never heard of drama but it seems to be a a programming language uh george i don't know i have never heard of drama but seems to be a a programming language uh george there's plenty of of super i don't even know how to name that but there's there's loads of of of uh this is a very rich uh taxonomy that comes from wikidata yeah this is only a snapshot of the data this is by no means exactly yeah it's uh um yeah uh yeah it's it's not necessarily a tree because it's a taxonomy with, it's effectively a graph. But we can do something like, we can explore it and say, okay, let's find a concept. Because we're labeling things as concepts in Scots. That is, we see that all the elements have a preferred label. So we can say where C preferred label equals, oh, we think we saw it was Neo4j. I don't know if I'm going to be using the right capitalization, but we can do that. Return the concept if I've done it right, good. So that's this node that we saw before. And if I expand it, we see that it links to, we saw before that it was a subclass of graph database or property label, property graph, but it's also free software and proprietary software, interesting. So it's again, this is, I mean, the data comes from Wikidata, so it doesn't really matter. The focus here is on the fact that we've managed to import this large taxonomy of RDF into Aura directly. So, I mean, just because, you know, I suppose one of the interesting things is that, you know, we have offered the possibility of interacting with the graph on the RDF lib side as triples, as RDF, but at the same time, we have this property graph interface, and we can do cool things like, let me do that. So if I take the concept, I want to find the whole taxonomy for Neo4j until we get to a root. And we know that we've got to a root because the root does not have any parent. So it won't have, I mean, that's the way of indicating that, so the idea is the same. So I'm still looking for a concept that's called Neo4j. I will navigate the broader relationship, basically follow the taxonomy, until I get to a root. And I know that it's a root because it does not have any more broader than. So we go to the top. And we can store that in a, we can call it subgraph. And return this subgraph. So we run that. Little detail, of course. I'm not wanting to go just one, but as many levels as possible. So that's what the asterisk does. Otherwise, it was just going one hop. So with this, there you go. So again, the browser doesn't do the hierarchical layout, but we see pretty much here that Neo4j, as we saw is an instance of graph database, which is a database management system, which is a software system, which is a some software and so on. So that's kind of the tree for Neo4j. And again, we can interact with that on Cypher and as a property graph on this side, as opposed to in the form of triples as we were doing before. Right. So now what's happened here? And if you remember, let's connect with the introduction of the problem. So there's no extensions installed on the Aura side, of course. If I do a call db, I mean, that's of course apoc, which comes with it, but if I do vbms procedures, that's not what I meant. So that was procedures. What we can do here. Oh, the syntax has changed. It's show procedures. Ah, I can't remember. Anyway, there must be a way of... without this, I think. There you go. Show procedures. So, we'll see that there's some APOC elements and that comes bundled with Aura, but there's no Neo semantics right here as you would expect when you have it on your self-hosted. And so how did that happen? I mean, how did we manage to import this RDF data? Well, let's get back briefly to the slides. So what was happening before in NeoSemantics, so in this type of requests, it was up to NeoSemantics to go fetch the data. And the translation took place within the NeoSemantics and within the server-side extension in Neo4j. within the the server side extension in neo4j what's happening here what's happening when we use uh rdf lib is that the translation takes place on the client side so it's rdf lib and in particular the extension that we're going to see in a minute when i do add that it's transforming this triple into a cipher statement because because we cannot rely on Neo semantics being installed on the server side. What we need to do is create a cipher statement that actually stores the RDF data in the form of a property graph. And that's what this Neo4j cipher indicates. So it's a Neo4j backed store, but it's also cipher based. So it does not assume that Neo semantics is installed. it's a Neo4j backed store, but it's also cipher base. So it does not assume that Neo semantics is installed. I see. So you can run this in anywhere basically. So that's- Exactly. You can run it in a self hosted as well. If you're running it locally, but you don't have Neo semantics and you want to use this, all that's happening is that the interaction between my notebook here and my Neo4j instance or in this case is purely based on on cypher over bolt so it makes no assumptions on what type of you know components may or may not exist on the other side and I say that because there's there's two RDF lib implementations, and that's what we're going to get in a minute now. So the next slide was what's under the hood. So we've seen it in action briefly. So that's essentially, there's not too much to it. I mean, you can just interact with RDF lib as normal, but it's back to Neo4j. But then what's happening? So what we've done is in the RDF.Lit Neo4j module, and we're going to jump into that code in a minute, we've extended. Let's make that a bit bigger, although it's not super useful here. But one of the things that RDF.Lit offers is an API, store API, that makes it possible. And actually, why not? Let's look at this, because that will show us. So if I go to my ADFLib. So we see that this description of the store API makes it possible for developers to create multiple backends. There's some available backends for RDF Live already, and some of them are relational. If you look at Berkeley DB, that's a relational database. You can create a backend of different types, even based on SQL Alchemy, which is another SQL general relational database module. So you can create the backend that you prefer. So it gives you that possibility. And that's exactly what we've done with RDF lib Neo4j. And that's what I can show you here here any any questions so far i don't know if there's if there's any anything coming up on the but we can i don't see uh any questions but um i'm happy to take uh any questions that are that are coming so yeah if you if you're wondering i i posted the links to RDF lib and all the other information in the chat, so you should be able to see them. I will post the API link that you just showed in a moment. Perfect. So, right. So this is the, I was saying the GitHub repository, again Neo4j Labs, that we've built these implementations. So I'll go straight to the one based on Cypher, but there's this code basically creates two different implementations of the store backend for RDF. Right, so the first option is implemented relying on having NeoSemantics in the server, and then that makes the interaction slightly simpler, and it can leverage all the capabilities already built in NeoSemantics. But because there are cases where we don't have NeoSemantics, and that's exactly the Aura case, we have an alternative version that's down here. Is it here? Hang on. I'm not following. Initialize the database. Check the following section. It's further down. Here we go. So this implementation is the cipher-based one and assumes no new semantics on the server side. So what's happening here? So let's look at the code very briefly. And I don't want to... It's rather simple code, I have to say but uh it might be interesting to have a quick look at it so uh there's i mean the the store api which is what the cypher neo4j store implementation kind of uh implements uh has a number of of uh of required methods so the open one uh that you remember tested the connection basically takes care of instantiating the driver and creating a session. So basically it checks that there is a connectivity and that we can write to the database. But what's more important is the implementation of the add method. So remember how we essentially kept calling the add method and passing a triple as parameters. So the logic here is pretty straightforward. So if you remember, there's two different types of triples. And we can go back to our example. There's the triples that have a literal in the object. So remember, a triple is subject, predicate, and object. So if the object is a literal in the object. So remember, triple is subject, predicate an object. So if the object is a literal value, so what we're doing is we're giving an attribute, a property to that node. So if it's a literal, it means that we are giving the node a property. And that happens here as well. But if the third part of the object is not a literal, but it's another object, then it means that it's a relationship. And that's the case when we were connecting Alex and Jesus, right? So the first check that we need to do to understand what we're doing, if we're creating a property or if we're creating a relationship, is to understand if the object, which is the third element in the triple, is of type literal or not. And that's exactly what happens here. So if the object in the triple, and the triple is, like I was saying, a subject, predicate, and object. So if the object is a literal, is an instance of literal, then what we're doing is we are creating a property value. And we deal with the data types and so on and so forth. But remember that ultimately what we're doing is we are creating a cipher that can be sent to the database. So before I look at the specifics of that one, what we're doing is we, in the case, let's say literal. So, the URI is the subject, right? So remember that we store the subject, the URI, the identifier of the triple in the uri property in uh in um in in the node that's being created and then the rest of the parameters are stored in a in in a resource i mean everything in neo4j when we store it we create a uh a node of type resource we can go to the, so all these things are in addition to being a concept or whatever types we give it, it's also a resource and that's a way of having a way of merging on a unique identifier, which is the URI. We make sure that if there's already a resource with the same URI, we don't want to recreate it. We want to reuse it. And in order to do that, we leverage the merge keyword. So that's what's being generated here. So we are merging on a resource node with a particular URI. So that's the URI that we get here. And then because we're in the context of a literal property, we're setting a property and with the value. So that's what we're doing. And we're using the format to pass the actual value for the name of the property and the parameter to the right. But you kind of, without going into too much detail, we see that what we're doing is we're doing a merge and setting a property, right? If the predicate is type, is RDF type, what we're doing, remember here, is we're setting a label because types in Neo4j become labels. So the second check that we do here is if it's type, then we're setting a label, right? And that's what we're doing here. So we're again merging on the same URI, but then we're setting a label and the type will be, I mean, the label will be the actual type that we get in the object. Does that make sense? And finally, if it's not a type or a property or a literal, then we will be creating a relationship. So basically we're generating the cipher that merges on the from and the to, because that's what we get in a triple. So we get the from, we get the to, and we get the type of relationship in the middle. So that's the from is the subject. We get the to from the object, and we merge the relationship with the property name that we get from the predicate, right? So, you know, again, there's some additional details in terms of how we, instead of running that directly, we can put that in a batch and we execute it if we are in this kind of batch execution mode. But essentially what we're doing is we're taking the triple that we get and we're generating a Cypher statement that basically stores that triple into Neo4j purely based on Cypher without relying on any additional element. So yeah, I think that's probably enough in terms of looking at the code. I don't expect people to be super interested. Probably worth mentioning that there's the other one that we said that relies on NeoSemantics. So there's a very similar implementation. But if we look at the add implementation of it, interestingly, all it does, and look at the difference in the code. I mean, I don't have to deal with all the exceptions. All I have to do is, hey, I'm going to call ntns import because this method knows how to deal with it. So all I have to do is take my triple and then pass it to that method. But then, of course, that assumes that this method is going to exist in my database, which is not the case in the other. So, right. So there's the possibility of working with RDF lib in Neo4j, both in the case that you have Neo semantics or where you don't, but because it was about Aura where that's not present, we've shown how it's possible to do that in a purely cipher-based approach. So this RDF-lib implementation just takes care of the translation and sends the cipher over bolt to Aura, and the interaction takes place just like that. Good. I'm going to take a pause here. I mean, I think that's the core of what I wanted to show today. And we've had a look at the source, so we kind of covered the content for today. So I think I'm going to take a pause here. It's slightly unusual that we're slightly short. We're a little ahead of time. Not a bad thing, I suppose. But I think it was particularly probably dense in terms of But I think it was particularly dense in terms of the code and all that. But any questions from your side, Alex? Does it make sense? It makes a lot of sense to me. I think it's very interesting so that you can use this with, like I just said, with NeoSemantics installed or no. or a particular user of RDFlib, I guess, makes it very easy to just continue using it and you have connected to Neo4j Graph Database no matter what setup you have been given. Let's say you have an IT department that sets up your Neo4j instance for you and they don't want to install NeoSemantics for you, you can do it. Or if they have installed it for you can you can still continue using it and I guess it makes it very easy to to continue working with with in in your in your environment and your current stack I guess where you don't have to learn how to work with Cypher and Neo4j and all of that. So you can just go with it. And on that point, I'm going to interrupt you. Sorry, Alex, but it's very important because, you know, we've decided to start with this Python implementation using RDF lib. But in the same way, there's this package of RDF lib in Python. There are packages in Java. There are packages in.NET. There are packages, you know, to deal with RDF in Java, there are packages in.NET, there are packages to deal with RDF in JavaScript, right? And we're talking to different teams that we're into, because basically what we've done is we've taken the input part of Neo semantics and we've kind of extracted it from the server side and put it on the client and make it a kind of a client-side feature. We've done it with Python, but if you're working with another programming language, the same thing can be done or even can be created as a, I don't know, let's call it a service like a Lambda or something where we just pass the RDF and it does the translation. So there's multiple implementations, but the idea is exactly what you say. So you can stick to your stack, to what you're comfortable with, whatever your RDF library, RDF package to interact with RDF, and let that take care of the translation into Neo. So that's, in a way, kind of a proposal. Is one possible. Of course, you're free to use it, but it can be used as inspiration to implement something similar in other programming languages. So very, very, very good point there. Yeah, I think that's very important to make it as easy as possible to combine the two worlds, graph and RDF. And that's, I guess, the goal of this work. to combine the two worlds, graph and RDF. And then just, I guess the goal of this work. So it's cool. I think you are looking forward to feedback. So if you try this out, if you have any feedback for Jesus, for the team that's working on this, do it via GitHub, I I guess or what would be the best ways to I mean the usual I mean if it's something specific about the code GitHub is perfect because all these projects are in GitHub so you can open of course an issue or even better you can you can submit a pull request and and any contributions are more than welcome but as an alternative there's always the the community where you can ask more general questions and and any contributions are more than welcome but as an alternative there's always the the community where you can ask more general questions and and we can point you in the right direction so any any channel is is it's great probably much better than the comments on on youtube sometimes you know we get a youtube no yeah yeah that's probably not the best way super welcome but for this kind of more code specific, you know, this type of, I mean, it's a lot easier. So, yeah, that's, yeah, absolutely. So feedback is more than welcome and suggestions and comments and even shared experience of what's it looking like for you. What are we not covering yet or what have you extended to cover? yet or what have you extended to cover and um and yeah so i hope that the you know the the ingest i mean the fact that that aura uh is your preferred uh deployment it's not an obstacle anymore for importing rdf data into your into your graph and and working with it pretty much the way you were used to with uh with new semantics although you know let's let's make it clear at the moment this is you know in in early stages and and we've covered the import part but there's a number of features that still not supported here for example the export part i mean it sounds like a kind of a symmetric one it should be relatively straightforward but this is work in progress so keep an eye on this and and we you should be seeing soon uh the the equivalent part on on soon the equivalent part on querying the Neo4j graph and streaming out RDF from it. So what I don't know is if we will get feature parity eventually, because there's a number of things like mappings and other types of things that we can do in Neo semantics. We'll see. Basically, again, this all comes down to people's interests. And if we see enough, enough interest, we'll, we'll, we'll consider implementing that or, or, or maybe we'll just keep it more, more simple and more basic for these, for this type of access. When you're using this, I mean, RDF lib, I guess it's used to, to work with RDF data with, with not, not necessarily Neo4j, I guess, but with other, other tools. Yeah, exactly. So can you, and this is maybe a stupid question, but could you use this to work, to do the same, to do, to parallel work, basically say, okay i have my my rdf data i want to one one one way goes into neo4j database because i want it there um to to know and then my in parallel it also goes to this rdf tool i'm i used to i used to work with in the past so just to see for transitional areas. Absolutely. So that's perfectly valid. I mean, you could have, you know, that would be, I'm going to just sketch it here. But you can say, hey, I have a, let's call it, that's Neo Graph. That's backed by Neo4j. But then I have, let's say, triple, triple graph that's backed by, and I'm not very familiar with all the implementations that exist there, but we can look at some of them. Where was that? So, here we look at the documentation, oh, that's the API. That's not what I wanted, but we wanted... Hang on. So, where was it? So installation, more examples. I have to say that this is you know not the best the best extra i mean i want to be critic but it's a bit tricky to to uh loading and saving so let's go here further help source code for developers i think it's here uh plug-in stores here we go so for example here you go so there's a a So, for example, here you go. So there's a Sparkle store, for example, or different types of the Berkeley one. I mean, when I use a relational, but you can, you know, Oxigraph. I'm not familiar with all of them, but these seem to be HCT. This is another triple-based store. So whichever is your preferred store, you can have, you can have your choice here. And you can work interactively with the two of them. So you can, like you say, maybe want to have a replica. And everything that goes to my triple store goes to my Neo4j graph. And again, the good thing is that the syntax would be identical. So I would be just writing ads to one and the other. One would be writing to Neo and the other one would be writing to the triple store. Or you can decide which triples go to one and which triples go to the other. I mean, whichever logic is the one that makes sense for you. But yes, absolutely. You can have as many as as you like um and work with them in yeah the same way you would i think that could be interesting uh and particularly for for users that have been working with with rdf previously but not with graft of india for j and want to see uh how how it looks like uh but continue working with the data and then see the advantages and then maybe get into the world. Yeah. And I mean, we haven't touched on it today, but there's the, I mean, the whole, I mean, if you, what you're looking at is that a migration, of course you can, you can suck data from one and move it into the other because, you know, in the end, RDF is a, is a data exchange model. So that's the whole purpose of it. So whether that's to move data out of a triple store into Neo, or maybe query something we've done in the past here, right? Query a triple store like Wikidata, DBpedia, any of those with Sparkle, stream triples out and use that to populate Neo4j. So that's absolutely fine. I mean, conceptually, it's exactly the same thing that we've done here, except that instead of reading from a static file like this one that we were showing here, right, instead of reading from a static file, you would be reading the output of some execution, whether that's reading from a triple store, right? Totally. So it's, again, the important thing is that you are working with RDF-Lit pretty much in the same way you've been working with all the time, except that, you know, by using it as a side effect somehow, your triples are being persisted in Neo4j. So that's the benefit of it. Yeah. Very cool. We have comments from Peter Dimitrios. Performance looks pretty good for basic RDF Python notebook with Neo4j Cypher for doing analysis using Cypher queries. Well, if that's, and I've just realized that I didn't do it. I mean, if that performance looked good, I mean, that's nothing compared to what you would get really because I should have created an index. I mean, that's nothing compared to what you would get really, because I should have created an index. I mean, that's something that I should have done and I didn't. So it's because, you know, keep in mind that we're constantly merging in the database because it's all about, you know, when you have triples that refer to each other, same subject, you have to look it up and reuse it if it exists. So you're doing constant merges, which is lookups and writes. So if you back that by an index it's going to be a lot faster and i completely forgot to do that but essentially all you have to do and i have my my notes here but yeah i mean kind of interrupted you there alex but uh yes it's quite quite decent so what you would do normally is in a in a first uh interaction you would probably just create uh an index that's what i normally do you create a uh you know or if a constraint would be even better but an index is fine you would say i want the uh uri property in a resource to be unique and once you do that, essentially, we can do it automatically creates an index right for the for the constraint. And something like what we just done, I hope it hasn't broken things. But now, that should be significantly faster, right? Cause it's, I don't know if it's still keeping the session, but anyway, there you go. So you see how it was, you know, it took, you know, it still is a small dataset, but the performance is decent, right? So it's, keep in mind that we're translating into Cypher and it's not as efficient as with new semantics, which is a much more kind of lower level uh type of access but but it's it's quite decent so we're looking at you know taking just uh you know getting to to a few thousand thousand triples per second right so here that's uh that's the kind of yeah i think that's always a good way to do it. I mean, indexes help a lot. Another question from Mark. If you add extra properties with Cypher to a new node generated by RDF lib, does the RDF view still work? I guess that's the expert view, which is not implemented. Absolutely. I mean, the thing is at the moment, I mean, it won't be a problem at all. So we can just, I mean, whatever's persisted on the Neo4j side can be modified pretty much in the normal way. So you can have a, I don't know, take a concept and you can delete it. You can add a property. You can do whatever you want. That's absolutely fine. At the moment, the RDF implementation is one way only. So it only writes to Neo4j from RDF. So it's absolutely fine. It's a very valid question when we talk about regenerating the RDF or reading RDF out of Neo4j. So maybe in that case, and we'll figure out how the implementation goes, maybe there are some requirements for the information in Neo4j to be serialized correctly, or at least consistently with the one imported from RDF. But in principle, you can hit the graph from two sides. So you can write to your Neo4j graph from RDF with RDF lib, but at the same time have some cipher scripts from this side or have another element. That's the whole idea of Neo4j is built to take multiple, multiple write write throughputs and write workloads with through any channel. So shouldn't be any problem at all. And it kind of makes sense, right? Because sometimes what you do with importing RDF is enriching a graph that already exists. I mean, in this case, we started from scratch with an empty or instance, but sometimes very often, I i don't know you have a graph already and you want to enrich with geographical information with organization information with you know stakeholders so it's very common that the graph is is populated not just uh via rdf but through other other other channels and other other ways yeah that makes a lot of sense. Very good question. Peter, different Peter, Peter asks, are you, Jesus, are you working on a book on semantics? I'm doing work here and there. I mean, it's not specifically on semantics. I mean, on the new semantics. And I know that that's the problem is there's so many hours in the day, but I, you know, I feel sometimes guilty about the fact that there are some features in new semantics and in the different components. For example, this one, that's not sufficiently documented, but I think it was, you know, more interesting to run this session, to put out some examples, rather than write a proper documentation. So the answer is that I'd love to write a book. And even before a book, I'd love to write some proper documentation or get some help on that. But I'm struggling with that. I mean, this is Alex. Are you going to help me on that? ALEX KOLBARNIKOV me on that? Writing the book? Of course. We can call for that. Yeah. Let's see. Yeah, it's always difficult. I'm doing the GraphQL Book Club with Will, and I think Will has very enjoyed the process, but it's a lot of work. It is indeed, yeah. How many hours you need to pull into this book. It's a lot. It's a lot. Yeah. Nights and weekends. And yeah, it's a lot. Okay. Peter, different Peter again is interesting to have some guidance on use of constraints or just indexes on things don't want to go overboard but eventual generic tuning techniques for Neo4j could apply. I guess that's true, right? I mean if it works generally for Neo4j, if it's a good best practice it will work in this case as well yeah absolutely absolutely yeah in this case the only i would say strong requirement is uh it's not even a requirement because you've seen that we've run the session without it but the thing that will really make a difference is to have this constraint this index on the uri because that's the fundamental element when when working with rdf the uri is the unique identifier and it's, you know, the whole RDF graph is based on these notions. So you have to have a fast way to access and to retrieve and to modify them quickly. So that's the only thing. All the rest is, you know, it will not have an impact on the RDF input, but it can be beneficial in the usual way, depending on the type of of queries or the type of analysis that you run on your data. Indexing can definitely be a useful tool. I would recommend maybe have a look at the manual or the developer guides that your team, Alex, produces. It's probably the best reference that I would recommend. In the docs on Neo4j there is a lot lots of uh you know important guiding uh and and help and there is something on indexing which i don't why find right from top of my mind but i'll work it uh into the uh the description of of the youtube video so you find it there in a second i'm sure the graph Academy also touches on that. So I know. It does. Yeah. Yeah. I mean, this is, yeah. So it's everywhere. Cool. Yeah. I think that's all the question. Thank you for your questions. Thank you for the participation again, if you try it out and give some feedback afterwards. So it's always, it would be interesting to hear your thoughts from now you've seen it when you use it. And then we can see what else there is we can implement or what's a, you know, are you RDF lib hardcore user? And this is the fun feature you'd like to see, let us know. So we can see, okay, this is maybe more important than something else. And we can prioritize, I guess, development. And I would say also from the point of view, you know, because the whole driver to this, I mean, RDF lib is probably the most popular package to deal with RDF in Python. But the whole driver for this is being able to support, I mean, RDF lib is probably the most popular package to deal with RDF in Python, but the whole driver for these is being able to support, you know, the integration with RDF from Aura. And, you know, if there's something specific that you want on your Aura deployment or your Neo4j Aura graph that's currently not supported or that is only partially, I mean, this is the kind of thing that we also want to want to hear and as usual uh the code will be on will be on github uh you know probably show immediately after after this this session so all the code that we've run in the absence of a good documentation at least we'll have the we'll have the examples the example exactly in the video i mean mean, this helps obviously a lot with follow through, at least as a demo kind of step-by-step guide and then do it yourself and give it a spin and that usually helps a lot as well. Yeah. All right. I think that's the end of episode number 12. So one year of going meta concluded. So it was really- High five, Oleg. High five, exactly. Really good, good, good, good fun. Good, good series. Looking forward to season two, as they say. So I guess we continue with this looking forward to next next month hopefully back on the usual slot the Tuesday Tuesday of the month right first Tuesday of the month exactly so that's what we try try to to to be back And yeah, if you are interested in more Neo4j live streams, there's a couple of going on this week and over the next couple of weeks still. So tomorrow I have Tom Azel join me to talk about Wardley maps and how they work in Neo4j. So that's an interesting session, I think tomorrow afternoon, European time. And on Wednesday, we go into the next chapter of the GraphQL book club. I just mentioned earlier, Will Lyon, our colleague has written a book on GraphQL, a full stack application development. And we are going over each chapter in each session. And this is chapter seven. So if you're interested in GraphQL, join this. If you're interested in seeing how, what is a Wadley map, how does a Wadley map work and how to combine the two with Neo4j, tune in tomorrow. And with that, yeah, thank you, Jesus. Thank you for the work on all the episodes, specifically on the, now on the, on the i guess neo4j aura import functionality of rdf that's really cool and thank you all for watching yes thank you thank you bye | Going Meta - Ep 12: Importing RDF data into Aura with Python + RDFLib | 4,057 | Neo4j | 20230116 | Episode 12 of Going Meta - a series on graphs, semantics and knowledge
Jesús Barrasa: https://twitter.com/BarrasaDV
Repository: https://github.com/jbarrasa/goingmeta
Going Meta 12 Month Recap: https://www.linkedin.com/feed/update/urn:li:activity:7016121635761971200/
neosemantics: https://neo4j.com/labs/neosemantics/
RDFLib: https://rdflib.dev/
RDFLib Neo4j: https://github.com/neo4j-labs/rdflib-neo4j
RDFLib Persitence: https://rdflib.readthedocs.io/en/stable/persistence.html
0:00 Intro
3:38 Topic Overview with RDFLib
11:11 RDFLib with Neo4j
16:10 RDF Data Handling
24:10 Parsing RDF Data
30:10 Working with the Data as Graph in Neo4j
35:30 Code of RDFLib with Neo4j extension
46:10 Summary
51:25 Q&A
1:05:45 Wrap-Up
Check out https://community.neo4j.com/ for questions and discussions around Neo4j
#neo4j #graphdatabase #semantic #knowledgegraph #rdf #semantics | 2024-09-03T11:48:37.466948 |
https://www.youtube.com/watch?v=5_WXr0GtVas | Hello everybody, good morning, good evening, good afternoon and thank you too and well happy 2024 I guess I should also include to this one. Hi Jesus, how's it going? It's going very very well thank you happy 24 to you too. Thank you very much yeah good to see you too it's it's been a while it's I must say I'm just re-emerging out of this time, you know, between Christmas, New Year, what day is it actually? But yeah, it's been good. And I'm happy to do this very first live stream of the year with you together, Jesus. I know we kind of discussed whether it would be a bit too early in the year, so I'm sure we're catching someone still with one leg in vacation, one leg coming back. What better thing to do, even if you're on vacation, to watch an episode of Going Meta, right? Yeah, totally. Just a little service update. It's Thursday today, the 4th of January so just in case you're wondering that's not a variation i mean but we still are the the webcast that runs on the second no the first tuesday of the month right yeah some minor exceptions i mean it would have been a bit you know really really too early in the year right yeah yeah yeah that would have that would have been would have been a tough sell i think but yeah um but yeah today uh the fourth so i'm hoping everybody had a good a good new year's had had some some some nice days off if you had them or you know if you work through it all then obviously um welcome nevertheless to to day's episode um it's already episode 24 so we are we are well in into uh into the higher numbers now almost and we have yet another topic it's like a little mini series within the series i would say so i think let me quickly look at this yeah the thing yeah we started rich and so so timely that it's difficult to get out of it, right? So I'm sure we'll take a break. But we're kind of connecting it to something we have done in the past, right? So we're staying on the topic of knowledge grabs and LLMs and RAG. But we're connecting it to ontologies and to kind of the essence of our series, right? And we're going to hopefully go meta today. So we're, again, building on what we've done over the last two, three episodes, and bringing in some additional elements. So that's the plan today. Sounds good. Sounds great. I'm looking forward to it. If you haven't watched the previous episodes, don't worry. Everything is on our episodes, don't worry. Everything is on our overview page. If you click on YouTube, you'll find the playlist there as well. I'll post a link in the chat in a second so you can catch up on the episodes of the past. But you don't really have to. You have to, obviously. But you don't have to do it now. You can stick around and you can watch this one. As Jesus said, I think we're going to tie together a little bit to the previous ones, obviously. But I think it should be okay. Sure. I'll open with a brief recap of the previous episode and and yeah like you say anyone who wants to dive deeper you know all this all the assets all the recordings are available so feel free to go and and consume them at a time that works for you should I um actually I'm going to share my desktop as usual. Let me do my entire screen here. Let me know when you can see it. Actually, quite conveniently, because after choosing and preparing the session for today, I got tagged in this note by Dan and on something that uh jason had shared today and it's you know this time of year when everyone publishes you know the predictions for the year right yes it was very interesting right because they it's totally the topic that we've been working on for the last few episodes and that we're talking about today right and one of the predictions and it's actually the one that Jason mentions, is all about how RAG will evolve from semantic as in vector search to something richer, to something like based on graph networks, right? A lot of what we've been doing. So it was another, you know, confirmation of what we're doing is perceived as really something that's valuable, and it's the direction that this space is going to take. So it was quite nice to get that note from both Jason and Dan. So, yeah, I was going to say that, as usual, I'm going to open with a Yeah, I was going to say that as usual, I'm going to open with a couple of slides to set the context and then jump into some code. So I hope it's all visible and readable. But yeah, the idea is talk about still RAG, still integration of knowledge graphs and large language models. But we're bringing in ontologies. And let me explain how that happens. Again again two minutes on on what we've been doing so it's all about rag i mean for those who have been uh who have missed the previous episode so have been living under a row under a rock uh rag is this uh pattern this architectural approach if you want of uh this integration that for interacting with large language models that involves an external source of information, right? So we know that large language models are great at the creative, generative capabilities, the language capabilities, they can do analogies, they can do reasoning, they can do generalizations, they can generate language, but sometimes the knowledge of the world is limited and it's even more limited in uh your enterprise specific knowledge right so they're they're not being trained they've not been trained on that so we have to inject that knowledge somehow and rag is is the kind of most widely accepted approach to do that so when we ask a question to an llm first we go out to an external source, Neo4j Knowledge Graph in this case, we retrieve all the information that we believe, that we think it's going to be relevant to answer our question, and we pass it to the LLM as context so that the answer generated is based on this information. So we kind of take control over that. And that's the idea of RAG. I mean, like I said, we've been talking about it for weeks, so I won't spend much more time on it. What was interesting is recently in the last episode, we focused on what we call advanced RAG patterns. And these are the ones that actually combine two elements. One is, I mean, basically the two types of semantic search, vector search and graph-based search. And it's really when it becomes interesting, right? When you can combine the two. And in an attempt at explaining how that works, I've created this visual, which I hope is going to be useful for this. So we have a knowledge graph, right? A knowledge graph that captures information about your organization, about your problem, about your domain, right? And your knowledge graph will contain notes and relationships, obviously. But some of these notes will contain properties that may be fragments of text. Maybe because that comes, I mean, that's human-generated, human-entered information. Maybe because it's the result of parsing unstructured documents. There can be many reasons, but there may be fragments of natural language in there. We want to make use of that. The best way to process and to do semantic search on natural language is to create embeddings and index them in a vector index. The way I see that is somehow like a projection of these nodes in a different search space. I mean, I put it like a plane there. It's not a plane because it's multi-dimensional. I mean, we cannot even draw it. But the idea is we pick some of the information in these nodes, we vectorize it, and we project it over a different search space, which is this vector search space. And what we can do there is we have like, let's say a question or a string or an image or an audio or whatever element, and we can do a vector based search that basically will scan that the index and return some results. So when we do a search we find some I mean search is based on on on distance right on on this distance in a back in a vector space so that will be typically Euclidean distant cosine a distant but the idea is that you find the closest vectors to the one that your your search element right and for those vectors what we're going to do is we have to get back to the graph because these vectors are stored in nodes and that's where we do this kind of second step where we enrich the results so we not only uh uh stay in this vector search but we once we have the vectors we navigate back to the to the graph and we can expand we can because we're aware of the structure in the graph we can enrich these results and keep in mind that all this process of going to the server to the vector search to the vector index and navigating the graph is what happens in this intermediate step in the rag interaction so we are in the process of retrieving all the information that's potentially relevant to answer a question or to generate some content, to provide an answer to something. And that's the information that I'm going to pass to the LLM. So I'm interested in having the highest precision, the highest accuracy in this information. So I want to make sure that I combine in the most powerful way this vector search with the graph search. And that's what we've been doing. You know, I don't know if this sort of visual helps or confuses more, but that was my attempt at describing what we've been doing before we go into the code. Right. So, yeah, I think it's helpful. It's helpful. I think it's helpful. It's always especially with I mean, you notice we talk about the graph space since many years now. And I think now, I think we come at a time where people understand when we talk about graph, we don't talk about charts. So at some point, people understand this. now we bring in the vector comes around it's like a completely new concept for many many people it's again very very abstract very unlike graphs very abstract very very difficult to grasp what it is and then you know i think this is a good visual to understand okay this is this there's the graph there's the vector there's the the points in this you know exactly crazy grid and then combining and somehow that that also explains these these kind of prediction i mean it's not a prediction because it's more a reality than anything else that we saw on linkedin before so yeah you could you could get some valid results with just vector search but you would only be doing half of the half of the job right so you could deal with the with the simple the simplest cases where you kind of just chunk unstructured data and you put it in a vector but when you combine the two cases where you kind of just chunk and structure data and you put it in a vector. But when you combine the two is when you can be really precise, really rich, and really accurate. And that's what we're gonna be. We've shown it in the past, and we're going to reiterate that idea in today's episode. So that's RAC. That's what we call advanced RAC because it combines vector search with graph exploration and um and the example that we used uh in in last uh in last episode was um uh if you remember that of uh it was a legal document it was like some legislation from ireland where we have a just a pdf right or even an html version but it was a text document it was there was no structure i mean there was some structure in it because the document has kind of parts, and the parts had sections, and the sections had definitions. So what we did, and I'm not going to go into that today, is kind of try to create a graph representation that somehow captures the structure of this document, and that's what we did. And the thing is once we have it in the graph this graph reflects this idea right so we have the legislation as you can see the orange note that can this is just one of the one of the paths that takes you to one of the definitions but it will be a kind of a tree uh there will be multiple of those in the documents so you have the legislation with parts with sections, and clauses inside the definition. And the thing is, while breaking down in small chunks is good in a way, because it will give you much higher precision when you do vector search, sometimes it can be a problem because if you get just a portion of a definition, because if you get just a portion of a definition, let's say you have a high match, a high proximity on a vector search on this portion, you might use these as the relevant fragment of information for your answer. But the interesting thing is that these should be taken into context. And this is only a clause inside a bigger definition. And this idea would be lost if we don't do the graph exploration and that's exactly what we what we did last time and the way we did it uh um and and you can also look at the other slides from previous uh episode was by uh and we use langchain right we're gonna go into the code we're gonna use python and langchain is is super convenient to to kind of orchestrate the interaction with the knowledge graph and with the LLM. And basically what we did is we have this kind of vector index handler where we can do vector search and it had this additional parameter, which was the retrieval query, where we could inject any random cipher, cipher is a query language for Neo4j, of course, that would basically implement this navigation in the graph that we saw visually before so to the results of your vector search when you do a vector search in on this handler every node return every match return you would apply this uh contextualizing query so you would run a cipher on each of the results and that's it so that was that was what we did. And that was great. It's super powerful, but it had some limitations. And I mark it there. So it's a bit rigid. Why? Because these queries had to be specific of your model. Remember? So of your model, of your graph, basically. So this cipher has to be prepared to explore, basically know the kind of information that this vector search is going to return. And based on these return nodes, know how to navigate that. And that's entirely graph dependent. And while that can be acceptable, what we're going to be exploring today is how can we make this a bit more flexible, a bit more dynamic? And that's where ontologies come into play. Make sense? Yeah, that sounds cool. And the idea is pretty simple because we've done it before. I mean, I put just a brief pause here just to, I mean, that's just animation. But you probably remember in episode 15 where we built, again, it was another ontology-driven, I mean, we call it semantic data, but it was a streamlit application that was entirely driven by an ontology. So the idea there was, okay, we're going to build an entirely generic streamlit application that probably showed some elements in the graph, but there's no cipher hardcoded in there. So I cannot anticipate what the shape of my graph is going to be, what the schema of my graph is going to be. So my application, what it's going to do is it expects some ontology, some schema definition on the graph side. We're going to read it, and this is going to drive the behavior of my application, and that's going to drive the kind of queries that i have to to create the idea again if you remember was instead of having bigger and heavier application logics which means a lot of a lot of code on this side maybe thinner application logic and more information on the data side so we put a bigger description in the graph and we'll see that an ontology is also a graph and my application is going to be dynamic, entirely dynamic. It's going to go read your ontology and drive its behavior from there. So that's what we did in this Streamlit application. And what we're going to be doing here is exactly the same. Basically, instead of giving you a hard-coded cipher query here, what I'm going to do is go read your graph, get the ontology, and based on the ontology that you share with me, I will derive the behavior that I want to implement. So it's going to be entirely dynamic and driven by what I put on the graph side using ontologies. Does it make sense? Yep, that makes lots of sense. Sounds good. Perfect. OK, so that was that was the the idea in that episode, which I think it's kind of relevant for these for this one. And well, from there, let's go and. See some code. So, OK, as usual, I've created a sandbox that i will destroy at the end of this session it's an empty one and i'm going to be driving from this notebook i hope you can uh it's readable is it big enough alex is it maybe make it a little little bigger okay let's try yeah i think that should be good i hope the um yeah the text fits in the in the page but well initially we just uh import i mean we're writing python we're going to be using langchain uh open ai i mean gpt uh as the as the uh um llm and neo4j as the knowledge graph for RAG. So here I'm just connecting, I've pasted the IP. So when you, this of course, like always is gonna be available in the GitHub repo in at the end of this session. And basically what I do is, you know that when you access your sandbox, you have all the connection details, user password and the URLs. So I've copied this and i'm um oops i'm here uh these should be my credentials so i can now connect uh from my code so one of the things that i can do from langchain is connect to the graph uh in in a standard way like using the the driver and running some cipher against it. And I'm going to use that to populate the graph. And I'm going to rebuild the graph with the legislation that we used last session. So I'm connecting to my graph. I'm creating an EFJ graph handler with my credentials. And I'm going to run these three steps, which basically um the legislation graph into my instance so hopefully in a second here we go we should see all the elements loaded here just as a way to uh to check that everything looks as expected we can run this query this query and we should get the expected results basically what I'm doing is I'm getting the path the tree that gets from the root element which is the legislation all the way to the definition of the term unavailable deposit can you zoom in here as well a little bit? Let me make this a bit bigger. Yeah, perfect. So, yeah, I'll show the cipher again. But basically what I'm doing is, okay, I follow, in this case, we created intentionally a pretty simple graph using just a contains relationship. That's something that can be, you know, modeling whichever way you prefer. And basically the idea of today's session is that there will be elements that change and they have different names in the graph and we want to be able to have our rag application dealing with with those in a dynamic and flexible fashion so basically i'm starting on a root that contains the asterisk is a variable number of hops. So there will be a chain of contains that take you all the way to a definition. And then the definition I'm saying that is that of the term unavailable deposit. And the definition will have clauses, right? And these are the three last elements. So we can see here that the coloring reflects that. So the blue ones are closest, the green is the definition, and the rest are sections and subsections inside the chain. So it looks like it's loaded correctly. So we have the tree that represents that portion of the legislation. And then what we did here is, you probably remember, is we are going to vectorize some of the content so if you remember there is um these definitions uh contain fragments of text right for example this is a definition of a deposit that has some text and that we might want to to run some vector search on it so i'm going to take these definitions along with the term and um and i'm going to um to vectorize that my strategy to do that and you can do it in many different ways i mean because uh when you use uh langchain here what you have to pass is the a node label so which the which node types you want to embed in which of their properties so you specify an enumeration of the properties that contain a text that you want to embed you want to vectorize an index so my strategy here because there are nodes of different types because we have like we saw we have definitions we have clauses we have all the types of elements, the ones that I want embedded, I gave them an additional tag, which I call embeddable. So that's my way of, I added to the closers, I added to the definitions and to all the elements that contain some properties that I want embedded. And like this, it's easy. I mean, I have a secondary, an additional tag, an additional label that then I can use to say, go and get all the embeddable nodes. And in particular, these three properties and go create the embeddings. So this is using the OpenAI embedding service. You can use your preferred one. So there is not prescriptive here. LanChain supports integration with a large majority of providers. So you can use Vertex, you can use Bedrock, you can use even local, you can use Ollama if you have a local LLM. So there's many options there. So what this has done is it has created a vector index that I've called vector index, not very original, and it's embedded these uh these values so if we go now to a graph and i do schema i should see a vector index which is of type vector that has been populated for vectors of type embeddable right and every node at the same time you will see that uh i'm going to have to rerun this query to see not this one the previous one and we will see that uh i'm gonna have to rerun this query to see not this one the previous one and uh we will see that uh definitions now have not only the definition before or closest but also a representation a vector representation of the embedding right and that has been persisted as a property good perfect so we are all set to start running some queries. So back to our code. The first thing we're going to do is let's get a question, the one that we used last time. Let's get the first one. Our question is, what is an unavailable deposit and how does it become unavailable? That was exactly the definition that we saw a minute ago right so we see that an unavailable deposit that's the definition means a deposit that is due blah blah blah blah and then it further defines it by saying uh it can be either and there are three clauses when the competent authority blah blah when a court has made for reasons which are directed i mean an enumeration of clauses that that further specify the definition of a of an unavailable deposit it would have been a couple of lines paragraph lines different you know different paragraphs in the in the actual pdf oh exactly exactly but but the point here is that you have to take the whole thing in in context because otherwise you would get just a fragment of it and that's what we saw here right so Exactly. But the point here is that you have to take the whole thing in context, because otherwise you would get just a fragment of it. And that's what we saw here, right? So if I take this question, I can do a vector search. So I have a vector handler now. That's what I, this created the vector, created the vectors, the embeddings, and populated index, but it gives me a handler that I can use to run some queries. And here I'm doing a vector search. I'm doing a similarity search with the question that I've selected. And this returns a number of matches. So this is in the visual that we used before, we're in the projected space, and we're finding the vectors, the fragments of text in our graph, in our legislation, that are closer to our question and the question remember is what is an unavailable deposit and how does it become unavailable and luckily enough yeah one of them is uh it refers to to an unavailable deposit right and and but it you know it's fragments of them some might be closer some might be uh incomplete so there you go so that's the the kind of results when you do a vector search now because these three components these three clauses make sense in context and all together what we need to do oh and let's well before i go into that we build that this vector search in what's called a rack chain right right? So we, yeah, lang chain gives us the possibility of creating a retrieval Q and A chain, which basically implements the rack chain that we saw before. So it will go and do the vector search that we just saw here. And with the results, they will be passed to the LLM and the LLM will return an answer. So basically these orchestrate the whole process. with the results they will be passed to the llm and the llm will return an answer so basically these orchestrate the orchestrates the whole process so if i run if i create this handler and run the question uh it will tell us you know in natural language that the definition that mentions uh oh sorry i why did i that's That must, it looks like it seems like it's taking a, oh, I hadn't run this one. Did I not run that one? Oh, that makes a lot more sense. Unavailable deposit. Yeah, I was getting, I think I didn't click on this one and it was getting the results from a previous execution. Anyway, when we have an unavailable deposit, the closest answers are, yeah, the definition of an unavailable deposit, but then there's the closest and some other fragments sorry maybe people were a bit confused because the results were not making a lot of sense but anyway these are the vector results that match this search and then when we run uh the the whole chain the rack chain that does the vector search passes it to the lm then what we get as an answer there you go is an unavailable deposit refers to a deposit that it's due and playable so it looks like it's picking something right probably the first result which is a good part of the definition but we know that it's not complete so that's the kind of experience that we're going to get if we go just with vector search so that's exactly what we wanted to implement these advanced patterns. And that's what we're gonna do. I mean, I'm not going to repeat exactly what we did last week, but what we did is we implemented, okay, we search and then we, on the results, we navigate to complete the picture and provide a consistent context. Now, what we're gonna do is I'm going to define, this is a bit large, I don't know if I'm going to, I mean it's not very large, it's just two, it's a function, a couple of functions, but basically what this function is going to do, it's going to generate dynamically the context query, because further down, as we're going to see, I'm going to create this, as I showed in the slide, I'm going to create a contextualized vector store that is, again, another handler on the same vector store, but I will inject this retrieval query at the end. So I'm going to say, do the same thing that we've done before, but to your results, apply this contextualization. And this is not hard-coded anymore. This is going to be run dynamically. And this function, getDynamicContextQuery for a particular pattern, is the function that I was showing here. And I'm going to try to go into a bit of detail here. So the first thing, it's going to go and get the ontology from the graph. So in order to have that happen, and we see that there are resources, there are subproperty odds, we'll make sense of that in a minute, we need to have an ontology in the graph. And we don't have it at the moment. We only have the graph that we've created. So let's probably go and populate that. To do that, we're going to use you can use your favorite ontology editor I'm going to use protege which I've used in the past that's the local I mean the desktop edition of the the web protege that we've used also in other episodes and what I'm going to do is I'm going to basically annotate my ontology so in my ontology describes the content of a graph right it's a kind of a an explicit description of the graph in a way we could we get something close to an ontology when we do this DB schema visualization so if I do schema visualization wow that's that's not very good that's why an anthology is much better i mean what we're doing here and yeah that makes sense because because of my use of of the related to uh and um and the contains relationship so essentially we have um um this is clause, this is the legislation, so this shows the different entities in my graph, and then the possible relationships between them. This is of course not formally described by anyone, this is just learned from the graph, and because we use the same two relationships, related to and contains, you will find them across every possible pair of nodes which is a bit confusing and not very helpful in this case so here's where you will probably want to create a real ontology with a formal definition i mean pretty much the same thing which are the entities and how they relate to each other and eventually annotate them as well which is what's going to be useful for us what's going to be useful for us so let's do that so i go back to protege and i'm going to say okay i'll create my ontology and i'm going to give it a uri for example i'm going to call it going live and this is my ontology uh let's call it it's going meta24 and this is the legislation. Whatever, I mean, you just pick a name. For example, this one. And I'm going to import an ontology and I put it here because I've created a code ontology for rag patterns that just defines a couple of patterns that I've implemented. These are the ones that I've implemented. So these will be extended, of course. This is a turtle definition, but basically it describes there's a further detailed relationship. So I'm going to describe that there are some relationships in my graph that are useful to provide further details. That's the idea of the definition before. So when you have a definition, the definition can have sub clauses. So this relationship, whatever it's called, contains in my my graph but it could be called whatever you want in yours i'm going to annotate it and say this is a relationship that is of type further detail relationship because these relationships are the ones that my rag application are going to take to use and expand the result so that's what i try to describe here further detail relationship is a relationship that connects an entity to related entities that provide further details about it, right? I'm also defining another one, which is the one that goes backwards. It can be seen as a synonym. I don't know if it's exactly the same. Is this readable? Should I make it a bit bigger? But I call it inverse contextualizing relationship. So what happens if I get a portion of a relationship, sorry, a portion of a definition or a portion of a document? Sometimes I want to put it in context and I want to navigate to the parent. And that's what I think Thomas Bratenich and others have called the parent pattern. So it's a way of going to a container and then retrieve all the siblings to get not only the exact match to your vector search, but all the additional elements that are connected to the same parent. So it's a way of bringing, again, additional context that's potentially relevant for the LLM to generate the answer. So these are the two relationships that I've defined. And I've implemented the associated behavior in my RAG application. fun and and i've implemented the associated behavior in my rack application so these are the two and i'm going to import this uh these ontology in in protege because i'm going to extend it so i can import ontologies here i'm going to say that i am going to import an ontology that's located on the web and i have a i've added a bookmark here so this is my ontology again all this is is public and we'll add it to the repository but uh if i continue uh it will import it finish and it's all done so this ontology has been imported uh and now if we go to my entity view there's no classes defined that's what i'm going to do now and uh and there's no object properties well there's the two that we defined there's the further detailed relationship and the inverse contextualization relationship okay so let's I don't know let's create a class for example if we go to our graph you probably remember that there were definitions and definitions contain clauses right the way to do to represent these in an ontology would be something like okay there is a class I'm going to add a subclass that is going to be called a definition good so I create this entity and there's another entity which is a close I'm not going to create the whole ontology but you see what I'm going we've done these in other episodes I believe in this one with streamlit so you have the whole condition of the ontology but basically you describe the entities in your the types of nodes in your in your graph and then you can define the relationships between them and in particular here i'm going to create the relationship that exists between these two which is called contains it happens to be the same across all the graph it's probably not the case in in any other real we've called it like that here in particular here let's go back to my project, contains is a, let's call it is an object property. An object property is a relationship. I mean, in in RDF in OLL, which is the language that we're using here, because the definition between object properties and data type properties in a Neo4j graph in the property graph, an object property is a relationship. And a data type property is an attribute as a property of a of a node. So basically, attribute there's a property of a of a node so basically those that have a literal value that can be a numeric a string a boolean an array so in this case we're defining the relationship contains so contains is a relationship but and it's also a relationship we can say that um uh it has a domain and a range it will remember that domain and range is the definition of what entities does it connect so the domain of this relationship will be it starts from a definition and it ends the range ends in a clause ends the range ends in a clause okay so basically what we're saying is that the contains relationship it's it's not entirely true because we're using it for other cases but in this the fragment that i'm interested in is the one that connects the definition to the clause and i want to specify that my contains relationship is a further detailed relationship is a relationship that actually i want my rag to use to to further specify to further contextualize the results of the of the vector search so that my way of specifying that is saying that this is um a sub property of further detailed relationships so if i do, it will naturally put it in the hierarchy under its parent. And that's, I mean, I'll stop here. That's enough for me for now. But you see how we could define the elements in my ontology and annotate them. So what I've done here is I've said, okay, there is a relationship called contains that connects definitions to clauses. And this relationship is very particular because it's a further detailed relationship according to the RAG ontology. What my RAG general purpose engine is going to do is going to go and find that there is a further relationship existing in my ontology and it's going to generate cipher based on that. So let's first save this. So if I do file, save, and we're going to generate cipher based on that. So let's first save this. So if I do file, save, and we're going to choose. I mean, we could pick any syntax. I like turtle. I think it's particularly readable. So if I save this, I'm going to give it a name. For example, I don't know, something like going meta24 onto legislation. And I save it locally perfect and now I'm going to show you what that looks like now it's in my local drive and this legislation if I open it with my text editor, it looks something like this. Let's make it a bit bigger. And again, I don't wanna go into the details of the syntax, but you see the elements defined. So we've defined, I mean, we've created an ontology, that imports the rag ontology, which is a public one that we have created, but it's public. And it defines uh one object property the the properties contains and it's an object property that's a sub property of a further detailed relationship and it's a property that connects a definition the domain is definition and the range is a clause which are themselves classes so this is all right so we have an ontology definition. It took us two minutes. It's a limited one. It's a small one. Now I'm going to put that in my graph because this is what my RAG application is going to go read. And based on that, it's going to build the relevant cipher. So how do I do that? Well, I can go to my graph and i can import ontologies using near semantics so it's i'm in sandbox so it's already installed i can do something like pull antennas onto import actually before we input let's uh let's preview let's just let's see what it looks like and um actually i'm i mean i could fetch it from a file but I'm going to literally copy paste it and use the inline flavor of the method so if I do that and say that it's turtle all I'm going to do is like I said paste my ontology here I'm literally copying it from the file that I just showed you and pasting the content here. So these ontology import ontology preview, they have the two flavors, the fetch where I can say, go read this file and take the content and the file can be local, can be HTTP, can be FTP, whatever. Or I can just inline the ontology. And here what I've done, like I said, is I pasted the file that I just showed you. It's really copy-paste, right? So I'm not loading it yet. I'm just previewing it. And if there's no errors here, that should show me, well, more or less what I was expecting, right? So there's a class, sorry, a relationship, right, as you can see, called contains. let's make it a bit bigger so contains is a relationship you can see it here that has a domain definition and a range clause right so it's a relationship that's established between definitions and clauses and it's also a sub property of a sub relationship of a standard one called further detailed relationship so that's exactly what i wanted to do make sense yeah so it tells me basically that this is part of of a of a you know like graph which which is not not a you know how do you say it say, it's part of a tree, if you will, and then you can, when you look at it, you should be looking also at the nodes and the relationship that come before it, so it knows it's not stuck on this individual point. So this, in a way, this is kind of a separate graph. So once we import it, and we can run it right now, because this a preview let me let me run instead of preview let's run the import so now we've loaded it so there were six triples there we've imported them and they've been persisted in in the graph so now we have this these uh loaded in our in our um in our neo4j instance so we have it here persistent and it contains if we expand it here, persistent, and it contains, if we expand it, we have the domain and range. So you're right, this is a graph. It's another graph, but it's kind of a metadata graph. So it's kind of separate. We have our legislation with all our nodes and relationships, and separately, we have a separate graph that describes my graph, right? My instance graph. So it's somehow as if I had persisted as nodes and relationship the result of my call db schema visualization right so uh in this case i am i am explicitly stating things rather than learning it from the database so i'm saying there's a contains relationship that is established between definitions and clauses. And it's a sub property as a sub relationship of these. And there could be more and I could actually, I mean, we could perfectly well come, come to my, maybe in my graph, there is, there's no others, but we could add all the relationships saying, Hey, this is also a further detailed relationship. And it's built in a taxonomy. I mean, in this case, there's a one level taxonomy, but that's exactly what my what my rag application is going to search for. It's going to go read this kind of descriptive graph, the metadata graph that describes what am I going to find here? And it's going to build a query based on that. So that's the ontology driven part. So it's going to build the query based on that so that's the ontology driven part so it's going to go read the ontology and based on that it's going to generate the relevant query what that means is that the query itself is not hard-coded in my in my rag application the the query is dynamically generated entirely based on what i put in my graph so okay that's why my application becomes way more flexible and I take control of its behavior from the graph, from the ontology definition. Make sense? Yeah, that makes a lot of sense. Yeah, I think that was helpful for everybody to understand that you take the graph, whatever it looks like, and you have an explanation layer basically on top of it, that then a system like LLM can understand and read and knows exactly what's individually happening in each branch. So let's say here it contains, but there could be other relationships. There could be, I guess, node labels as well and stuff like that. And that would then be helpful for the system to understand, okay, how the world, the graph basically is connected, what the individual parts of the graph are and how it then can interpret that and make work. This is exactly what we did. I mean, sometimes it's called like a semantic layer, like a description, a semantic description of what's in the data that can be consumed by any application. And if you go, any of the people attending wanna go look at episode 15, we did exactly that. So we created the definition and then my Streamlit application would go read the ontology and dynamically generate the queries that would consume. And it's also a way of defining views over your data, because maybe you want to expose part of it and not all. And maybe instead of having an application that behaves dynamically based on the ontology, you want to do some reasoning and the reasoning. And we did that also in episode, I believe it was four. You probably rememberlex well we decided we expressed something like uh uh a director or what was it or an artist is someone who performs in a in a play right so we defined and then because you performed or i performed in in going meta we were annotate we were tagged we were marked in the graph as artist or whatever. So basically some behavior is dynamically driven based on what our definition is in the ontology. So this approach of formally putting some meaning, some semantics explicit in the graph and having some software in the more general term, it can be a rag, it can be a stream data, it can be a reasoner, that does something based on that is this idea of the explicit semantics of the semantic layer or however we wanna call it. Yeah, yeah. So if we go back to the, I mean, the whole reason why we've done this is because now we can go back to our rag application and this is looking exactly for that. So we know how ontologies look like. I mean, the sub property of relationship and let me go look like. I mean, the property of relationship. And let me go down here. I mean, I think I've run this code. But you will see how we can test. So there's two types of queries, further detail or the other one, which is inverse contextualizing. So if I run this query now, you see that it's generating something that contains, that relationship contains. Why? Because it's gone to the ontology. It's read that contains is a further detail type relationship. And it's found it and populates the graph like that. Just to show very quickly how this is dynamic. You'd like to reload. yes, let's do that. Yeah, I created, I mean, this, so let's say I'm gonna create another one. Let's say there's another relationship that's, it doesn't exist in the graph, right? Hang on. Not this, I'm going to use this. Another relationship, it's non-existent, it's just so that you can see how the query is built dynamically. Let's say includes. Let's say that there's another portion in the graph, in the legislation that something includes something, that that's also there. So I'm annotating it also as a further detailed relationship, right? In this case, I'm not even specifying the domain and range. So I'm going to do this and I'm going to save it. So if I save it it i will have my uh um ontology here let me you know you have to i have to reopen it because the text editor doesn't get the refresh so if i open with with text edit there we go let's make it a bit bigger we see that now this contains and this includes right so if I take again the whole content and do exactly what we did before if we not here here. So if I go to my ontology imports and replace the previous one with this new version and I'm doing a quick and dirty copy paste. Now it's eight instead of six, as we saw before. So if I rerun this, bang, now we have two, we have includes and contains. So we have two we have includes and contains so we have two relationships that i can use in my rag application to you to do further detail expansion right so if i go back to my uh my notebook look at this query before it was just contains if i rerun it now it's finding includes and contains because it's found two types of relationships that are worth exploring. So this is entirely dynamic. The moment you change things on your graph, the behavior of your RAG application is going to change. Why? Because you're being explicit and saying this relationship has this behavior. And this sort of engine, if you want, reasoning engine, whatever you want to call it, interprets that as this is the way you want to use it. So long explanation, I'm conscious of time, but basically what I'm doing here in this step is if I run it, I'm going to basically create a contextualized vector index handler, basically on a dynamically generated cypher query so what happens if i ask the question now and everything goes well as expected look at that so now the the result from scanning is not fragments is getting the whole package is getting an unavailable deposit is the data and it has the three sub clauses so it's fumbling it all together basically it's running this query that I showed you so when we have a node resulting that contains or includes something take these these queries entirely generic so it's taking all the properties in the node and generating them so I don't care what's there it's entirely dynamic that's the whole point and returns these results so So this is what this is returning So we have all the information and if I pass that if I build another Q&A chain on these contextualize Handler it's now going to produce hopefully a much richer definition of what a Unavailable deposit is. There you go. So now we have an unavailable deposit is blah, blah, blah, and it has a totally accurate description of one, two, three. So I've made sure that the LLM gets all the relevant context perfectly bundled so that it can generate the rich response that's accurate and complete. Right? Yeah, yeah, super cool. This is one pattern. But we I mean, like I mentioned before, we have other examples when it goes the other way. And here, the sky's the limit. I mean, you decide which are your patterns and what's the best way to chunk your document. So that's what I came up with these two ideas. So there was the other pattern where I I was talking about reverse contextualizing. So it's kind of the opposite. Instead of getting a definition and adding all these children, I'm doing the parent approach. And that's the other idea. So let's say that in my ontology, and I'm going to do this quickly, let's say yes, I'm going to create, indicate that that contains it's actually not only a further detailed relationship but it's also of the other type it's a contextualizing relationship so i can use it to navigate the other way so if i persist that and it was file save save and I reopen my document with the text editor. There we go. Now we have... did it reopen it or contains? No, it didn't. I have to close it. I'm opening it with the text editor which is dynamic. So now it basically looks both ways exactly so well now it's defined i mean we can choose how to i mean this is too small let's make it a bit bigger but you see that contains now is defined as both a little bit smaller further detail but it's also if inverse inverse contextualizing so it's a relationship that I can navigate in both directions right so again let me do it I'm doing it this manually of course this will be part of a of an automated kind of ETL chain of some sort but anyway for this session I think it's good to do it like that I'm literally copy pasting again. Now the newer version of the ontology, boom. Now it has another one. And if I rerun this query, now we have, there you go. So we have further detail and contains is both further detail, but also inverse context, right? Nice. Yeah. So again, I'm enriching the ontology. I keep adding elements and remember that contains also had the domain and range right so there's there's additional elements so i can keep adding explicit fragments of knowledge to my uh semantic layer to my ontology and that will further drive the behavior of my rag application so if i go down here now i'm going to do something similar so uh instead of the this pattern i'm going to use the other one and of course you will be wanting to use both in combination here i'm use them use them using them separately but let me uh create another one uh and uh basically this is the the opposite right so if you get a detailed fragment it will navigate to the this is the opposite, right? So if you get a detailed fragment, it will navigate to the pattern. It's the pattern template. And for that, I'm going to use a different query. Sorry, I'm accelerating a little bit in the interest of time. But for example, what definition mentions section 17 of the Building Societies Act? Boom, just one question. So if I run this question, look at this. So like we did before, you will get vectors that closely relate to this one. For example, there's the building society. There's the section 17. There's a 17 here. I don't know if that's the reason why it brings it up. So there's things that are similar to it. Again, they're probably being used in isolation so there you go section 17 up so this is probably the the the best and the right match but it's incomplete right because it's a close of a bigger definition and what i'm interested in what i'm asking actually is what definition mentions that because i don't know and if if i get only that and i build the rack based on this fragmented information it's going to tell me that the definition is the definition of a building society under close b which is it is incomplete right because uh let's let's see what it's like actually let me um so uh how can i do that so i can search for a close as we saw that. Let me see. That's and and dot definition starts with something like that so if I run that there you go so we get these why is it so big so we get we get the close but if I expand it I probably get what I'm after so this is is contained in the definition of uh definition embedding the term authorized so what's an authorized I don't know whatever authorized means in this context in relation to a credit institution so in the definition of authorized if i expand it we have a mention of these uh regulations section 17 of the building societies so what i'm up to really is this this definition and what it's telling me as an answer is just based on this fragment which is incomplete as an answer is just based on this fragment which is incomplete right yeah so that's what i'm getting now in this case is a perfect k perfect scenario for this navigating backwards i've got into something that's very detailed and i want to put it in context by navigating to the parent and that's what i've done here so i've created uh um not a further detail, but the inverse context. So if I run this, I can now, I suppose we have the question loaded. Now the context, look at that. Perfect. The defined term. So instead of getting me fragments, now it gets me the whole thing because it's applied the contextualizing query that hasn't been generated dynamically again. So we have the defined term, which is authorized. And then the definition is all these sub nodes that we saw before. So this is much richer and much more accurate context to pass to the LLM. And if I pass it like we did before, I would expect to get a more accurate and richer. Oh, that was concise, but they're perfect, right? So it's the definition of Oprah. It's going definition of. Perfect answer, very short and concise, but there you go. I'll stop here because I see it's the top of the hour. But I know it's a pretty dense session because it's one of those where we really go meta, but people get the message. What we're doing here is we are making the rag part more dynamic so that it's driven from the graph side. So I decide what to put in my ontology, and that drives the behavior of my rag. At the moment, my rag application, as you're going to see when you have a look at it in GitHub, it only implements these two behaviors, these two patterns, the further detail and the inverse contextualizing. But you can think of adding several more. I mean thomas has published things like uh what was it the synthetic questions the summary so you can do multiple patterns and and have kind of micro reasoners on the rack side that can work on any graph and that's the powerful thing of having it data driven ontology driven semantic layer-driven. I'll pause here and see if we have some questions because it's, like I said, I know it's been a dense one. So I'll give people a break. Totally. But I think it's very cool. I mean, last episode basically was already very interesting. But now basically this is making it more and more usable, I would say. I mean, not that the other one wasn't usable, but the bigger your data pool is, the bigger your graph is, the more you don't want to type it in yourself. You want to have some kind of layer in between that takes some workload away from you and gives you... It's a good point that you bring there, Alex, because some people might say, hey, I don't need that level of complexity and flexibility there. I mean, I'd rather hard code my query because it's only going to work on this graph that I'm working perfectly fine. Maybe you want to... But in a way, you're kind of reinventing the wheel every time. So having these kind of repeatable patterns that are generally useful so this idea of navigating to the pattern to the parent to contextualize or further uh specifying something i mean these are kind of it's like you're building a library of tools that you can then reuse and and and this works across graphs i mean one of the things that i was thinking if i was maybe taking that and show how it works in a different graph. All I have to do is layer the ontology on top and it will work exactly in the same way. It's too much for a single session, but yes, I mean, it's yet another tool for people to think, do I want to make my logic a bit more flexible, more dynamic and drive from the ontology? Maybe not. Maybe I just happy hard coding and some people might go down that route. So it's both are fine. Yeah, totally. We had a couple of questions. So let's try if we can make it work. One or two actually from Ultra Eagle. They say, would it make sense to track the queries being run for traceability purposes? Absolutely. Yeah. And I didn't show it. And you're totally right. And one of the biggest problems of these RAG applications is traceability. What's going on? Because you see that in this single line of code, when you build the chain, there's a lot going on because you're going to the external store, to the graph store, you're doing a graph search, you're enriching the results, you're passing it to the LLM. so you want to really track what and for that there are tools for example langchain has uh can't remember the name of the name of of the tool they offer some i don't know if it's commercial or open source but yes i mean you could build an element of that some logging in the code that i've created but there's there are tools that that do that for you but But I totally agree. And when you're working and debugging and trying to find where things go wrong, this traceability is critical. Yeah. So spot on. I totally agree. Cool. Then they follow up with, does it make sense to create virtual nodes from the ontology and then generate the query? I mean, you generate virtual notes when you're kind of doing the preview, but you have to persist them at some point, because if you think of it, the rag part will go and query that. So the ontology has to be persisted in the Neo4j graph so that I can query it and get the description in the ontology. So the virtual node is also useful when you are kind of exploring the ontology before you import, but it has to eventually be persisted. I mean, think of these multiple, even multiple rag applications. So you need to have it persisted and centralized it. So I don't think maybe unless i'm missing something the virtual nodes i mean the visualization is one part but you have to persist it eventually so that these elements um yeah work uh work correctly so oh lang smith exactly that's that's uh thanks roy so there's um that's that's one option but the lang chain offers these these um these tool to to give you all all traceabilityability on what's going on in the chain. Yeah. And there was one more from Skander. You created the ontology file manually. Do you think it can be generated automatically based on a document? I mean, you can, but that's a different problem. I mean, you can, but that's a different problem. I mean, so you can, yeah, if you give a document to an LLM, for example, an LLM will do entity extraction, and it will tell you these are the types of entities in your document. So you can generate an ontology out of unstructured data. But in this case, it's different. Here, I did want to take control of, because remember, that is driving the behavior of my RAC application. So to answer your question, you can do it. But in this case, I wanted to own that process and indicate this relationship is one that you might want to use to contextualize in one direction or the other. And I wanted to own that. So that's why I went down with the manual creation. But yes, absolutely, that ontologies can be, I mean, they're always going to be halfway between the automatic generation and the curation. I mean, ontologies in a way are kind of capturing experts' knowledge. So yes, you can get help with automation, but I think it's inevitable to have some element of curation and human in the loop in that and in this case yeah it just it made sense for us but yeah question yeah thank you there are a couple more questions i don't know how much uh are we looking for time i mean we can we can go over them uh briefly but because we already over um maybe we briefly touch them um one is um do you have ontology templates examples in neo4j so that's more like a general question i guess not necessarily yeah uh i mean we have created in this series a bunch of them, and you will find references to them, and we publish them normally in this URL that we use today. But they're not really, I mean, other than in our collection of assets of the going meta series, they're not very well documented. I mean, they just mentioned. So, I mean, the quick answer is no, we don't. Maybe that would be an interesting thing too. But yeah, given unlimited time, that would be something. Yeah, yeah. So, I mean, we don't have one. You will find some examples in the series. Maybe in the life sciences space, there is BioCypher, which is kind of going in that direction. So, I mean, that could be helpful. But, you know, there are as many ontologies and semantic descriptions of as many topics in the world you could look, you can think of. So it's a tricky thing. Yeah. okay, cool. And the other question, just to close it down, was from Viditya and they say, would I be able to compare two versions of a document that has changed over time using Neo4j, also more generic than... Yeah, I would say that's a slightly different type of problem. Here, yes, here we assume that we're, I mean, we didn't even touch on the versioning, on the tracking changes. I would say there is an episode where we did ontology versioning. So I would recommend you have a look at that. It's just part of it. But so yes, the answer is yes, you can do it. You have to define the scope, the depth of the change tracking that you want to build in the graph. But yeah, we didn't do it in this episode. Maybe have a look at this ontology versioning episode and that can get you maybe started. Yeah. One more, I think, because it's an interesting one, is from Angel or Angel. Tell me if I got it right. You query the ontology to get the extended cipher queries. Have you tried to use the LLM to use the to use the or take the LLM to use the ontology to query the graph directly? So have I tried to use to have the LLM use the ontology to query? I mean but how do you I mean you mean probably pass the ontology to the LLM mean the the the thing is the llm is a passive element so you can ask i mean the lm is not going to go uh query i i suppose the i mean i suppose there are different patterns here i mean the the short answer is is you know in this case we were looking at this particular pattern where we have some context and some structure in the graph that we want to leverage as part of this kind of advanced rack pattern. But I suppose you could pass the ontology and context and have the LLM generate a query, if that's what you mean. I mean, that's something that we've done and that's exactly what Langchain, I mean, that's something that we've done and that's exactly what one of the capabilities of language chain is the generation of cipher and it does something very similar. You will go query the graph, you will introspect the schema elements, and based on that it will generate queries from natural language. But that's yet another interaction pattern. So that's where you want to, you're not doing this approach of vector search and further enrichment, you're trying to generate cipher and return results. So that's possible, but different type of scenario. Different, yeah. Okay, cool. I think we close it for today. Thank you everybody for your questions. If we couldn't get to yours, sorry for that. But you can always reach out on the Discord server, the community forum as well. And on various other outlets where you can interact with us, ask questions. And we should be able to find somebody from the community to help you out or to discuss further things. And yeah, it would be always interesting to hear your questions, hear your thoughts. So if you have any other points, use the YouTube comments section, also a good one for feedback, for inspirational ideas you have. If you try this out yourself, if you want to give us feedback, then use this outlet for that. Let us know if you are struggling with something or if you tried something and had a good result of it, then obviously we would like to hear that as well. Always nice if these, you know, we show you a little bit how it could be done, but it's these, you know, we show you a little bit how it could be done, but it's just, you know, a teaser sort of an inspiration for you to see what's possible. If you then take one of these episodes and build something out of it and make it work in a full scale or proper application scenario, then that would be great to hear because that's helpful for us to see that this is something that people actually take on in a daily life scenario. And yeah, I think we are at the end. The next session that's coming up is obviously in February, so next GOING Meta is in February. is in February. Next Tuesday, though, we'll be back with another Neo4j Live session. And on Tuesday, we discussed unique graph use cases for reducing risk and improved bottom line. So it's more like a bit more classical graph database technology session, but it should be interesting. Some learnings there, some good feedback, some good use cases of how to use Graph Databases and where to use them. So looking forward to this one and yeah, thank you all for watching. Thank you for sticking around and again, happy 2024 and see you soon. See you. Thank you. Thank you. Take care. Bye bye. | Going Meta - Ep 24: KG+LLMs: Ontology driven RAG patterns | 4,247 | Neo4j | 20240105 | Episode 24 of Going Meta – a series on graphs, semantics and knowledge
Jesús Barrasa: https://twitter.com/BarrasaDV
Repository: https://github.com/jbarrasa/goingmeta
Knowledge Graph Book: https://bit.ly/3LaqE6b
Previous Episodes: https://neo4j.com/video/going-meta-a-series-on-graphs-semantics-and-knowledge/
0:00 Intro
5:10 Ontology driven RAG patterns
17:20 Code
1:00:31 Q&A
1:08:20 WrapUp
Check out https://community.neo4j.com/ for questions and discussions around Neo4j
#neo4j #graphdatabase #knowledgegraph #semantics #ontology | 2024-09-03T12:34:22.755244 |
https://www.youtube.com/watch?v=E_JO4-2D5Xs | All right. Hello, everybody. And good morning, good evening, good afternoon. And thank you for joining to Going Meta, episode 23, Advanced Rack Patterns with Knowledge Graphs. So this is the first, Jesus. We continue immediately after previous episode with the topic, kind of. immediately after a previous episode with the topic, kind of. It may be, yeah, that's probably right. But it's such a rich topic and such an interesting one that I wouldn't be surprised if we spend a few more on it. But yeah, you're probably right. No, I'm sure we will. And it's a great topic. But first, thank you for joining, Jesus. How are you doing today? I'm very well. I'm not as cold as you are probably. After seeing the pictures, it's pretty crazy over there, right? It was crazy. Yeah, I was at my parents' over the weekend. And basically from, I mean, it started snowing Thursday afternoon or something. And Friday evening, it was already like 20 30 centimeters of snow saturday morning we woke up and it was like almost a meter of snow everywhere we were completely snowed in uh there was no way of getting getting anywhere so until saturday sunday early afternoon or something we were we were kind of like left for us it's like the you know internet connections have not been affected. No, the internet was good. We can see you well, which is great. That's amazing, yeah. No, it's all good. It now already seems to be thawing a little bit. So at least here in Munich and back in Munich now, the temperature has gone up. But it was super cold. It was minus 15 or something so serious temperatures i don't think we've experienced that here i mean ever i would say but happy to keep things like that to be honest yeah yeah don't need don't need don't need that every uh every other day so um so for sure now i know how our colleague adriana feels like when she goes out of the house in Canada where she lives. Looks like it's chilly in Chicago as well. Oh yes. Hi Daniel. Good to see you. Yeah. There's a little a drop here and other places. So yeah, so it's good, but yeah, this is, like I said, episode 23, last episode, we discussed a little bit of RAC and then on all of that and how does it work? What does it mean? What is going on in terms of knowledge graphs and LLMs and how does the two concepts flow together or how do they combine well and how do they add things towards each other? I guess it's a good way of starting the conversation. If you missed it, don't worry. I think you can, as usual, even though this episode may be a little bit building upon what we talked about last month, but you're probably going to be up to speed quickly. And then, you know, you can always watch the episode later on. Yeah, I'll do a couple of minutes recap of what we covered last session. But yes, the recordings are up and all the assets. So it's up there for everyone to try at home and yes, it's, yeah, should be, hopefully, I mean, it's pretty dense. I mean, as usual, have loads of content. We'll see if we manage to squeeze it all in the session, but we'll see. If not, we'll have for the next session. Have space for the next one, exactly. And yeah, if you watch us live, then please use chat as you go through the content. If you have any questions, anything is unclear, any comments, even if you just want to say you like it, it's also nice to hear. But if you have any misunderstandings or anything is unclear, then please let us know. If you watch this after the fact that we streamed this today, then obviously the YouTube comment section is a great place to put something in there. Even better yet, the community on Discord or on the Neo4j community forum are places where you can ask questions and we can have a look for them there and help you out if anything is is unclear afterwards perfect so shall i go ahead and and share my screen yep let me know if you can see it i'm going to be sharing my whole screen and moving this out of here. Oh, and I see Noel joining from Argentina. Probably he is having a completely different experience there, shorts and a t-shirt, but also nice. Different type of Christmas to the one that we have here. Yeah, yeah. Good, good, so you can see my screen, right? I can see your screen, yes, here we go. Good, good. So you can see my screen, right? I can see your screen, yes. Here we go. Perfect. Good. So, well, yes, like you said, I'll take it where we left it last episode, right? And we talked about RAG. And just to recap and to refresh, for those of you who missed it or who have been under a rock, like I said, you probably haven't heard about RAG. But RAG is kind of an architecture, right? you who missed it or who have been under a rock, like I said, about a rag. But rag is kind of an architecture is like, you know, a framework and approach is a way of building applications based on large language models, but integrating in real time with some external source of facts that's curated, that's trustable, that's explainable, and that enriches the whole, I mean, the quality and traceability and explainability of the results. So that's high level what it is. And that can happen in many different ways. Of course, the one that we've been exploring here is when we use this kind of grounding partner, this external source being a knowledge graph. But the idea is like you see in these diagrams. So we have some form of front end. It can be an interactive one. We looked at chat style applications, but it can be anything. And today, actually, we're going to be looking at something that's more kind of a batch process where we generate personalized content for emailing or things like that. So anyway, some form of application that interacts and leverages the conversation, well not the conversation, the linguistic and the reasoning capabilities of an LLM, but that injects the knowledge of the world, the facts. Because as we've heard and we've discussed in the past, large language models are great at these things, at reasoning, at talking, at generating content, translating, summarizing, but sometimes they struggle with world facts. And of course, they've not been trained on your own private data, your organization's private data. So that's something that you're going to have to inject. And that's exactly what this architecture, this approach that's right solves. Right. So that that's the that's the idea. And again, last episode, we touched briefly on how rag knowledge graphs in rag, you know, improve the quality of the solutions. And I have the impression that, you know, I kind of went over it high level, like saying types of solutions where the graph captures the structure of a document. And because you're structure aware, then you can leverage that structure to generate. And I'm going to explain what that means, because you know that here we're big fans of not using just the words, but showing what we mean by them. But I talked about this kind of modeling in the graph, the structure of document, the sections, the subsections, the images, the tables, and things like that. We talked also about the idea of having a knowledge graph where we have entities, where we have relationships, and using that to enhance the result of a vector-based search by doing some semantic search, vector-based search, and then taking the results and doing some post-processing and contextualizing them, enriching them by navigating the graph. But even though I explained that, I have the impression that the demonstration, I mean, the experiment that we ran last time, look at the two approaches, the vector-based approach and the graph-based approach separately and uh and maybe some people came with the conclusion that you know you take one or the other but not the two and that's exactly what we're going to show today so how the combination of the two in these two type of examples is really really powerful so the idea is that now as you know the mvj uh supports vector-based search since the introduction back in August of the vector index. Now we can combine the two things. And that's really, really useful. And that's exactly the plan for today. So does that make sense, Alex? Yeah. Yeah. Perfect. That makes sense. Yeah, that's cool. That's cool. That's interesting. I think that's a good point that we showed previously, the two different approaches and that we now go and combining them, I think, is an interesting story. Yeah, so the first one focuses, and they're quite different from each other, right? The first one focuses on, like I said, unstructured document. And it's something that we used a few weeks ago in the summit here in London. And it went down quite well and people liked it. So I thought I'd reuse it here, at least a part of it. And here the idea is that we have a legal document in this particular case. It's something that my colleague Michael was asked to do by a customer. And there's this PDF and you have the URL down there. It's some financial regulation in Ireland and in the document it's a PDF if you go to that URL you'll see that it's described and it contains a number of parts each part contains sections and and yeah it's natural language and then sections contain things like definitions. I intentionally put that fragment. I don't know how readable it is. But this block there contains the definition, right? So if you look closely... Let me just go to the little bit. It's a bit small. Well, anyway. You know what a definition is, I guess. Well, we're going to see that in the graph in a minute. But basically, that describes what an unavailable deposit is, right? And when and how does it become unavailable and these sort of things. And you see that it has different, you know, clauses in the definition. There's A and there's a section, B, C, and it happens to be across different pages, which can also be a problem. Because if we look at the way we traditionally would take a document like this one and put it in a vector index, it would be that by doing what we call chunking, right? You take fragments of it, you know, but you could, of course, vectorize the whole document, but then the precision of the search would not be very, very good. So what we normally do is chunk the document in small portions, create entries in the vector index, and then do searches on that. That's exactly what we're going to do in this experiment. But the main difference is that what you see there, to me, and I'm sure to many of you joining this session, it looks like a graph, right? Because you have a regulation that has parts, the parts have sections, the sections have... I already start to see a tree and that's exactly the case. So that's, we're going to see that in a minute. But if we model that in a graph, and I will not cover so much the construction, because I want to focus on the rag part. But you see how that, and actually the LLMs do help. And that's the part that Michael did in the graph summit. So you can also take that PDF and ask the LLM to do the structuring of the document, the structure extraction of the sections and the different parts in it and generating a JSON based XML based whatever. And that's something that you can then ingest into the graph. So the construction is also something that LLMs will help with. But let's start from the point where we have that already. So we have a node representing the legislation. It contains a number of parts, as we saw in the previous screen. The parts contain sections, the sections ultimately contain definitions, and the definitions will contain clauses. So this is a standard graph. Now the thing is, some of these nodes will contain fragments of natural language text. For example, the definition like we saw before will have a piece of text, same with the clauses. And this is the kind of thing that I will want to first vectorize and then index in the vector index. So that's where the demonstration is going to begin. So how am I gonna do that? Because I wanted to jump from the slides to my notebook. So before I go into the into the code let me um yes i thought of that actually because i have a number of things going here good because what i have is i have an instance an on our instance that has the graph that i've just described already loaded okay i have a notebook that i'm going to be sharing as usual in the a notebook that I'm going to be sharing as usual in the GitHub repository. And yeah, my plan is to run it step by step. And maybe it's worth, because I'm going to use this time, langchain. And langchain, some of you may know, is kind of an orchestration package. It's a component in Python that really, really simpl the the construction of applications that interact with lms and do things like rag and stuff like that so uh yeah like like i was saying let me go back to the to the slide so i have this graph already and what i want to do is first take a bunch of the properties in these uh in these nodes and put them in, index them, vectorize them and index them. So I'm going to do that with the help of, it can be done in a number of ways, but I'm going to do it using langchain. Sorry, jumping back. And I'll show this fragment and feel free to not dive into all the detail. But basically what I'm doing here is doing a single call to this method called from existing graph, which basically is going to do the job of vectorizing certain properties and adding them to a vector index. And the way to pass the configuration is pretty intuitive. First, I pass, which is the embedding service. And in this case, I'm using OpenAI. I'm giving the credentials and the access point to my knowledge graph. And that's my Neo4j instance with my username and password. Then I'm giving them the name of the index. Every index in Neo4j has a name and I'm calling this legislation. And what I'm gonna do is tell this function, what you have to do is go take all nodes of a particular type or with a particular label and certain properties within those nodes. That's the way I select the properties, the text properties in my graph to populate that vector index. Then I store the vector itself in a new property that I'm going to call embedding. So all this is going to happen in a single call. So I don't need to care. And that's that's how this language chain is great, because this rather complex multi step process is going to become very straightforward. Now, some of you might be asking, hey, what's that embeddable label, I have not seen that before. And you're totally right. So there's a number of ways of i mean actually i can show you what um um we have the graph here so if if i look at the clothes for example there you go we have some of them here or even even let's look at the definition okay so if i take a definition uh um i see that it's it's labeled also as embeddable but i can expand that and if i can uh let's make that a bit more readable so um this is the section that contains the definition uh the the definition the section will be itself well it will contain all the definitions that that was not a very good actually maybe from the notebook because I have a nice query here that's going to return something a lot more readable but what I what I wanted to show let me go back to my legal one so here I'm taking the the one that we saw in the slides the definition of an unavailable deposit and that takes the whole chain and makes it look quite nice so we have the the legislation of the root we have the part actually maybe to make it a bit more readable i can make that bigger yeah that's good so we have the the legislation at the root then we have the the part the section subsection and then we have the legislation at the root, then we have the part, the section, subsection, and then we have the definitions and the clauses in them. Now, what I've done is, OK, I have to select which are the nodes that contain textual information that I might want to embed. And my strategy has been, OK, I'm going to add an extra label to my definitions. to add an extra label to my definitions this is a definition that i've also tagged as embeddable and the closest i've also tagged them as embeddable so now this embeddable is kind of an extra tag that i put on certain things to indicate my vector index okay i will call my vector index my lan chain go and take all these things that are tagged as embeddable and they can be of different types and uh pick the properties that i'll tell you and use them to to create the vectors and that's exactly what we were doing in in this section so that's how i i um i was uh passing this embeddable tag which is an additional one that i've created as my way of marking the types of nodes that i want uh indexed so that's that's what this uh is gonna do this is is the magic of off-line chain, that it does the whole thing. And once I have that, I have a handler here, this vector index, that I can use to run semantic search. So that's what this does. But let's jump now into the notebook and let's run it for real and we'll see what it looks like. So, again, I've run the input, so I initialize my environment variables with my access point to the instance, to the Neo4j Aura instance, credentials, etc., etc., my OpenAI key. And just as a test, because, you know, the whole thing thing here and I'm going to show that in a in a mini application also in Streamlit is okay I can I mean I could ask directly an LLM kind of skip the rag step and say okay I can ask this question of what is an unavailable deposit directly to the to the LLM. So if I go back to this slide, I could say, okay, my application can ask directly the LLM and I will get some results. Might be good, it might be bad. In this case, it will give a better, I mean, I wouldn't call this a hallucination because this, as you can see, this is returning what's an unavailable deposit. And it's probably based on general knowledge. But now that's probably not the answer that I want in the context of my regulation, my legislation, sorry, my private information. So I use that as a kind of like a baseline. That's where we start. But the next thing we're going to do is, okay, we have some information vectorized in the graph. Actually, I'm going to write it here. This is the fragment that I showed a minute ago. So all the credentials, I'm going to run this. And it should populate the graph, sorry, the vector index. And now we have a handler. That's this vector index. What I can run are semantic searches, as in vector-based searches. And the syntax is pretty similar. I can just invoke on this object, vector index that has been created out of this, a similarity search based on a question. And the question, as you've seen here, I've defined it, what is an unavailable deposit and how does it become unavailable? It's interesting because I've run this in a couple of other events and it's amazing how you can run it in other languages and still semantic similarity works. We're going to see that later. But that's the reason why you find it here in French and even in Spanish I've run it. But anyway, so we have a question, a question in natural language. If I pass that question to this vector index similarity search, that is going to return the best match, and that's where the K1, I'm returning the closest neighbor to that question on my vector index. And that's returning effectively the node that's closer to the definition of an unavailable deposit, which is the definition node, the one that we saw before. Fair enough. That's kind of something to expect, right? The problem is, as you saw before, is that this definition is incomplete because that's just the first part, because the definition contains three clauses that completed the actual definition of an unavailable deposit. So if I build a whole rack chain and that's something that again LandChain helps us with and I believe that's the content in the next slide although I might not go back to it again but But essentially, that's the construction of the QNA, the Retrieval Augmented Generation chain, where we pass the vector that we just created as a retriever. That's all. And of course, the LLM that we're going to be interacting with. That's what we're doing here. So if I create this chain, I can then run exactly the same question. And instead of getting this result as output, which is what's happened when I just query the vector index, what's happening there is it's building the whole chain. It's querying the vector index by using the vector index here. But then it's passing it to LLLM, which is this LLLM, chat GPT. And it's going to return the answer based on this context. So that's what's going to happen. So if I run that, that's going to give me a more elaborated answer based on this definition. And that's a lot better because that's already one step in the right direction. But still not happy with this because as we saw before, we know that a definition may potentially contain clauses. So here's where I'm going to combine the vector search with the graph exploration. Maybe my match returns something like this because it's effectively the closest match, but I'm going to say for any match that you get, go and try to navigate the contains relationship. And if it contains any clauses and reach the result and produce a more a more contextualized and more enricher kind of answer so that's exactly what happens here and uh i'm going to go back to my slides another i don't know if that's going to make people a bit that's okay anyway the idea is pretty simple. I mean, before we were calling from existing graph, and that would take the graph and populate the vector index at the same time. If the vector index already exists, I don't need to recreate it. Now I can say, okay, give me a handler on this vector index from an existing one, and I pass the name. The name is legislation. It already exists, and I'm going to use it. Same credentials, my user, my password, where my Neo4j instance is, and the additional element and where the magic is going to happen is this additional configuration component. I'm saying there's a retrieval query and this query is some random cipher that I'm going to configure. I'm going to show you what it looks like and that's what i'm going to say for any match that the vector search returns go and run this cipher because that's going to enrich and reach the result and that's exactly what we are doing in this next step so here we go so that's the code and uh you see that's uh the fragment that I just showed, and this is the query because the graph is pretty- Say that again? That's the cipher magic. Exactly, that's the cipher magic. And in this case, it's pretty simple because the graph, I've tried to build it in a pretty consistent way where there's a contains relationship. It was a tree and the legislation contains parts, the parts contain sections, the sections contain definitions, the definitions contain clauses. So if you just navigate the contains parts, the parts contain sections, the sections contain definitions, the definitions contain clauses. So if you just navigate the contains relationship, you could even make it like a variable length one and get as many child elements as you could. And we'll discuss how making that generic can be powerful. But we have now this contextualized vector store, which is, again, it's a pointer. It's a handler on my vector index, but now it's enriched with this query. When I run a question pretty much like I did before, the same question and the same limit, I should get something different. If we've added, there you go. We have the definition that we had before, but now we get the three clauses following it. If I pass that to my rag chain and I'm creating here a new rag chain now using these enrich contextualized vector index I can rerun my query and print the output and hopefully hopefully I'm pretty convinced unless things go wrong but this is going to look a lot better because now what we're doing is, OK, we're asking a question and we're going to the graph. There we go. And that's a much richer, a much more complete definition. And unavailable deposit is a deposit that's supposed to be paid, blah, blah, blah. And it can become unavailable in three possible ways. And basically it's literally listing the three clauses that we had in the in the graph right so so that's how this combination of vector search that finds the closest to whatever my question in natural language is after can get enriched by exploring uh the the knowledge graph and now just to show the whole thing in in action i mean that's because you know because this is fine for techies that are watching us, which is probably most of them. But the kind of thing that you're after is something like that. Oh, and it's already on. Damn it. I wanted to. Anyway, let me start from scratch. So I've created this basic stream lead up, which basically has exactly the same code behind. The only thing is it creates an interface on top. And what we do is I enter my question in natural language, and I have the three interactions that we show. That's what happens if I ask directly without any rag. If I go straight to the LLM, that will give me potentially a valid null hallucination, but completely no context. I mean, at least not the context that I'm interested in, which is the fact that I'm talking about this particular, and think of this legislation as any enterprise private data, data that has not been used for the training of the large language model, and therefore, you know, no context is available on that side so that's the kind of answer that i would get which is plausible but not what i'm after the second one is a bit better but it's incomplete right second step we just do the vector search and get the best match but then the third one it's actually the richest one and it's the one that i'm really after because it uh it returns uh following the whole process all the context around the vector index results. Okay, I'll take a quick pause here, short pause here. Any questions? Does that make sense? Yeah, no, that's very cool. I think especially now that you showed us side by side, I think it makes it very clear that the one after the other or one or the other result is much richer, much more information, much more contextualized, as you said. So I think that's important because we say this, we use the knowledge graph to ground stuff to contextualize your results. But what does it actually mean it's it's it's it's a good very good demo exactly exactly and you know it's also a kind of a demonstration of how vector search only it's not as rich as vector plus graph plus knowledge graph because you know sometimes and some people think that maybe a vector database is enough. And it might be for some cases. But if we have a proper rich context capture in the knowledge graph, that really enables for much richer and more precise results. So that's the idea. Yeah. results. So that's the idea. Yeah. I mean, with the vector, you kind of get to the right, let's say, if you imagine it's like being an area, like a space of the graph, you get to your point where you want to go in a very quick way, but then it stops there. It doesn't give you more information about it. It's like if you were gaining directions of saying, hey, I need to go in a very quick way but then it stops there it doesn't give you give you more information about it it's like if you were gaining directions of saying hey i need to go to city hall and somebody says yeah it's at i don't know the city city hall one square and then you get there but then you they don't tell you anything what's what's around and i say okay yeah here's uh i don't know some other locations around it and around it or that needs to be informed about the whole, that makes the square, makes the information about it. Absolutely, yeah. And it goes beyond that. I mean, it's probably not something that we'll have time to show here, but if you think of it, you can do a lot more there. I mean, you can apply access control. I mean, the moment you have the graph, you can be more fine grained. If you hear I'm not modeling any kind of user configuration, but if if if my graph is user aware, I can even return personalized results based on who you are, because I have control over the results plus the navigation that comes that comes after. So, yeah, absolutely. I think the richness of the context return is significantly improved. So that was the first. Interesting questions from Gopesh about what's needed for this to work in such a way. And he says, it seems like a contextualized rack needs a highly enriched knowledge graph. Would you agree with that? Or what would you say? Yeah, I mean, it requires a knowledge graph. I mean, the thing is, and that's probably something that we'll cover again in future episodes. But you definitely need a graph. You need some some kind of in this case, some awareness of the structure. But I wouldn't say it's highly I mean, it will be enriched over time. But this, I would say, is a pretty straightforward type of structure of a document. Right. If you think of it in the context, it can be a documentation. I think that's similar to what Adam has built in our Graph Academy, right? So you can have a different type of structure for courses or even for web pages. So I think the construction of this should not be, I don't know if his question is more in the direction, it requires a massive effort. I mean, it requires definitely some awareness of the context and of the structure, but I don't think it's a huge and significant effort up front. I mean, it's probably bigger if we get into something that we covered in previous session, like entity extraction, where you have to build richer structures. But this one is, I would say, pretty approachable and something like I said, that will grow organically over time and will keep enriching. Because it's true that in this case, I've created sort of a custom query that follows the, you see that it's the contains relationship that exists all over the graph, right? You can go meta there, and that's the plan relationship that that exists all over the graph right yeah you can you can go meta there and that's the plan for for next next episode and and actually be more dynamic about that so maybe i'm not going to pass the set the cipher there i'm going to configure that on the fly with applying ontologies applying some annotation of my graph so we'll go into that but but you know to do to to the question, yes, you need a graph. I don't think this is a massive effort. It's relatively straightforward to build a graph. What you need, though, and I think Kopech then adds this in a chat message later, is having domain knowledge, obviously. So if you set this up and if you like when you went through and you put the the notes uh with embeddable as a second tag on it as a second label then yeah you know you understood okay what's important what what what what do I flag um as the expert the domain owner or the domain yeah that's a fair point although again in this in this this is the first example that focuses more less less on the on the domain knowledge and more on the structure of the of the of the input so here it's more about hey there's sections there's parts there's definitions and and yes in general i understand that definitions are important and the kind of things that i'm going to be looking for but uh yes, I mean, of course, you know, the richer, because you know, the value proposition here is that this knowledge graph is not just a means to enrich RAC. The building knowledge in the knowledge graph is a great investment of effort. I mean, it's going to deliver value in many different fronts, right? So I think it's valuable in many ways. But in this particular case, it's true that I focused more on the structure of the document than in the domain knowledge. But that's exactly what we're going to cover in the second example and what we saw in the previous episode, for example. So yes, building domain knowledge in the second example and what we saw in the previous episode, for example. So, yes, building knowledge, building domain knowledge in the graph is definitely an effort that's worth it and that you should definitely invest time on. Absolutely. Yeah, sit down with your experts, sit down with the team. It's the usual. It's very approachable for everybody, even if you have no ideas of what a graph database or even what a LLM or RAG means. If you show this to somebody from your legal department and say, hey, what's the important part here? Then they will be able to tell you what should be flagged and what shouldn't. Maybe before we go to the next example, because it's an interesting question from Peter here. You use an embeddable label, but could you have just used the text node type of node instead? The text node type. So, I mean, you can do it in many ways. I mean, what you need to populate the vector index is define a pair of label and property or collection of properties so I've seen it done in different ways some people what they do is they have the node and they create an external separate node that contains the actual unstructured text and that might be that might be good for a number of reasons I mean depending on how the graph is is being used maybe it's true that having uh the the a large chunk of text in a separate node for for i mean to enable all the types of of uh of accesses to the graph so that could be another option like having these nodes definitions linked to a text or to a vector and then you say okay that uh that node that you can label as as text or as vector is the embeddable one so you you can you can do that in in many different different ways so so if that's what peter means absolutely so you can do it in i mean that's that's my way of doing it because i i you know i kind of like the sort of compact graph that this creates so i just you know add an additional tag tag to to these notes and and it's all kind of like the sort of compact graph that this creates. So I just add an additional tag to these nodes, and it's all kind of bundled in the same node. I could have extracted it to a separate one, and that would be absolutely fine. Cool. Thank you. Good. So good. So that's experiment one. Now, the second one is more, it means less about the structure of the document, and it's more, let me go back to the slides. Let's start that. One second. And, yeah, so this is covered. So here's the second one. So we have a – oops, wrong. I don't know where that came from. Anyway, so yeah, this is – I'm going to try and create an art gallery assistant. But this is not going to be kind of a conversational one. This is the one that's going to generate some personalized emails in this case. But the starting point is I've looked at the tape modern here in London. So they have the catalog online and the catalog looks like these pages that you see on the left. So there's an artwork and there's the information about the author. There's lots of textual information and some subjects that are touched by the this artwork. So I've loaded that in a graph, in a very simple graph, and basically the attributes of the artwork are built into a node that represents the artwork, which is what you see on the right-hand side, things like the dimensions, the materials, the medium as they call it, like the materials that are being used, the date, some description, the kind of display caption, what you see in the museum if you go and read the label next to it. Then in addition to that, there's a note representing the author, and then another collection of notes, the blue ones, that are the subjects that this touches on. So this particular one is about figures, dynamism, geometry, abstraction. So these type of concepts that describe the artwork. So that's the graph that we have. So here there's no notion of structure of the document. What we have is standard type of graph and what i what i want to do here is something oops before i go into the code because i mean the objective here is slightly different so i'm going to again same same idea i'm going to pass to my llm information that i control and um basically i'm going to give the facts for the LLM to generate a personalized, yeah, let's call it an email that's, let's say Alex is going to visit the Tate Modern and you enter a topic of interest and then the agent is going to look up the catalog and give a recommendation of these are some artworks that you should go visit and that is why. So instead of again, delegating that to the LLM, I'm going to take ownership of the facts. Again, this is of course the take modernist public data, but think of these as your own enterprise private data. So the LLM is not aware of this information. So what you have to do is find the relevant, the potentially relevant data, that's the idea of RAG, and pass it on to the LLM in context for it to generate the response. And the, yes, let's go into the code. It's slightly more cumbersome, still using lan chain, but the idea is similar in concept. What we're doing here is we're defining here a sort of, let's call it an application. In principle, what we're doing is here, I wonder if I should show them the demo first. It's going to make it more understandable. But anyway, so effectively what I'm passing is, well, the customer name. And this is just, of course, not just because I want my email personalized, but it's a way of indicating that I could pass some identification of the customer of the user so that you can even further like I said before filter personalizing the results so that's that's one thing that this example doesn't do but could be done and then a search prompt which is a some form of search string a question like we saw before like before I was asking what was the definition of an invalid deposit. Here I'm going to ask what's, you know, how does Andy Warhol address the topic of, you know, happiness or whatever. And then you'll take that to generate this sort of guide. So these are the two inputs. And these are the two parameters that I'm going to be passing as input. And these are the two parameters that I'm going to be passing as input. And this is going to be used on this function that I call Recommendations App that you will not be surprised to see that it's exactly the same thing we saw before. So this is going to be a vector index handler like the one we saw before. And this is the similarity search that we run in the previous example. So no difference. I'm going to pass the input. And in this case, I'm returning the top three results and just joining them into a chunk of text to return. So this is just wrapping the same processing that we were doing before, but in a function, because then I'm passing that function here to this kind of chain. I mean, I won't go too much into the details of how this syntax, which is a bit, you know, weird, but basically think of it, this is the input. Then we, I'm gonna show the prompt, how we use this input in a prompt. We pass that to the LLM and then we stream out the output. So it's kind of a declarative sort of syntax to indicate how the chain happens. And the other part that I want to show is what this prompt looks like. Because the prompt is critical here. Because what I'm saying is, okay, I'm going to pass some context, which is the output of the vector search plus the graph navigation. But look at what I'm asking the asking uh the llm i'm saying hey you're a personal assistant for an art gallery and you have to write an email to the customer name that i'll pass you who's a member of the of the of the gallery and and and he or she is planning their next visit so the aim the email should summarize you know the artworks uh that they have reached they have search with the search string and uh i suggest okay why not introduce kind of a joke uh how it does and um but i insist please only use uh the artworks or the context that i'll be passing in context so again don't use your knowledge of the world just use the knowledge that I'll be passing and that will be the result of my retrieval from the knowledge graph. And so on and so forth. I'll pass with them the URL, I want the URL to be also visible in the output, and that's pretty much it. So that's the instruction, the prompt sort of definition, and I will be passing a list of relevant artworks from my graph based on the search that I will get from the user. And again, that's just standard a long chain syntax. You'll see that in the in the in the notebook. And once I have this this prompt application, I'll invoke it and you will see the results. So before I go into the next section, let's jump to yet another, I have to reconnect now. But okay, let's bear with me. I'll do the input. But again, you will see some some pretty familiar elements. So still same thing. Now I'm connecting. And don't worry about you seeing the user and password. This is a sandbox that I will after this session. Anybody views, it's not a problem. So I'm going to be perfect. So that's already been installed. I can get rid of that. So I define my environment where my instance is. This is another one that I have. Of course, it's gone sleeping now, but it's here. So the graph that I showed a minute ago is already in my sandbox and uh should be visible here anytime there we go so we have the artists the artworks and the subjects and so on and so forth so you can see how there you go so this is an artwork the the topics and and so on so that's that's the one i showed so back to back to my notebook. So here I'm doing exactly the same as in the previous example. So I'm creating in this case, I'm not populating the index, it's already been done. And what I'm saying is just like I did before. In this case, take the artworks and use the title, the display caption, and the artist. These are the fields, sorry, the properties that I want to, I want the index to capture in the vector and to add to the index. So that's, I won't repeat it, it's exactly the same thing we did before. I'm storing the embedding in the embedding property. It's already been installed and the index is gonna be called artwork. So that's, I think it's done. There you go. And now I have a handler and i'm in a ready to do exactly the same thing as before so i can ask a question and i can do a semantic search brazil for example i'm interested in works by constable featuring buildings natural language question search string if i run that and again do a simple vector index search and get three results this is going to return uh three nodes in the graph uh that here three results. This is going to return three nodes in the graph. Here we go. It's going to return the title, the display caption, and the artist, which are the three that I embedded. These are the three closest matches based on these fields. To my question, works by constable featuring buildings. So that's my context. But in this case, again, like I did before, I don't want to pass all this textual representation to the LLM. I want to pick portions of information that I want to be used for the recommendation. And that's what I do with my uh retrieval query like i did before in this case what i'm doing is i take the artwork and navigate to the subjects you remember how every artwork contain a number of subjects so here i'm aggregating them the blue notes if you remember exactly same same same logic as before i connect to the to the index but i pass this retrieval query and the retrieval query means every time you find a match and reach it with whatever this query returns. And this is exactly what this handler does for me. And I put it in a function, just like I showed before. I'm just wrapping it in a function because that's what I'm going to pass in the second step. Just to test it, I could call it with my search string of the constable paintings. So if I run it, this is the kind of output that it produces. It gives me the title, the author, the medium, and the features. So I ignore all the blah, blah, blah, all the text, and I want to pass that to the LLM so that it produces the recommendation or the personalized email exclusively based on that. And then the URL in case you want to click on this. So this is my top three, I'm limiting it to three. And now pretty much the same way, but now using the prompt instead of the direct rag that I showed at the beginning, I create my prompt definition and the application like I saw before, like you saw before because i i should i'm just running the code that i showed on the slides and from here all i have to do is call it and of course i'm going to call it with you alex i'm going to say i'm customer alex and i'm interested and the two parameters that i need to pass is my search string which is the one that we saw before the constable paintings and uh and that's my name alex so if i run this it should now chain the whole thing so it's going to uh take the search string exactly do the vector search return probably these elements uh then pass them to the llm apply the prompt and pass the prompt and and the context and this should generate uh something that looks like an email that i will be that my agent, the agent will be sending you on your mobile app when you go to the tape modern saying Alex, that's what you have to do. And I'm really curious to see how they do with the Joe. Yeah, that's probably why it's taking so long. The good one. that's probably why it's taking so long it has to come over with a good one something like that so you see uh subject well subject your next visit to the gallery constable masterpieces away so dear alex uh i hope this might blah blah blah all this thing and uh uh incredible art artworks by john constable but so all looking great right so there's and of course you know as expected this is one thing that you should go see and and that's why. And here, I mean, in this case, it's being, I mean, that's surprising because one of the things with when you invoke on an LLM is that the results can look different at each time. And here it's somehow been very, very kind of not very verbose, right? It's literally taking the elements and putting them in a kind of you see as a list of features and it's true that this one is kind of build the phrase out of it rather than just put the list but yes it's decided to do it in this way and then it says click here to view the artwork blah blah blah and now here comes the art pun for you, there you go Constable works are truly building a legacy in the art world well, pretty bad and actually I don't like the fact that it says here's the pun for you, I would have liked it to kind of subtly blend it into the text but anyway, it's not perfect but you see the idea, right, so in this case it's built,, but you see the idea, right? So in this case, it's built. And again, I've done something similar to what we did before. And I'm going to say the same, right? So I'm going to ask Alex. And this time, I'm interested in, I have an idea here. Why not? Because that's what Alex asks himself when he visits the Tate Modern. I'm interested in artworks that focus on geometric forms. Actually, I like those. You know, I was doing the research. I came across this one and I found it beautiful. I just loved it. I mean, I hadn't heard before about this artist, John Wells, but I can't wait. The good thing is this is just around the corner from the office. So I'm going to go I mean, I hadn't heard before about this artist, John Wells, but I can't wait. The good thing is this is just around the corner from the office. So I'm going to go have a look, but I really love it. That's a great example. So that gives the, sorry, the application. There we go. So we have the, and that's what it would look like, the final product. So it would produce something that looks like an email. I hope this email finds you well, blah, blah, blah. And you see all these things are it's different. So now it's gone with a more kind of standard kind of paragraph style rather than a sort of a bullet point. But the good thing is that these things are clickable. So now you can see spiral movement and you can go in and see actually the the page um and uh let's see with the pun here you have preference to that it's just skipped it right he didn't didn't do a joke didn't do it yeah oh yeah and remember when it comes here it's all about finding the right angle there you go okay that's better anyway well that that's, I like, you know, because sometimes we tend to focus too much on the Q&A type of and they're interesting, of course. And that's great for, you know, I mean, for example, we mentioned before the graph Academy, you know, when in a kind of a training environment, being able to ask question is great. But sometimes, you know, it's really powerful to use LLMs to generate highly personalized content. And this is a great example of that, where I leverage the LLM for the generation of this kind of natural language content, but I give the LLM exactly the information that I want it to use. And I kind of take ownership and control over the actual facts. And again, back to the idea of the explainability, I give the provenance in a way by giving the URL and the clickable link. But yeah, like I said before, that can go even further. So here I'm passing the fact that I'm Alex. If this happens in a session, I would have your user credential. I could use that to even further personalize. I know who you are i know what you're interested in like you know without you telling me that you like geometric forms maybe you just need to say hey um uh i know i'm i feel happy today uh tell me what should i go see and because you know it knows and it accumulates information about you it can provide uh uh something something relevant based on on the knowledge that are uh that's stored about you in the in the knowledge graph so yeah or like another imagine you have a have a user i mean the the the the examples that come to mind is like board game geek or goodreads or something like that where you have your your your reading list or your board games you play and the the ratings and all of that and you can say, okay, I have a user. I want to send them a personalized email. I know what they like. I know what they played recently. I know what genres they go for. And then I can combine the two. Absolutely. So I hope that's given kind of an overview of, well, these two examples of how the graph can can play the role of you know this kind of brain that that knows all the facts and and and gives us control over them and and how we can decide what's you know how we enrich how we expand how we contextualize these facts and then we pass it to the llm because the llm is great at you know at reasoning at generalizing at doing analogies at generating uh text as we've seen, as having conversations. But we keep control of the facts, which is extremely, extremely important. So that was the idea. And to your point, Alex, I wanted to conclude, and I'm conscious of time, with a final slide with where to go next, because there you go. So that's that's some content that Dan Bukowski from one of our colleagues in the US has been creating over the last few months. And it's I think it was presented, I don't know if I mentioned it last time. But it's it's super interesting, because it goes in exactly in that direction that you said, what he talks about is how the graph can not only contain the the actual facts the the knowledge about the domain but it can also store the conversations i mean this interaction that we've had with the crowd when we ask about what's an unavailable deposit in the in the legal example well it can be stored and it can uh persist what your question was what the answer was which chunks of of which bits of the graph were used as context to generate that content, right? And like this, it's building again, a history of all the interactions and all the knowledge that you've been, you know, exchanging with the LLM that again can feed future responses and build a more personalized experience. So that's definitely something that people should go look at. There's a bunch of blog posts and there's even a YouTube session pre notes where, where him and Alex went over the, you know, what they had built with when was that? I think that was, yeah, that was the road to notes. The road to notes is like a workshop training, a little bit, a little bit longer content, but more hands-on more tutorial style um if you want to do it yourself you want to learn how to do it if you want to get some some step-by-step guiding then this is definitely um worth worth looking at yeah yeah so that's that and for us what's next like i mentioned before because you you know, one thing that I, you know how we like to go meta here, right, Alex? So like I mentioned before, I don't want to have to be explicit about my cipher. So how to, depending on the question, how to expand it. I want to be able to control that by annotating my graph. So that's what we're going to be covering next, because I think it's it's an interesting topic. It aligns with our kind of general topics of semantics and ontologies, because, you know, the way I see it is I'm aware of my knowledge graph and I can say, hey, this is a property, this is a node that can and should be expanded, should be enriched. And the way to do it is by using these relationships, providing these type of annotations so that the hey, this is a property, this is a node that can and should be expanded, should be enriched. And the way to do is by using these relationships, providing these type of annotations so that the retrieval part of the rag will be able to leverage that to do something in a more dynamic way. Because now, the way I built it, if I want to change my logic, I have to go and change the cipher and then get a different behavior. But so that's the plan for next. Probably, I mean, unless we think of something else. Come up with something else. But yeah, that sounds great. Sounds good. Very interesting. So that's what I have for today. I mean, I'm really happy. I hope it was, you know, all the code will be there. I might have gone a bit too fast. If you are not familiar with langchain, it might be a bit too much code on the screen, but hopefully it was at least understandable at a high level. And then when the different components, the notebooks and all that are in GitHub, you're of course free to go play with it yourself and it will become a bit more clear, hopefully. Yeah, let us know if that was the case play with it yourself and it will become a bit more clear. Hopefully. Yeah. Let us know if that was the case or if not, that was the pace was great. And then also let us know about that. Yeah. Mike Miguel asks in chat, when will be notebook and the graph available in the repository? I think probably it is already. Well, not this time, sorry. But it will happen, you will happen in the coming hours. I mean, tonight for sure. Later today, latest tomorrow. Check it out. The GitHub repository and the links are in the video description, either on YouTube or on our landing page. So check it out there and you'll find the information on the code and all the snippets from there. There was one more comment from Daniel I wanted to read out. Daniel says, so many benefits from the knowledge graph, both on the human and on the machine sides. Seems downright foolish not to apply them broadly, especially in the case of a Gen AI application. I could not agree more. It becomes more and more clear that there are a perfect combination, right? Like the brain, as in the, I know I've said it a hundred times, but I'll say it a hundred more, like the brain, the facts, the explainable, the trustable knowledge, and then the reasoning, the creative capabilities of LLM. And the combination of these two are really, really powerful. I mean, these, of course, I understand these are sort of like everything we build in this series, the toy examples, but I guess, you know, the concept is pretty easy to get and pretty easy to extrapolate to really large scale applications. And we're talking about really relatively easy to build solutions. So yeah, totally agree with Daniel. It's a super strong combination. Yeah, and even like you said, even though they are toy examples, I think we'd like to encourage you to try these yourselves, obviously in a toy environment, but you know, you can take it on to the next level and take a snippet of your data. I mean, if you're working with something where you have a use case that goes into this direction, you don't have to go all in right away. You can say, okay, I want to try this out. I want to see if the improvement is there right there and take a sample set of your data and see for yourself how it benefits and how it improves the results. And, you know, looking at this today and then on all the other sessions we did before and on other streams and video and written content that is out there and like what Daniel said, I think you will immediately see there is a benefit. It was very evident today with the, with the written PDF use case. But, you know, if, if you imagine you have your own data, then give it a, give it a go for sure. And then there's one more question from Orolesco. Do we support Haystack as well? Oh, good question. I'm not. with Haystack. I need to find out. I'm not familiar with Haystack to be honest. But I'll have to get back to you on that so framework to build production re applications uh i've i can't give you an answer right now let's let's take that offline and we'll get back to him so i haven't looks like it's a framework to build uh um applications but i i don't know i mean i wouldn't be surprised if it's based on on on python.net or any of the languages that we support with our drivers. That should be pretty straightforward. But yeah, we'll do a bit of research and get back to you on that. Cool. All right. I think that brings us to the end of today's session. Thank you very much, everybody, for watching. Thank you very much, Jesus, for yet another great hands-on demo together with slides and great background information so this was there was this was super good and I hope you know we can continue on this exciting journey forward exploring more rag exploring more LLM chat knowledge graph combination the world the world there is I think very, very much interested in this and you know, like everybody else. The next one will be in 2024 already. The next one will be in 2024, exactly. So, new year. Don't be scared, but it's exactly true. So we are looking at January of next year for the next episode. And yeah, before we wrap, I'd like to say thank you all for watching. Thank you for being part of Going Meta through 23 episodes now with me, Jesus. And thank you all for watching. Some of you have been watching from episode one, so it's been really great fun for you to accompany us through this journey, through the various topics around knowledge graphs. And I think we've got a couple more, or many more, hopefully, in our back of mind of what other topics we can cover in this series. So it's not going to go away. So we're very happy to have you as viewers, as watchers, as commenters afterwards. And we'd like to keep going. So it's great. It's good fun. Excellent. Well, see you all. Have a great end of the year. And see you in the the year and see you in the next one. See you in the next one. Exactly. If you are interested in Rack and all that stuff, we are doing a Neo4j Live or I am doing that together with Tomasz and Leanne on powering advanced streamlit chatbots with Gen.ai. So very similar topic to what we have today. So if you want to know more, if this topic is interesting, tomorrow at the same time, same place, basically 5pm Central European time, we're going to do that. So tune in tomorrow and otherwise we'll see you in January. Take care everybody. Bye. | Going Meta - Ep 23: Advanced RAG patterns with Knowledge Graphs | 3,841 | Neo4j | 20231206 | Episode 23 of Going Meta – a series on graphs, semantics and knowledge Jesús Barrasa: https://twitter.com/BarrasaDV
Links:
Vector Search: https://bit.ly/4cuZoeS
Grounding an LLM: https://medium.com/@bukowski.daniel/the-practical-benefits-to-grounding-an-llm-in-a-knowledge-graph-919918eb493
Repository: https://github.com/jbarrasa/goingmeta
Knowledge Graph Book: https://bit.ly/3LaqE6b
0:00 Intro
5:07 Advanced RAG with Knowledge Graphs
13:10 Example 1: Legal Text Chatbot
22:25 Code with Graph post-processing
26:09 Chatbot
36:53 Example 2: Art Gallery Assistant
44:00 Code for personalized Emails
50:55 Email Creator
54:24 Next Steps
1:01:34 WrapUp
Check out https://community.neo4j.com/ for questions and discussions around Neo4j
#neo4j #graphdatabase #knowledgegraphs #knowledgegraph #semantic #rag #chatbot #llm | 2024-09-03T12:51:57.989803 |
https://www.youtube.com/watch?v=9DxwgIKVSHY | All right. Hello, everybody. Welcome to Going Meta. Thank you for joining. Good evening, good afternoon, good morning, depending on where you are. As you can see, I am in a funky hotel room with an interesting image at the back. It's like a piano and a trumpet, so I don't know what that means, but here we go. Are you going to play something for us today, Alex? I don't know, better not. I'm not so sure about that but I don't have my piano here so I couldn't even do that I'm very well thank you yeah and I was going to say that we have for a change and not that it's a good thing but I'm sure you're relieved about that and after the great success but we're not going to open by saying hey sign up for notes because notes relieved about that and after the great success. But we're not going to open by saying, hey, sign up for Notes, because Notes is behind us. And it was great. I was briefly thinking about it to say, hey, Notes is done and put up a little fly up. So now that you prompted me, I'm absolutely doing that. Because Notes was amazing, I think. It was a really, really good event um uh on the 26th so uh you know two weeks ago almost to the day um we had great sessions uh on basically 24 hour streams and three different tracks but i guess you uh you know you you uh you heard about this and you as you make fun fun of me constantly repeating about everything. I don't have to tell you that. But what I wanted to tell you now is that we have the videos available. So if you know, for some reason, because it was 3am in the morning for you, potentially couldn't watch all of the sessions uh you can now go to our youtube playlist for whatever reason you've only been able to consume like six hours well you still have 18 ahead of you right so there you go i mean there's plenty of everybody can be michael great content well exactly there might be one or two i mean we should that's great and i'm looking forward to catching up with some of the the great sessions because yeah i've watched a few of them but there's this loads there's many many more to watch um yeah so i have to i i myself have to have to catch up with a couple uh particularly in the ai track uh where i i wasn't able to uh to watch that many because i was kind of like on duty on the visualization part but that was that was also a very interesting interesting track to to follow I don't know if anybody who's watching us live now did attend notes and if they have anything to say maybe you have you know a favorite session already you can you can you know identify then let us know in chat uh if that is something um you uh you can recommend something uh to the viewers that will always be great uh it's it's interesting uh for others as well if you think something is worth worth a watch um so if there's anything let us know but um yeah today um we we talk about uh rag so retrieval argument generate uh knowledge graphs rag yeah rag yeah it's from rags to graphs not too riches i don't know. So we'll see. Well, yeah, no, I think it's impossible not to, you know, have a session on this, I would say. I mean, we kind of opened the gate somehow last week talking a little bit about explicit semantics, you know, vector-based semantics. And I thought that it would be a good idea to kind of build on that and talk about about rack which is something that you know i don't know maybe maybe there's some bias but i can't help but you know reading it everywhere i mean it's it's all over the place and and it's super interesting super exciting and i think graphs are are incredibly well placed to to be be a perfect, you know, grounding partner for large language models for RAC solutions. And I wanted to, of course, this is a large topic, but I wanted to spend a bit of time today talking about it because I think it's really, really interesting. Yeah, yeah, it definitely is. It's very interesting. I think it's a particularly RACs all the rage. Yeah, Tomas, exactly. And, you know, it's a it's a particularly racks all the rage yeah tomash exactly uh and and uh you know if i just you know it's this perfect fit it was not planned i promise you this but you know i was just saying ever since the the the a couple of the announcements uh we we we did over summers or one was the the vector search capabilities and all that stuff and open AI things. It was something that came up every time. And as you know, I'm writing the Twin4J This Week in Neo4j newsletter on an almost weekly basis. And Tomasz is one of the people that have almost added on a weekly basis another interesting blog post around around the topic so yeah he's gonna be on my slides today yeah all right good yeah he's amazing so yeah thank you very much thomas shout outs to him uh if you ever um you know look at uh at his blog look at his medium follow him there um him there and read some of his content. It's really amazing if you're interested in that topic. But yeah, what I wanted to say, all these words, what I wanted to say really are was that it is not only been in our minds but I think everybody else's mind as well. It was like, okay, this is interesting. We need to look at this. We need to absorb as much as possible so yeah we are raging exactly so it's it's for once i think it's not just neo4j that says this is this is amazing but kind of like everybody does it absolutely absolutely okay shall i go ahead and and uh share my my desk uh probably oops where did this go the usual fun when you have too many things open there so sharing my screen and hopefully you will be able to entire screen i only have one screen so it has to be this one and i'm going to move uh to this one can you see that yeah alex yeah it is yes perfect okay well as usual i mean maybe maybe today i'm going to spend a bit more time on slides than usual but i think it's it's important to to you know to get the basics and i'm sure you know people attending today or watching this offline are not new to this concept but i think it's um um yeah it's important to to um to get the the the basics right and then we'll we'll move as usual into a more hands-on session so uh a very brief recap of what we covered last session so you probably remember how uh we went over the difference between vector-based semantic search and graph-based semantic search right so essentially we we explain how uh you know semantics or meaning can be can be represented in two main ways one is in what we call an explicit fashion which is what we do in a graph when we state that an apple is a is a subcategory of a fruit or an orange is a type of fruit or in our example that you know Neo4j is a type of graph database and a graph database is a type of data about database management system so these are explicit statements that we can make uh um yeah explicit using the word but in the graph and that then we can we can explore and we can navigate to uh to come up with uh with uh intelligent answers to to questions now on the other side of the spectrum is uh the what's called sub-symbolic or implicit way of capturing these semantics and uh and that's by uh translating if you want uh words uh or structures or whatever into vector numeric representations that have very interesting characteristics right for example once you create a vector of of the word apple and another one of the word orange they happen to be topologically closed so when you can you can do things like semantic similarity based on the proximity between these two vectors and that's what we did last time so associated with both types of representations, we have two ways of computing how close or how distant two things are. In the vector space, it's purely mathematical, right? Well, not that the graph is not, but this is clearly a distance function, right? And mainly we use the Euclidean and the cosine distance in order to be able to tell this vector is close or far away from this other vector. In the graph graph we have different ways to to compute distance and and therefore proximity and similarity and we have a number of them right all of these are already built into Neo have been for quite a while you can find a node similarity based on an overlap or in jackhard based uh algorithms we covered and we'll probably go a little bit over today. Also, things like leveraging the taxonomies. When you build a classification, a taxonomy, an ontology on your graph to enrich your data, you can navigate that to do similarity computations as well. And for that, we have a number of metrics that we use. So, again, these are the two main ways of storing uh uh representing and storing semantics and and both are supported by neo4j and and that's what makes you know the platform great uh for what we're going to see today for rag because you know the combination of the two is really really powerful and and just to complete the picture you know the explicit uh the search uh in in a graph space is based on the exploration of the graph. Whereas in a vector space, what you're doing is a search based on algorithms around the approximate nearest neighbors, basically looking at which vectors are close to the one that you're looking. So this was the sort of the foundation and what we covered last week. And we use that example to show, you know, different characteristics because, you know, both are really powerful. But one of the things that differentiates them is that the explicit one is explainable. Because one of the things that, and that's why we call it sort of implicit, is because a vector captures the meaning, the semantics, that's what we're dealing with here. But we can't tell why an apple is close to an orange. I mean, we can tell because they show up in multiple documents that are part of the training of the model that generates these vectors. But the idea is that you cannot explain in an explicit way why you come up with the answers that you come up with. And that's where the combination of the two is really useful. I'll take a pause here before we go into RAC. All good? So remember from last session, that was all clear, Alex? Yeah, it was all clear. I think that's an important point to make again. So yes, good to repeat that. Perfect. Right. So today's topic is RAC. And what does RAC mean? And again, if you have not been living under a rock, you know what this is. I mean, basically, the idea is that we want to build applications that use large language models. And some of the great things that large language models can do is provide conversational interfaces, they can do translation, they can do summarisation, they can do loads of things, but they're only aware of, in terms of world knowledge, of the data or the facts that they were trained on. So, you know, that has a number of limitations in the sense that there's a training, a kind of a cutoff date. So you only have knowledge about facts that were available at the time of training. And that I think is less of an issue because we keep getting new and nothing yesterday we had a new release by OpenAI where they brought forward this cut-off date. But still, they will never have all the facts. They will never have all the information. And that's important. And the main reason for that is because these large language models are commonly trained on public data. And if you want to build a solution that leverages the capabilities of large language models, to build a solution that leverage the capabilities of large language models but you want to use your proprietary data, your enterprise data, that is never going to be part of these public large language models. So there has to be a way of combining these two, like leveraging these capabilities in the large language models but using the facts, the knowledge that you have accumulated and that you have on your enterprise side and that's where uh this pattern called retrieval augmented generation or rag comes into play and that diagram describes it very simply so effectively you will have a question that you want to ask your application is basically built on the idea that you will want to ask a question from your large language model and in order to do that we're going to see that this question will have probably to be vectorized somehow in order to interact with the large language model. But there's this intermediate step that you can see. I don't know if my pointer is visible, but in this step here, you see that we are accessing some external database, data store, knowledge base, whatever. In our case, it's going to be a knowledge graph. It's going to be Neo4j. But the idea is, okay, I'm going to go out and get all the potentially relevant information in order to answer this question, which that's what we're gonna call the context. And I'm going to pass it on along with my question, as part of the context for the LLM, in a way that the LLM can use all the capabilities that I was describing before, like all the reasoning, all the analogy, all these powerful things that large language models do. But I will explicitly ask the large language model to only use the information that I provide in context. Because another thing that happens, and I'm sure you've heard about hallucinations, is that large language models are generative AI and by definition they're built to be creative to be generative to create new new new data new knowledge new facts and uh what we want to do is kind of turn turn down or disable completely this creativity and say okay do what you can do but or you're at, but do it only based on the context that I'm providing you. And that's exactly the idea. And ultimately, you know, the large language model will produce a conversational experience. It will provide a summarization, all these super smart things, but exclusively based on the information that comes from my controlled knowledge base, which in our case is a knowledge graph. You will probably see that with other types of platforms, because, you know, why not? I mean, you can generate a query, a request to an external source that returns the contextual information, and that can be anything. But I will argue that knowledge graphs, and especially those built in a vector search-enabled platform like Neo4j, are superior to other alternatives. And that's what I'm going to spend a few minutes on. But is the concept of RAG clear now? I hope so, yeah. I think, at least to me, it is, yes. I think, at least to me, it is, yes. It's an important point to make, really, that sometimes, you know, you have your LLM system, you have something that almost seems a little bit like magic. It is a little bit like that, because it does something, and you don't quite know what it does and how it does that. There's obviously some kind of calculation of probability, how the answer is formed. And obviously, that makes sense to the system. But as a human, you cannot look into it and cannot understand how, becomes visible in the end. So it's tricky. It's tricky to formulate, to find out, okay, why do I find my baseline here? But I think this is why it is so important to explain this and how this can be more, yeah, the grounding is the proper term for it, to connect it with your data to make it more relevant and um yeah more explainable and more explainable controlled because you know think of it in in in scenarios i mean because ultimately what we're doing is building building applications that that do things for us and and think you're coming up with uh uh you know a, a medical diagnosis. You don't want the large language model to be creative in that context. You want to provide the right contextual information in order for the large language model to come with an explainable and valid answer. And the same, you're going to make an investment. In any case, in any critical type of system, you want to have control over the information that's available to the language model. So that's the idea with RAG. And that's what we are going to, yeah, to keep exploring today. So if it was just in chat here, Wenliang, absolutely. If you have questions, just type it in chat. We have a look for it. Either we slot them in as they fit or towards the end of the session. So anytime something is not clear, type it away because then we can directly elaborate on it or if you have a generic question towards what we're presenting then we we find we find uh the time for answering good and uh well before i go because i know there's three four aspects in which knowledge graphs improve retrieval augmented generation. But I want to provide a space line, probably the default, not the default, but the most simple approach to retrieval augmented generation, which is using a vector database as grounding part. I don't know how to refer to this term, but to the external knowledge base that provides the context to the RAG experience. So the idea is that this search is based on a vector exploration. So essentially what you have is a question or a request from your large language model. And this is typically vectorized. from your large language model. And this is typically vectorized. This vector is sent to a vector database or any kind of vector search enabled platform. And that will provide a number of approximate answers. We saw a minute ago how when you have a vector representation, you can have the nearest, the approximate nearest neighbors. And these will be the vectors that capture bits of information that is potentially close to the to the question that you're asking. Now, it's critical here for a number of reasons. I'm going to actually go over them is it's critical to get the right information, of course, and the best information and the most precise information we have to understand that the amount of information that can be passed to the large language model is limited. So there's a hard limit on the size of the context. And that keeps growing again. But an important element there is cost. The cost of the utilization of large language model is strongly related to the amount of data that is exchanged and provided as context. And also, of course, providing a large amount of context has implications also from the point of view of performance. I mean, it has to be parsed, it has to be analyzed, it has to be built into the answer. And so we have interest in providing the best possible contextual information and the most precise and minimize it in terms of amount and by removing superfluous or irrelevant information. So that's the point where I'm going to talk about ways in which knowledge graphs improve and actually, yeah, actually improve the the the construct retrieval augmented generation solutions compared to the pure and just basic uh um vector search so what we have in the fj is is a graph of course and and and the graph uh has uh nodes and these nodes have properties that can be uhed in a vector index, right? That's from 5.11 back in August, we have that available. Now the first thing that graphs provide is this thing that I think Joshua and some others and all these are Neo4j members that have been publishing, you know know relentlessly on this topic what he calls structure awareness so the idea is that when we have contextual information typically it goes through a process that's called chunking so again because we want to limit the amount of information that's passed to the large language model we normally break it down in smaller fragments so it would be great if i could pass all my internet so it wouldn't be ideal because you know this this the the confidentiality and and the data sharing uh component to it but you know it would be great if i could pass just all my information to the large language model and then just ask questions but uh that's for the reasons that i mentioned not ideal. So what we will want to do is break it down, and that's a common approach in what's called chunking, this unstructured data into fragments. Now, what we can do in the graph is build into these chunks also this document structure. These chunks are parts of documents, and documents have structure and organization. And I mentioned here two posts that have been uh published over the last few weeks months by members of our team the one to the left is a chatbot for the graph academy that adam cowley published a few weeks ago probably a couple of months already and what he did is is he analyzed the the pages in the Graph Academy, and he chunked it based on the different sections in the HTML blocks, and he kept a reference to the original page in that case, or some representation in the graph of the hierarchical structure of the documentation in the web pages. Something very similar is what Joshua in the one to the right, and have the URLs to to go and look at that in more detail and even the code he's looking at decomposing a document into sections chunks even tables there can be several elements and keep pointers to the document to the container why do we? Well, sometimes when we do a vector search, we might find a close match in one of the sections, but we have relevant information in all the components of the same document, of the same section, or the same whatever the container structure is there. So one thing that we can do with graphs is navigate this hierarchy, navigate this structure representation, and expand and augment the context by providing related information that might not be picked up by the pure vector search, but that we are aware of because we have this structural representation of the document. Does that make sense? I'm going to take a pause here. I'm going to take a pause here. Yeah, yeah. Good start. I think it does. Yeah. So that the first... I see no questions. Providing this structure awareness and that's the way of augmenting the context in an informed fashion because we have these additional information. The second one... Oops, there's an animation there. No, no it's animation you're just taking long too another one is is what we have not necessarily uh uh document structure so necessarily the information doesn't come from chunking documents and and trying to preserve the structure in the graph but it comes from a combination of data from different sources and uh and you have some textual information that's embedded in vectors connected to uh all the bits of information that are uh structured in the graph and in this case i mean one great example of that is the the the example that came came with the with the gen ai stack that also was released a couple of weeks ago and that um GenAI stack that also was released a couple of weeks ago and that I believe is Oscar Hane and some others published where it was based on on Stack Overflow data right so what we have and this is this is the simple model so Stack Overflow as you know is organizing questions and answers right so and both questions and answers are natural language. So fragments of text that can be embedded. We also have information about users and questions are also tagged. So we have some rich model there that can be explored again to provide an augmented context for the retrieval augmented generation solution, right? For the retrieval phase. So what are the kind of things that we can do here? So we can do, based on my question, I want to understand how, which are the parameters that algorithm X in graph data science takes, right? And give me some details about that. So maybe there's a question that matches that, and it's probably the closest to the one that I'm looking for, and I will find that with some vector search, but then which of the answers do i provide maybe systematically if there's if there is a an approved answer i will provide that answer so i will link i will expand because probably in order to provide an answer you know it's not so much the important bits to uh to map a question but to provide the fragments of text that will help the llm provide uh an answer to it so this navigation will augment the um the context again that we can provide to the to the llm and similarly i mean you can provide the approved answer if there's not an approved answer you can look at which of the answers is provided by the by your user with the highest credibility so you can really uh leverage the the structured information in the graph to navigate and expand and extract additional information that's that's uh relevant to the to the rank process that's very similar to the example that thomas created with um um it's a simple example where he combined unstructured information in the form of tasks right so tasks uh it was uh I mean he described it as a as a DevOps uh a rack application and and you find their information about teams uh involved in tasks uh the tax the tasks affect microservices they have dependencies between each other and individuals are members of teams so again it's a combination like in the other case of uh different entities different graph information with a a node that contains textual information so in this case what uh what he shows uh that's possible to do is to go and do a vector search on the text of the task and then expand it by saying, well, this task is actually currently active and it's associated to that particular team in which this person is. So again, this is a way of augmenting the information that we have in the graph where the vector index provides just the starting point and and the the approximate match based on the question there's additional patterns or templates i believe they're called that thomas has been working on and that's that's really really super interesting and he's contributed those to the langchen. Unfortunately, they don't have the URLs here, but I'll add them after the session, where these more sort of advanced rag patterns were explored, and he provides code on how to, and I think that the two or three that he covers is the parent pattern, which is similar to the one that we saw in the previous slide. So we have a document that has been chunked in a number of fragments. You can, when you find a match, navigate up to the document and find siblings to provide an additional context. So that's the parent template or pattern. I can't remember the right terminology there. There is another one where, in addition to each of the texts, each of the fragments, it provides hypothetical questions. So actually that's interesting because you have some fragment of natural language text and you can ask a large language model to generate some hypothetical questions that you can annotate this content in the graph. And then you can do your search when you ask questions on these hypothetical questions to enhance the probability of matches in this approximate search and then navigate to the actual content by doing this graph exploration to augment the context for RAG. I think that was the hypothetical question. That's the parent. There was a third one. I can't remember which one it was. But a couple of other advanced ones that leverage this idea of context augmentation by combining this initial vector search with graph exploration. And well, I could keep going. But the other one I can think of is fine-grained access control. So in a vector database, it's really hard. It's impossible to to be fine-grained about who can access what right because all you're doing is is exploring a vector uh that's the result of of embedding natural language texts in a graph i could be very uh very specific let's go back to thomas example i mean you can do a search on a task, but because you're a particular member or you're a member of a particular team, you will only get visibility over the tasks that are assigned to your team. And that's a great way in this case, not of augmenting, but of filtering and not providing information that should not be visible to you. So that's another great way in which graphs can help in that front. The one that we're gonna be looking at today, and we started looking last week, is this combination of vector search and graph search through explicit semantics and using ontologies and taxonomies. There's more, but I think I'm gonna pause here because I've been talking longer than usual, and it's probably time for some code. But maybe you have some questions there, Alex? There were a couple of questions. So we have Tomas, by the way. I think you posted the link to his GitHub repository. So I'll put that in chat. I'll add that to the video description on YouTube as well. So you can find it there if you're wondering. At least this first one here from Ben Young. He says, I can understand the rationale of using knowledge graph in context to vector databases as context. However, the main bottleneck for me is how to construct a reasonable knowledge graph that best serves the purpose. It seems to me that create vector database is much easier than construct a knowledge graph. Yeah, absolutely. Yeah. Yeah, totally true. And the answer is, well, this probably we will not have time to go in detail on that. But again, let's think of, I mean, a knowledge graph can be the combination of data from multiple different sources. But probably what he's thinking is, OK, we have unstructured data and I want it represented as a knowledge graph. And, you know, I can chunk the data and I can take that text and embed that fully. I can fully automate that. And that's simple. Now, the process of translating that into a graph is a bit trickier, but still large language models can go a long way there. And what I'll say is maybe I can provide a pointer, because again, that's a topic that some of our colleagues have covered, and Tomas in particular. I can send a pointer to the post, but that's something that I'm planning to cover in future sessions. So you can, again, take unstructured information, tell the large language model, kind of the target model that you want to populate. I mean, what kind of entities basically you want to extract. Somehow you provide the ontology of the knowledge graph that you want to extract somehow you provide the ontology of your of the knowledge graph that you want to create and the large language model will uh return the data in a format that match much as that uh that structure so that can be that can be done and then again the idea is that these will be combined with additional information like we saw in other examples so yes i totally agree the construction can be can be challenging but there's a lot that can be done with that with the use of large language models and and um yeah i'll i'll post the the the link or maybe thomas is online i'm sure he will put it there but uh if not we will do it and and again uh the plan is to have a a session specifically on on that particular on that particular topic. And we kind of did, I mean, we didn't use, you know, large language models, strictly speaking, but if you remember in the previous session, and I'm going to run it very quickly in one go today, what we did is we run an NLP step, right? So we took the, we were looking at articles, and what I did is a pass of natural language processing for entity extraction, right? So what I did is extract the entities that you can find in this document and then link them to my taxonomy. So in a way that was probably the most rudimentary step in the direction of constructing a knowledge graph, right? So it's entity extraction. And the next step would be also relationship extraction and then curation of all that. I mean it's not a simple process and again Thomas will show his scars from that and but it's definitely something that large language models can help with. Cool, yeah. And the other point was from Jaime. LLM's context window size growing constantly LLM's context window size growing constantly has, it's more like a comment, has a similar effect as RAGE, I'm not keeping saying it, RAGE is not right, RAGE rack, on the validity of the outputs. Do you think that could help? That would eventually keep up with it? Yes, of course, that will. that could help that that would eventually keep up with with it the fact I mean yes of course that that will I mean the the problem is not so much what's technically possible or what the large language models offer in terms of size of the of the context window it's also the cost associated with that and the precision of the answer I mean think of it I mean if you and the cost the financial cost because that's how they how they you know, the cost is, like I was saying before, directly related to that, but also the time that it takes to process that contextual information sometimes, you know, and it will depend on the on the application, the rag application that you're building. But if in order to answer a particular question about x or y, you have to process a 50 page document, which I think it's technically doable. I wouldn't recommend it, but it's something that can be done. I mean, that will probably break the interactive experience, right? So, and, but yes, I mean, this is something that we should expect to be constantly changing and growing, of course, but that doesn't remove the need for us to be to be precise to be um to be as granular as we can and and and to make sure that we only pass context that's as relevant as possible and as as as reduced as possible to the to the large language model i would say yeah yeah good okay well I'm conscious of time. And I wanted to, I probably will not be able to go all the way to the end. But at least I'm planning to go the first steps. And the idea, if you remember, and I'm going to run, I had my, you still can see my screen, right? Yeah, put it back on. So I have my uh um you know a freshly uh created um sandbox uh as always uh if you go to the repository you'll already have some some information and what i'm going to do is run in a single go this whole chunk of code that what it does is reconstruct the the graph uh that we created in the previous episode and while it runs hopefully there's no errors there i will quickly go over it so remember how we loaded a collection of articles that were represented in in notes articles that just have the title the body and the time uh the date time the time stamp of the publication then we uh enriched these articles with that with the taxonomy that we took from dvpedia or was it it was dvpd i think it was wikidata i might have got that it was wikidata i think yeah exactly so i've probably named it just randomly i don't know out of my head but anyway it was it was a rich taxonomy of of programming languages technologies of all sorts so we we loaded that using NeoSemantics. And then we did, I mean, we didn't really. I mean, because that's time consuming. I mean, and it's risky to run in a demo live. I didn't run the actual natural language processing to do the entity extraction. I had done it upfront and I've saved it in a file. So I had extracted entities. So for each of the files, I already had the result of these analyses done and what I did is I basically linked my articles to the topics in my taxonomy that were actually explicitly mentioned in those articles. And finally what I did is I created a vector index and populated it using again embeddings that I have initially pre-computed. All this becomes really, really simple, and that's something that I will touch from the next session, using things like langchain. I mean, the moment you start interacting with vector indexes, providing natural language questions, interacting with LLMs, I've been fascinated by the richness of this orchestration kind of framework, langchain, that really helps a lot here. But in this case, I'm going to use basic Cypher, and that's going to be it. So hopefully that's done exactly. So we have it here. And just as a reminder, we had the articles, a bunch of articles, and hopefully the articles should have, there you go, so we have articles that mention entities, and the entities, if we navigate, will be connected through subclass of, or some kind of taxonomical classification to other entities forming this kind of semantic layer, this taxonomy, this rich context in the graph. So that's the model that we have. And if you remember, what we were doing is we described how we could compute similarity using both vector similarity and graph-based similarity. So if I go first with the simple one, the way to do vector based similarity was copy that. And so what this query does is, okay, I'm reading an article. So in this case, what I'm going to expect my retrieval augmented generation application is okay, I'm reading an article, I'm going to, I wanted to do to write a summary, a recommendation of what should I read next with some context. So in this case, rather than sending a natural language question, I mean, it's not always a chat. Imagine that this is a, I want to build an agent that analyzes my, you know, the articles that I read and sends me an email at the end of the day or at the end of the week on what should I go and read next. And I wanted to write an email describing natural language, but generated by the large language model. So the kind of thing that I will do is, okay, I'm reading this article, that's one of them, and identify it with the URL, and given this article, which are the ones that are close to it? And that's something that we did last week. If I've made no mistake there, all I'm doing here, let's look at the query. It's as simple as, well, I find the article and then I do a vector search on the article embeddings index for all the articles that, and I set a limit of the number of them that i want i'm set to seven but it could be anything uh based on the on the embedding representation of the content of these articles so that's this kind of implicit semantic similarity which returns i mean the article that i was reading a number of similar articles and the score again the problem here is that if I want my llm to write this um email saying hey you should read that uh maybe my best bet and and I'm open to suggestion because that's something that will probably treat in the in in the next session would be I don't know to to pass the the the maybe a summary of each of the articles to the LLM and say, well, that's the summary. Tell me why these two are related. Because I know they are related because that's what my score is telling me. But can you articulate it? It would be a generated answer. So it's like a curated one, but it would be generated. Maybe that would be a way so if I pass you know these uh the article plus either the maybe the full the full text again depending on how big I want I want the context to be or maybe a summary maybe that would be the context that I want to pass to my my LLM and that's exactly what I do in this um in this um um notebook that's already committed in the repository as well. So again, this is purely a cipher base. So I connect to my graph. I, if you look at what I've done, I've just literally pasted here the query that I run in the browser. And all that this query does is takes the URI and it will return a summary uh let's run this in the form of you know here's the title here's the body i mean it could be maybe the like i said before a summary or some kind of uh simplified view but again a little bit into this uh second zoom yeah and you zoom in a little bit yeah cool thank you there you go so that will be the query i mean the the retrieval uh uh request on my graph so given this article which is my my input this is the output i mean this is determined by my vector index and this is what my llm should use for the recommendation so you should go with these these and these or and explain why and the why is the tricky part because you know my best effort is to give you my you know the the raw uh text of the article and maybe you know the llm can can cleverly uh generate something indicating how the two are similar now the other way to approach this is the vector-based one. And I think, well, and this is actually what I'm doing here. And that's my, again, I'm entirely new. So I'm open to feedback here. So I believe what I'm doing here is I'm, that's not working. I've done something wrong wrong but what's it so my idea here is i've created as a basic prompt saying i've read this article and tell me why i should read next uh in a brief description of why based on the provided context and the context is the output that i've just received now maybe uh what is it what's going on here so this oh yeah the the the window maybe one thing i can do to trick these at least to make it work for now and again that's what i i was saying um and this is a bit of a hack just for the for the live session but you know some some uh smart approach like i would say summarizing might be probably better here what i'm going to do is totally truncated so i'm going to say okay set a dot body to a dot body some string let's keep the first what uh three thousand characters there you go boom maybe this is gonna be small enough i don't know still too long yeah maybe i'm passing too much or or the oh another thing i could do here is instead of returning a file, I could return just three, right? Again, I'm playing here. This is totally experimental. Three, if we exclude the one that's two, but maybe that's, looks like it went through. But again- It's just for sharing exactly. So it doesn't, I mean, for this purpose. This is the part where I'm really being totally experimental and I'm planning to have a session with Tomasz on this to make sure that I'm approaching it in the right way. And again, the Langchain framework offers a very, very, very powerful and simple, intuitive way of automating the whole chain, like the whole retrieval and the passing to the to the to the large language model and producing the answer. So here here I'm intentionally decomposing it in two steps. So I'm querying first, I'm returning the context, I'm passing the context, but it can be all bundled in a single in a single uh and a much more compact and intuitive uh or way of of representing a good lang chain but you know let's let's keep that on the side for now and and well my model returns something i mean it it's probably not ideal and like i said maybe i could be more i can be smarter on what do i pass as context but i want to provide this as as one approach so that's that's the the information that I get using a pure vector similarity now there's this monster query that we worked on last last week which does let me look at the query in detail first so that's the that's the query and this is what it does so the difference here and let it run so it's that for the same database and the same article what it's doing is providing similar articles but remember i'm going to give it a nicer name here let's call this uh and like i said i'm going to explain a little bit of what's going on here so as you can see to the right it's it's giving me kind of an explanation of why this result is relevant because this is based on the path exploration on the graph based exploration of the of the of the similarities between the two so i'll go a little bit into the detail of the of the cipher but as you can see it's telling me you should read this one and the reason why is because the original article mentions explicitly neo4j and this the recommended article mentions also it's the neo4j that's a simple recommendation but it's you know it's it's explainable it can get a bit trickier for example like for example uh new sql uh an implementation blah blah blah well the original article mentions neo4j uh the second one it mentions new sql neo4j is a type of graph database a graph database is a database management system new sql is a database blah blah blah blah. So it gives us this collection of statements that provide, I mean, it's kind of a textual representation of what we saw last week. You remember how we could see the path that connected the two and we could determine these two are similar because there's a path that connects the two. What I've done here is I've translated it into text, into statements, because the way we communicate in a RAG environment, the context is passed as text, into statements, because the way we communicate in a rag environment, the context is passed as text, as natural language, or semi-structured text. So this is the. I would be passing this type of context to my LLM. And the way I work it out, and that's what I don't want to scare people, because I know this is a bit chunky, but here, I mean, it's not super complicated. It's just a bit verbose. But again, we start with an article, which is our starting point. The article mentions a number of entities, and that's what we did with the entity extraction. I can do a path similarity search where I i set the threshold and that will return a number of related topics and for each one of them i can get um well i can get the articles because the topics will be linked to articles and then it's all about how i format the output so that's where most of the code is used and uh and this is the the this is how I compute the similarity, which, like we discussed last time, because there can be multiple paths. Well, I take only the average of them. I think that's what we're doing, right? So I believe we return the average somewhere or the minimum. Collect. Yeah, it's the average here. So I compute the average of the similarities. But the interesting bit, or interesting and at the same time the verbose, and that makes me think that maybe that's something that we want the function to return out of the box. So this return is a path. And what I'm doing in this section here is translating a path into a collection of statements and without going into all all the details i'm iterating over the notes basically over the relationships in the path remember that the path is telling me why why this article is related to this other one and i iterate over the relationships and i say well the relationship is a verb and i can get the source and the target node and by doing that i i generate a a collection of maybe triple statements subject predicate object or node relationship node and um yeah it's a bit verbose and but i it probably it probably should be something in the same way that the path, the result returns a path. We could do a version of the path similarity that returns a textual representation and basically hides that cipher. Kind of a tricky part here in the question. But that's pretty much it. And back to our context, to our notebook, I can run exactly in the same way, the same query for the same article, and that would provide a similar but different context it will be a collection of articles but it will return hopefully the text that um just taking a bit too long let's leave it to run but um one important aspect of these and I think that's the whole reason why we do this. Oh, transaction failed. Couldn't connect. Maybe some, maybe my sandbox. Since there's still being that, right? I don't know. We'll give it another try. That's not a problem. That's the thing. Maybe it's my Wi-Fi here. You still can hear me, right, Alex? I'm not. I i can yeah yeah i can still hear you it's all good but yeah maybe the sandbox had a little hiccups so there it seems to be not connecting properly but anyway the important thing and the reason why we did that is because uh if you remember uh we have control over how these recommendations uh happen and and the way to prove that is that okay I can extend my my ontology my taxonomy that's new knowledge that I have now so my recommendation at the moment is this one so that's what you should read and this is the reason why but then I say hey but there's knowledge in my organization that basically but then I say hey but there's knowledge in my organization that basically states that there's this notion of technology stacks so something that's maybe not aware to my large language model is the fact that there's technology stacks that combine different technologies and let's look at what these if I do instead of input I do a preview before importing it. So this is a classification where I say, okay, there's something called the GRAND stack that connects GraphQL, React, Apollo, and Neo4j. There's also the MEAN stack that connects MongoDB, Express, Angular, and blah, blah, blah. Anyway, that's another way of connecting things and that will probably provide new paths for exploration. So I'm extending, again, my knowledge graph by importing these additional taxonomy elements. So if I do import, is that what I did? Yeah, import. So these will enrich my knowledge graph. It will add a bunch of triples. That's represented in the form of, again, of an ontology. And the result of doing that is that I will come back to my query. Let me delete it to make it more obvious here. So if I run exactly the same recommendation query now, hopefully I get new results. And see that effectively you know up to I'm going to have to reduce it a little bit so that we can see the two side by side but the first three are obvious because they both mentioned directly Neo4j so that's natural but then look at the third one so instead of being this new SQL now suddenly becomes a beginner's guide to GraphQL and the important thing is why that's what i get why this new thing has popped up well because the original article mentions neo4j and the recommended article mentions graphql neo4j is a grand stack component and so is graphql there you go you have the connection so this context can be passed in the form of sentences to the large language model to provide an explanation of why you should go and read this maybe because you're interested in the grand stack or maybe because you are at all these are that again I'll I'll pause here because I'm I'm I'm realizing that this uh we're hitting the the time but you know yeah it's timed out somehow. The notebook doesn't like to, doesn't want to talk to the sandbox, but they're there and you can, I hope you get the idea. So we've kind of covered how to get the context. Now, like I was saying, if I, and very rudimentary how to pass it to the LLM, but we're going to be able to bundle that in a very, very compact and concise fashion using using langchain and that's probably topic for the for the next session so i'll take a pose here get back to the stop sharing and hope that made sense and that made that made lots of sense i think that's great you know you you said it's a bit verbose. It's a bit very monster query-like. But I think it makes sense because we have the problem that otherwise we don't quite understand how does the recommendation come to be. I mean, it tells us you should read this article. And why? And it doesn't give you that explanation and through through through going step by step we we give we give these we can see the almost we can see the calculations we can see how how it came came to pass and that that is that is an important bit yeah which which is not to be covered the the yeah the the explainability the attribution because you know again remember that if we ask the large language model to provide an explanation it will provide it i mean it will generate it that's the thing but i want i want real attribution and i didn't complete it there but i could provide the uris and things like that like saying hey these uh you know you can say this is the the article that you should read and maybe provide a reference or if think of um adam's uh chatbot for the for the graph academy he was able to provide a link to the point in the documentation where he got the answer from so hey the the the parameters that the uh you know jacquard algorithm takes are these and that and i know because it comes from this document as opposed to i know because because i'm an llm that can generate an answer exactly my probability score tells me that this is the next value i have to print out yeah explainability and attribution are extremely important and also equally important is the you probably saw that we have control over the information that's used to deliver the answers i have no control i mean i could retrain a large language model i could you know fine tune and we could go it's a completely different and way more a more complex and expensive process what i can do is i i is have full control over my grounding database my grounding partner to the llm which is my knowledge graph. And I can add, remove, modify over time information. And that will have an impact on the behavior of my RAG application, like we saw in this example. I think this is exactly the comment from Peter here on traceability. I mean, this is what we want to achieve, right? Why are we building a rack tie-in? It is because we want to be contextualized. We want to be traceable. We want to have a grounding. We want to know why a query or a prompt to my LLM comes back with an answer and why is that answer and where is the the the base mind the trace that you know we're tracing back where does it come from in essence exactly yeah no I was I was saying because I really I'm reading in the in the Q&A that there's a question if I will share the notebook it's already in in link in the in the github repo i might need to i mean i don't know if it needs updating or there's some error there but the one that i've run today it's already there so feel free to again especially on the lang chance lan chain side is it's not perfect and that's something that i'm a bit of a novice on but i'm i'm working hard on it so i I'll and I have good teachers here. So exactly. So the the the the repository is in the in the video description. So if you if you click that, it's a GitHub link to to the going meta repository. If you don't find it, check check the description of the video, then you should be able to see it um yeah william agrees tracing back is the key yeah and then another question from will um does uh new semantics latest 5.11 work with the current new project 5.13 i guess you would be the perfect person to answer that question yeah we're working because know, still struggling with, you know, keeping the pace of releases with the platform together. But yes, so the 5.13, I mean, 5.11 works with 5.13. So that's, I mean, what I wanted to do is have uh somehow no uh you know the nifj desktop to default to the latest to the most recent version of neo semantics instead of saying there is not a version available for you but unfortunately that's the that's the way it behaves at the moment and and uh but yeah it's just one additional manual step which is go and and copy the the 511 jar um into your um plugins directory in your 513 and it should all work fine yeah cool all right um yeah i think we are at the end uh of the session um again uh please try the repository, try the playbook that Jesus provided. If you have any comments, any feedback, let us know, either in the YouTube comments or on the community forum or on Discord or anywhere else. We'd like to reach out and we can have a look at it. It's always nice. If you like that video, if you like GoinMeta as a series, please give it a thumbs up on YouTube and share it as much as you can. Obviously that helps out a lot so we can continue doing this in the future. I think with that, looking forward to next episode, December, final episode of the year coming up. um that's gonna be probably then a follow-up to this episode i assume um yeah somehow it feels like if you're not doing something lm related you're kind of uh outside now yeah probably i mean there's a couple of open questions that we that we left today and it might make sense to but you know maybe something else will come up you know we take suggestions don't we so if someone yes we do dying to to get us you know to to talk about something in particular please let us know and we'll we'll take it into account but yeah perfect that's good well with that have a great week everybody uh great rest of your day uh and um thank you very much, Jesus, for obviously the presentation, but also for live coding again, bearing the demo gods yet another time and succeeding with it. So that's really great. Thank you all. Thank you for watching. Thank you for your nice, friendly comments, as usual. And yeah, hope to see you for the next Going for your nice friendly comments as usual and yeah hope to see you um for the next going meta episode in a month in the meantime um there is a couple of sessions coming up so i'll give you a quick little little teaser um for what's coming in november in two weeks time on the 23rd. I'm talking to Elena Colway, famous for her being a ninja, about user-defined procedures. So that's Thursday 23rd of November live here on Neo4j live. And yeah, with that, have a good day everybody. Great to see you all and until next time. Thank you. | Going Meta - Ep 22: RAG with Knowledge Graphs | 3,810 | Neo4j | 20231108 | Episode 22 of Going Meta – a series on graphs, semantics and knowledge Jesús Barrasa: https://twitter.com/BarrasaDV
Links from the Show:
Vector Search: https://bit.ly/4cuZoeS
Educational Chatbot: https://bit.ly/45RvnDz
Structure Aware Retrieval: https://medium.com/@yu-joshua/adding-structure-aware-retrieval-to-genai-stack-373976de14d6
GenAI Walkthrough:
https://bit.ly/4cxfPXW
GenAI App Building: https://bit.ly/4cxfPXW
DevOps Rag Application: https://bratanic-tomaz.medium.com/using-a-knowledge-graph-to-implement-a-devops-rag-application-b6ba24831b16
LangChain: https://github.com/langchain-ai/langchain/tree/master/templates/neo4j-advanced-rag
0:00 Welcome
6:35 Recap on Data Semantics
11:28 RAG
20:40 Knowledge Graphs to improve RAG
31:11 Q&A
36:25 Code Example
55:50 More Q&A
1:00:55 WrapUp
Repository: https://github.com/jbarrasa/goingmeta
Knowledge Graph Book: https://bit.ly/3LaqE6b
Check out https://community.neo4j.com/ for questions and discussions around Neo4j
#neo4j #graphdatabase #knowledgegraphs #knowledgegraph #semantic #ontology #rag | 2024-09-03T12:57:46.134379 |
https://www.youtube.com/watch?v=ReRH53amZ4M | Introducing Neo4j 5, ushering in a new era in graph technology that is transforming the way we use connected data. With this latest update, Neo4j is more powerful than ever. You want enhanced performance? Neo4j is unmatched in the market. Neo4j was already 1,000 times faster than relational databases, and with version 5, Neo4j widens the performance lead with new features like K-Hop query optimization, graph pattern matching, and granular indexes. You want your data to scale? With 5, Neo4j has unbounded scalability. Autonomous clustering lets you scale out horizontally with less manual overhead and better use of computing resources. And when your datasets lie in multiple databases, Fabric instantly helps you create a composite graph of insights hidden in disparate silos. You want agility? Neo4j 5 has unparalleled operational agility. Neo4j fits your IT strategy whether it's on-premise, cloud, hybrid, or multi-cloud. And with Neo4j Ops Manager and a continuous release support model, it's easier than ever to manage your database in the cloud. Neo4j Ops Manager and a continuous release support model, it's easier than ever to manage your database in the cloud. Neo4j 5 helps you achieve unbounded performance, scale, and agility. Find out more at neo4j.com so Stamina For For over a decade, Neo4j has been helping the world make sense of data. Today, Neo4j, the leader in graph database and analytics technology, is unlocking new possibilities in generative AI. Together, Neo4j and large language models allow you to use a public, pre-trained LLM with your own data, subject to your privacy controls. Reduce hallucination through a powerful combination of deterministic facts and probabilistic conclusions. And enhance explainability and transparency through explicit knowledge representation for getting transparent reasoning and explainable AI. transparent reasoning and explainable AI. Neo4j's scalable and flexible technology seamlessly integrates with generative AI frameworks like Langchain, Vertex AI, OpenAI, and beyond, democratizing access to the world's information while setting a new standard for AI accuracy, transparency, and explainability. Unlock new possibilities in generative AI today at neo4j.com. There are many types of databases today. The tried and tested relational database, the new and popular DocumentDB, and many others. And all of these have their strengths. But one place they all fall short is in handling complex connections between your data. Sure, any database may be able to return friend of a friend queries of three or four degrees, but what if you need 20 degrees and need it quickly? That's where Neo4j comes in. Unlike other databases, Neo4j Graph Database doesn't need to compute the relationships between your data at query time. The connections are already there, stored right in the database. Because of this, queries of deeply connected data are orders of magnitude faster. We get used to the limitations of the tools we work with. Neo4j blows those limits wide open, enabling queries never before imagined at speeds never thought possible. That's why Neo4j has become a key technology driving business critical applications for hundreds of Fortune 500 companies, government agencies, and NGOs. Learn more at neo4j.com. Learn more at Neo4j.com. Are you interested in graph data science but don't know where to begin? The Neo4j Sandbox is the best way to get started and learn graph data science basics with interesting preloaded datasets like fraud detection, contact tracing, and Twitter trolls. It's easy to jump right in, follow our guides and explore visualization with Bloom. There's no download required and you can get up and running in less than a minute. Try it for free today. Ketutak សូវបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបានបា� Graph databases have been the fastest growing database category for the past decade. The reason for this is simple. Graph databases are highly optimized to understand and work with the complex relationships between your data. Unlike other kinds of databases, a native graph database doesn't need to search for data connections at query time. Those relationships are already there, stored right in the database. Because of this, queries of large, complex, connected data are orders of magnitude faster. A graph database doesn't necessarily need to replace your current database. By adding graph capabilities to your current infrastructure, you can keep the technology you already have, but greatly improve it with the power to uncover rich data connections, reduce time to market, and run faster queries. Taking advantage of the connections that already exist in your data enables you to get ahead of the competition. Every business needs to leverage data relationships, and leverage them faster and more efficiently. Graph databases deliver those capabilities. The speed and efficiency advantage of graph databases has driven new, real-time applications in fraud detection, customer 360, machine learning, and knowledge graphs, just to name a few. Because of this, graph databases have become a key technology creating competitive advantage for hundreds of Fortune 500 companies, government agencies, and NGOs. We asked 100 senior tech executives, CIOs, CTOs, and chief data officers what they need to bridge data silos, boost AIML projects, and open up new revenue streams. A massive 88% of them said the same thing. Knowledge graphs. But why? Leaders know the value of data, keenly aware that it holds the answers to their most pressing business questions. They need insights to improve decision making and enhance business performance. Enter knowledge graphs. A knowledge graph is a map of everything an organization knows about insights to improve decision making and enhance business performance. Enter Knowledge Graphs. A Knowledge Graph is a map of everything an organization knows about a given topic. Knowledge Graphs help business leaders by functioning as a non-disruptive insight layer on top of their existing data landscape. Knowledge Graphs drive intelligence into data to significantly enhance its value without changing any of the existing data infrastructure. Knowledge Graphs enhance existing technologies by providing better data management, better predictions, without changing any of the existing data infrastructure. Knowledge Graphs enhance existing technologies by providing better data management, better predictions, and better innovation. Last but not least, Knowledge Graphs lend themselves to multiple use cases like fraud detection, digital twin, patient journey, bill of materials management, recommendations, and IT operations across many industries, including financial services, healthcare, life sciences, manufacturing, supply chain, and retail. The C-suite is waking up to the possibility of Knowledge Graphs, shouldn't you? Here we go. Hello, sorry. Hello, everybody. Good morning, good evening, good afternoon. And thank you for joining to this Neo4j going meta episode. It's issue number 26. And today we look at a benchmark from a couple of guys at DataWorld that was released end of last year, early this year, I think on the role and on the impact of knowledge graphs and how they make LLMs more accurate. I think that's what we talk about today. It's very interesting in general, just this topic. But before we dive in, I should maybe give you a little space. Jesus, hi, and thank you for joining. Yeah, you're totally right and we'll provide all the details and give all the context but you're right, yeah, really excited to speak of episode 26 which is great and yeah it's a bit of a special one. I mean instead of you know starting from from scratch and taking a problem we are yeah we will start from these paper, this kind uh benchmark that was published a few months ago and and we'll uh you know unpick it as we said in the title we'll we'll you know break it down you know how do you say like deconstruct it and understand what's in there and and try to put it in the perspective of property graphs and uh and the knee of the j and i hope it's going to be a we will, in the process, touch on some topics. I like doing that when we refer to previous episodes and bring things that we've already discussed. So there'll be a bit of that today as well. So, yeah, that's the plan. Any announcements that we have for the coming days, weeks? I know we sometimes use this sort of an introduction to remind people that uh you know about the other sessions that we run as well or the events that we're involved in yeah we can we can do that at the end but obviously what i what i what i want to do uh is is mention the the training series that's that's coming up starting next week and i can post a link in the chat right now as well so if um if you are interested in learning new things, getting up to speed with Neo4j, especially in the Gen AI space, we have a training on how to deploy high-performance RAC applications on the 21st of March, so in two weeks' time or so. But nevertheless, if this is not so interesting to you, then obviously we have an intro session. we have something on geospatial data and others so if you are interested in this check it out and register for for the trainings but i'm i'll talk about this at the end of of the episode so i wanted to um don't divert too much but yeah i'm looking i'm looking forward to today i think to to talk about i mean there's always uh benchmarks and and everybody publishes them i don't think um we have to go into into much details generally but i think maybe a little little heads up so you know, whatever you do there in terms of benchmarking, it always needs to be looked at with some sort of grain of salt. I think this one is, is, is maybe a little different than the usual. Um, I would say in a bunch of commas, kind of like official or something like a, uh, a benchmark done by somebody, um, obviously promoting their product. So it always comes with this caveat that you say, okay, somebody wanted to make a point and obviously you pick the queries that kind of like work well or the setup kind of like is in the favor of this. So everything is always a little bit biased, I would say. This is not really objective usually. biased i would say this is not not really um objective uh usually um so you know it it all depends a little bit on on general the background the setup all of that so if you keep that in mind then then whatever the benchmark you look at i think that's important to to have um as a as a you know always always keep that in mind but generally generally, I think today we don't want to moan or we don't complain about benchmarks. So I think we take this relatively neutral, I would say, and we try to see what they did in this benchmark and how they see the LLMs and knowledge graphs work together. And then we can see, we can give our perspective. And this is not necessarily wrong or right or something like this. It's more maybe just a little different perspective here. And I think that makes total sense and it's a good way of looking at things. Yeah. And sometimes, you know, a lot of it is I mean this is a message that we're totally aligned with I mean and probably let's kick off a bit of an introduction so yes we're talking about this benchmark that you know my good friend and Juan Sakeda who works at Data that World and a team of individuals at that company run a few months ago. I think it was October, November when they published it. And the results were fantastic because what they were exploring, and you can probably, I hope it's readable from the title, but we have the links, right? Alex will be sharing these. So these are incredible. And basically there are, I mean, Juan is a big proponent of this idea of uh knowledge first i mean we're and actually that that connects with something that jim jim weber our chief scientist uses a lot in some of the keynotes right he says he used this quote i don't know what it comes from but like we're drawn drowned in data but we're starving from knowledge and sometimes you know that that is the case, right? And this example was a great exercise, a great experiment. So what they looked at, basically, it's we'll go into the details, is these days with LLMs, and we've done a bit of that in previous episodes, we are seeing as one of the use cases, one of the users of it, is have them generate code for us. So we give them a description of the query that we want, and we get the LLM to produce the structured query, right? And that's applicable to all languages, all models. So what Juan and the team did is, okay, so we can try to have an LLM generate some SQL queries in this case, or we can create some knowledge representation of the content of the database, pass it as context to the LLM, and then see if that improves the quality of the queries that the LLM produces. And that's the idea. LLM produces. So and that's the idea. So again, the premise is, we believe that a knowledge, explicit knowledge representation should improve the quality. And exactly. I mean, that's, that's what the results produce. So that you see some, I've collected a couple of posts from LinkedIn that were quite announced some of the results, you see that this, the improvement was substantial substantial and we're going to understand why. And what was interesting is that that was really impactful. I mean, lots of people reposted it and some people or some other organizations repeated, replicated that with different technologies, which is, again, an interesting and interesting thing. The idea we're going to look at it from the point of view of graphs, of course. But Juan and the Data World team, as you can see, they call it the impact or the importance of knowledge graphs on the interaction between LLMs and SQL databases. But they look at graphs in a slightly different way than we do. And that's exactly the content of today. But I was saying that some of the teams replicated these exercises, and they came to similar conclusions, right? And the idea, we can see that it's pretty intuitive, right? So sometimes the knowledge is hidden in the data, and we have to distill it. We have to make it explicit. That's something that we've talked a lot over the 25 previous episodes, so you will see that some of these things resonate there. So that was the premise. So Juan and the team believed that that was the case and they proved it with a collection of... I mean, we're not going to replicate, of course, the benchmark, but I'm going to give some clues and show how we could set it up in an Neo4j environment and what would be the advantages of doing it in that way and give an example of how could we run one or two queries. And if anyone gets inspired and wants to replicate it or do it in its entirety, that would be a fantastic follow-up to these. So, yeah, that's – and I'm following with one, not directly with Juan, which we'll go back to in a minute, but to someone from DBT where they – one of the things that they do, DBT is a company that does – takes care about the data transformation, right, in data warehouse environments. And one of the things that they have been going to market with is this idea of a semantic layer, which is a very, very similar, I mean, conceptually is exactly the same concept, right? So they believe that layering semantic, a business oriented representation of the data that's storing your data warehouse will enable, you know, consistent and more efficient and richer use of that data, rather than having multiple queries replicating the same logic in different BI tools in different consumers, have a single layer where you centralize that logic, you have this kind of semantic layer, semantic representation, and then everyone reads from the semantic layer. And that's exactly what they did. They took Juan's example. We'll look at a little bit of what that looks like. But the idea is what you see on the screen. So layering this semantic knowledge has a number of benefits. But now in the context of LLMs producing code, number of benefits. But now in the context of LLMs producing code, this is super relevant and super valuable additional context to get better quality. And again, this is the idea what I was mentioning, right? So they saw this layer on top of your data, so that consumer applications will have a consistent and sort of canonical if you want understanding some of these topics we've used a lot, right? Is this idea of having an ontology? So Juan will call that an ontology. These people will call it a semantic layer. But again, the idea is pretty much the same. So let's go down to the details and let's unpick the experiment. So what you see on, and these is two screen captures from the paper that you've just shared, Alex. I believe you put the PDF, right? So these are the two systems, the two scenarios that they compare. So the first one, so data lives in a SQL database, in a relational database, and it's a pretty, I would say, highly structured schema, and it's all public. We'll show the GitHub repo where they published all the assets. For that database, we have the DDL. We have the description of all the tables with the columns, with the data types, with the foreign keys, with the primary keys, a formal description of what's in the database. But that's a database level description. It's a low-level description of the structure of the database but that's a database level description it's kind of a low level uh description of the structure of the database now if we pass uh that sql ddl this description of your database along with a question to an llm without no without any training and and it is a zero shot prompt so you just literally pass the question in the context which is the ddl you just literally pass the question in the context, which is the DDL, GPT-4 will produce some SQL. And we'll see, I mean, we will not see today, but if you check the PDF, you'll see that the results are pretty rubbish. It falls in a number of ways. And that's understandable. That's understandable. And we're going to see why. Because sometimes, often, the questions are described in terms of business terms that don't exist necessarily at the data level. So it's pretty obvious that, and here we're surfacing a problem that's not new. When we define a database, if you think of the process, we start interestingly with an entity relationship model where we identify the entities, we identify the relationships between them. We kind of build the graph schema, that's funny. But then at some point we translate that into a collection of tables, we normalize it, and we end up with a low level representation. That's great for querying, and I'm not gonna criticize the SQL model, I mean, the relational model, it's been there for 50 years and it's super versatile and powerful, but there's a problem is that we've lost a lot of the semantics of what our data actually means and what happens i mean if you're an expert if you if you know your data model then you will be able to write queries but if not you will find a gap and that's exactly what happens to the ddl sorry to the deal to the ll, the large language model, tries to generate questions in schema terms, in SQL terms, based on business or domain terms, because there's this gap. And that's exactly what we have to fill. And that's the role of the knowledge graph in this context. And that's what we see on the right-hand side. So what Juan and the team did is they said, okay, we have same database and we have to query it with SQL, but what we're gonna do, instead of providing the SQL DDL, a low level description data, level description of the data, what we're gonna do is we're gonna provide an ontology. An ontology, as our regular attendees know, an ontology is a domain description. It's a schema, It's a graph schema. It's a description of the entities, the labels in your graph, the relationships between them, how they're connected, the properties. So it's a business. It's a semantic description of your data. So it's exactly that gap that we're missing. I mean, it's kind of an explanation of what's in my database in business terms. So that's, of course, if we give that to an LLM, we're giving them a massive advantage. Now, the question, and we'll get to that, is what's the cost in building that? But it's definitely worth it, as we're going to see, because the results are significantly improved. Now, the problem is not the problem, but the reality of their approach is that they took a rather complicated architecture, as you can see there. So they look at graphs from the RDF, you know, semantic stack point of view, which again is something that we've discussed several times in this series, and well, they provide the ontology. They ask GPT-4, the LLM, to generate SPARQL instead of SQL. Why? Well, because they want a query that's described in terms of the concepts defined in the ontology. So it's a query described in terms of semantic business oriented concepts. Now, the problem is that sometimes at some point, you're gonna have to translate that into SQL because the data lives in SQL. And that's where they have this additional level here. Not only you have to describe to your domain, but you have to say, my domain maps to my data in this particular way. So basically what I define as a policy or what I define as a policy holder, actually maps to the subset of the records in this table. And that's where things become even more complicated because, you know, you have to do that, right? Your LLM is going to generate great sparkle, but then you have to translate that sparkle into something that the database understands. And that's the approach, which is great. As you can see here, for them, the knowledge graph is slightly different from what we see in Neo4j commonly, where we combine instance data. I mean, everything, we live in a graph world, which means that our description is part of the graph as well as the data. So we're going to see in our approach that we have a much more simplified architecture where we have a self-describing graph that contains the ontology within, right? But that's what they... Let me pause here and see if you have any questions. Does that make sense, the approach that they take? And I'm still going to go in more detail in each of the components. Okay, good. But I think it is interesting to see, but I think it's an approach that I take. I mean, I guess that's what I wanted to say in the beginning, is with different backgrounds come different approaches to things. I mean, we often see this. And you try to take what you know and adopt it sort of. And I guess that's, they come from an RDF background. Obviously this is probably why they chose this route. In a different background, you might take a different approach and I guess we show a little bit how you could do it with the property graph model. I guess we show a little bit how you could do it with the property graph model. But it makes sense. I mean, it's still already better than just standard approach. And I think it makes sense to add the semantic layer, to add this ontology towards it. It's just, you know, this double mapping is also then, I would assume, but maybe I'm wrong here, but I would assume that this double mapping could lead to errors that are not really, I mean, the left model will create errors because of the LLM being out of its depth a little bit and doesn't really know what to come back with. And the right model could come back with errors due to some mapping issues that are, I don't know how much this virtualized knowledge graph is hand manually curated or if they also had this done automatically by an AI-assisted tool, but that could then lead to issues obviously because you transfer SPARQL into a knowledge graph back into an sql query that i could see issues yeah yeah it's inevitably more complex i mean one could argue that depending on the context uh it might make sense if you uh if you don't want to move the data from where it is right so we don't want to to move the data out of my s database. There's no question I'm going to move that into a full graph environment. But the complexity is there, and we're in this virtualized approach where there's a number of the creation, the maintenance, and then the performance as you can imagine, it's having to translate a Sparkle query into SQL or in any form of virtualized federated type of logical layer, and there's multiple products in the market that do that, we know that the complexity in doing query translation in an efficient way, in a performant way, it's not an easy job. So yeah, totally agree. It's a rather complex, I would say, architecture. But I would say let's put that on the side and let's focus more on how do we and the value of making. Because ultimately what we're doing here is we're making the semantics of our data explicit, which is a topic that we keep talking about here. And we totally align with that and we're going to see that there's value in this so creating this ontology and mapping it to our data is is definitely uh uh an investment worth its worth worth the effort right and and and they have quantified it with the results uh of the of the benchmark and they show how how the the quality of the um i mean it's a great way to automate that because you know otherwise how would you quantify the the value of the... I mean, it's a great way to automate that because, you know, otherwise, how would you quantify the value of this semantic layer? Well, now we can ask the LLM a collection of questions and see how they improve or how they don't, right? So, it's a way of validating this idea of let's let's bring the knowledge on top of our data well i think it's great to see that even if you are using a more not judging here but a more maybe complicated architecture still improves your output by an incredibly big amount so no matter where you come from and what your what's your tech stack kind of is if you add a knowledge graph to your to your gen ai llm interaction it will boost boost your performance and that's that's a good you know it's an absolute good conclusion here we've seen that uh that it has i mean you can you can repurpose that i mean we've seen that all the time and again, right? How an explicit semantic description of your data can be used to validate the quality of your data. It can be used to dynamically generate data ingestion pipelines. It can be used to, you know, so many now to improve the quality of LLM. So materializing and making this knowledge explicit is definitely an investment that's going to pay off. So that's the bottom line and that's their position. So yeah, let's move on a little bit and let's explore each one of the elements. And again, these, I think you have the link to the GitHub repo and we'll go to it in a minute. Yep. But they published in GitHub all the assets. So there's the data, there's the mappings, there's the ontology, there's the questions, everything's there. So the first thing that we see is the ontology, and that's not new. I mean, we've shown many of those over the last 20, 25 episodes. And we see that an ontology, like I said before, is basically a description of the entities and the relationships in your domain. It's a graph model. I picked two fragments there where they described the notion of a policyholder. A policyholder is a class, it's a category, it's a way of saying there's a type of nodes in the graph that's called policy holders and they're identified by URI and they have a label, a human readable version of it, and then they are related to other things. They will have properties, but in this case what we're saying is that there's a relationship that's called hasPol, that connect is an object property, which is RDF terms for a relationship. And that connects a policy to a policyholder. So basically, what we're saying is there's a policyholder definition that connects a policy to a policyholder. So that's the way in RDF. And again, I will not go too much into that. But this is a formal description of the data at the business level. So interestingly, as we're going to see in a minute, the notion of policy holder, and if some of you are looking at the assets in the GitHub repository, you will see that there's no table called policy holder, which is, you know, again, an implementation decision. And that's the kind of thing that I was mentioning before. So these semantics of your data are lost. And only if you understand the DDL and you understand how the data is encoded, you will be able to surface what policyholders are and how they relate to policies. And we'll see that. But at this level, the ontology just captures the collection of entities and relationships and properties with them. Make sense? And it's described in OWL, and we work with this many times in this series, so not much more to add. Yep. The second part is the thing where, you know, it gets a bit more complicated, The important part is the thing with, you know, it gets a bit more complicated, which is where we map that definition in the ontology to the elements in the database, in the relational database. It's this mapping here. And for that, they use another W3C. I don't know if it's a standard. I don't know what the status of it. It's a recommendation or a proposed. But it's this language called R2RML. So relational to RDF, markup language, something like that. I can't, I'll show a link in a minute. But this is a way of basically describing, okay, the instances of this particular type. And in this case, we're defining a policy holder. So, and in particular, the ID, the're defining a policy holder. In particular, the ID, they identify as a policy holder. It's done in this way. Basically, if you can read in here, there is a SQL query. I don't want to click because it's going to navigate to the next page, but you see where my mouse is, right? Yeah. So we're saying that you have to, I mean, this is what you see on the left-hand side, is a subset of the table in the schema. Like I was saying, it's a highly normalized model. We have policies that are a type of agreement. There's a master table agreement, and then there's this joint table called agreement party role, which connects a policy, which is a type of agreement, with an individual, which can be an organization or a person. And they use this role model. So it's a pretty complex and indirect collection of tables, right? So you have to do a number of joins to extract what a policyholder is. And if you read this query, what you're saying is, look at that, And if you read this query, what you're saying is, look at that, we're joining this agreement party role, which is this table here, with policy, right, which actually is a synonym of agreement because they have matching primary keys on a particular column. So you describe the join here, and you filter the tables that contain on the role code PH. So there you go. That's only if you really understand the model, you know that ph is policyholder. So it's a rather complicated way of expressing, I mean, complicated, it's SQL, but basically what we're doing here is we're surfacing that and we're saying when you run, I mean, that's the way of extracting from this relational model the notion of a policy holder. So does it make sense? So that's the kind of thing. And I take just the fraction, but if I believe you put, well, we can have a look. Let me just delete this. And I think I have the GitHub repo. So if you go to the data folder, inside the data folder, I think, here we go, is the R2RML. And unfortunately, GitHub is not doing the proper highlighting of the syntax. But anyway, you see that there's a rich collection of definitions, each one of them backed by a SQL query. So basically you're saying, you know, run this query in order to distill or to extract the surface this particular business level entity so that's that's how it works and that's how the translation is going to happen when we describe a query in sparkle terms we're going to use each of these translation to uh to generate uh the relevant uh sql uh to to hit the database. Does that make sense? Yeah. So this is the second part. So we looked at the ontology, we looked now at the mapping, and then finally, well, yes. You need, like I say, you need an engine. Once you've defined that, you've defined the ontology and the mapping, you need an engine that is capable of, for the generated Spark translated into sql given the given the the the mappings right and that's what i mean i've found um uh this uh open source uh r2 rml engine i suppose juan and the team use their own i mean the the data data that world platform probably uh has their own proprietary implementation. But this is one example, basically. You can go look at it. And that's essentially it. So you pass it a mapping, and it will translate Sparkle to SQL. Right. So that's pretty much it. Does that make sense? So that's pretty much it. Does that make sense? And again, just to complete it, I mentioned before that all the platforms, all the technologies have implemented something similar. And DBT, as I mentioned before, they have this idea of a semantic layer, which kind of replaced, they don't have the notion of a knowledge graph, they don't have a sparkle, they don't use ontologies, but they have a formal way of defining in their semantic layer using yaml pretty much the same thing right so as you can see they the what we call the ontology here they define their entities right they define there's a notion of a policy right and then further down in the same description of the semantic layer they say there's a this party role code which is the column that we saw in the in the sql query they have a natural language text description of it so it must filter on this and if party role equals ph then a party identifier corresponds to a policy holder right so it's a another way of of describing it uh um and and trying to kind of provide this meaning layer of meaning semantic layer on top of on top of the data. So, again, I have the link there if you want to have a look at it. But the concept is pretty much the same, except that, like I say, they don't use ontologies or Sparkle. So, this is basically a replacement of this in yellow highlighted segment. Correct. And instead of having the ontology, what they have is a semantic layer. So they pass the semantic layer to the LLM, and the LLM will generate queries to the semantic layer that then they project onto the underlying relational databases. Okay, cool. Makes sense? Yeah. But again, the idea is the same. Let's make explicit the entities and the relationships and the domain and the meaning of our data so that we can ask questions in a much more meaningful way. So now, I've pasted again the same idea. This is the definition of their semantic layer that they, I mean, initially it's not conceived to be, I mean, you can use it for interactions for interactions with lms but they use it to to integrate with any consumers catalogs bi tools any kind of data applications right okay good so that here's the the neo4j based version and and and you know i i you can see that i i got just halfway because i didn't get to replace sql database and i'm going to do it right now. Because in this case, what we're going to do is, because I've got the question several times. What does that look in Neo4j terms? This is of course Neo4j. It's not a SQL database because our approach is going to be based on the idea of, okay, we're going to move the data into Neo4j. Instead of having a layer on top, is going to be based on the idea of okay we're going to move the data into into neo4j so instead of having a a a layer on top we're going to create semantically described described data right so it's already data that that has built in the the the interpretation uh the semantic interpretation and what that would look like is something like this i mean i don't use this kind of uh um yellow box here but, but that's what it's going to look. So the structure is the same. I mean, we're going to have the data in Neo4j, but because Neo4j is self-contained, self-describing data, we don't need to, I mean, we could, and we've done it again in multiple episodes, but you can ask the graph, and we're going to see it in a minute. We can ask the graph, hey, tell me what your shape is. Tell me which are the entities in the graph, which are the types of nodes, which are the properties, which are the relationships, how they connect to each other. So that's what these kind of back arrow returns. So I can generate, I can give you an ontology of the data in the graph. So once the data is here, you have it for free. You don't need to add any layer because it's already semantic data. So when we ask the question to the LLM without no previous, it's no SQL, of course, it's going to generate cipher. So I've just tried to reuse the diagram. But the idea is this. So what we're going to do is, hey, LLM, generate the cipher for this question and the context, I don't need to produce it, you can get it from the database itself. So query the database for a self-description of what the entities are. And with that, you're gonna be able to generate a accurate cipher. So that's the idea. So, and now is when I'm gonna jump into the code and we're gonna show how we will go about this. We're going to build a property graph and we use some of the elements that we've shown in the past. We use the ontology as a target model. We'll use the data input tool to populate that from the data that they shared in the GitHub repository. Then we're going to see how we can prompt an LLM with exactly the information as we described here. Make sense? Yeah, sounds good. Cool. Good. And then once we have it, the idea is pretty simple. We're going to test how to... We'll test the LLM to generate the... We can do it programmatically and we can do it directly on ChatGPT. So we'll try both. So let's get started. So I think I have most of the elements here. And as usual, I'll make them available after the session. So I've emptied the graph. I'm going to empty, I think, what was it? I have a, here we go, a way of removing all the schema elements because I don't want anything in my graph. So it's totally empty. That's my sandbox instance. And we have here the repository that data world have made available. So we have a folder with all the CSV files. It's small in terms of, I mean, we don't care too much about the size of the data. We care more about the correctness of the queries. care too much about the size of the data. We care more about the correctness of the queries. So we have the ontology, which is here, right, with all the definition of the business entities, all the semantic entities in our data set. And then we have the collection of questions and results. But we'll get to that in a minute. So first thing we want to do is, okay, I'm going to build a graph out of this, and I want to automate most of it. Like I said on the slide, that's something that we did in episode five. You might remember it was called something like ontology-driven knowledge graph construction, and we had a target model. That's the model that we want to build and and our source data can be relational can be whatever but we can automate the generation of a graph and the population of a graph based on on this uh on this target ontology that's exactly what we're going to do because we have uh an ontology available which is which is this one here and one of the ways in which we did it was using the data import tool because we we that we could take the definition here. Remember that here we define classes, their properties, their relationships, and use that to generate a data importer model that we can then bring here. If you don't remember that, I'm going to show it and I'm sure you're going to very quickly remember what I'm talking about. remember what I'm talking about. So this is basically the call. I'm going to paste and explain here. So here we go. So the method is still in experimental mode because, you know, this data input model is, I mean, was evolving at the time. So I didn't want to have it, you know, generate models that were not valid. So that's why I still keep it in experimental state. But the idea is that we pass an ontology as first parameter. And this ontology is just the ontology that we're looking at here. Remember when you do it, you don't take this URL because this is the HTML view of it. It is always the raw one, right? So you can either click on it and get this url or uh just uh right click and copy the link address so that's that's what we're doing right so in this case i'm saying go read that ontology and that's exactly what i'm pasting here right just make sure that this is the one it's serialized as turtle and for this example again i don't want to cover the whole benchmark. I'm going to pick just two or three classes. And I said, okay, I'm going to take policy. And I take these three because they're particularly interesting because they are really far from the implementation in the data as we saw, right? So there's no notion of policy holder. It's actually a subset of the records in the table called role, whatever, right? So I picked these three categories and i identify them with the uri in the ontology i pick policy sorry policy i pick policy holder and i pick agent right so these three are uh all three defined in this ontology and it will uh go and pick the relevant bits so if i run that, it's going to go read it, and it generates a couple of things. It will generate something that is actually a model. It's a JSON file that I can copy here and save and then import from the data import tool. It's a description. It's extracted agent policy and policy holder as I described. These are the properties that have been identified. These are the properties that have been identified these are the relationships that have been identified so all good so i can either copy these and put it on a file or i can use the other flavor of my of this method which is stream produces the output as you see it here but if you do export i believe is it export it is yeah it will save it uh in your local drive and uh that's what i've done so i call it di model.json so i uh you know it does it for me i mean instead of having to copy it and put it on a file so you you choose the your your favorite approach and then what we have is uh like this. I've saved it before, and this is the model. So if I open with, I don't know, for example, with ZLime, that's what it looks like. So it's a JSON file. I will show a little bit the details of what's in there, but basically it's a definition of all the classes and the schema that then I can import from my data importer tool. And that's exactly what I'm gonna do. So I go to say open model and I have it quite handily already here. So I'm gonna open it and here we go. It does a bit of a funny layout, but essentially it's extracted the two, three entities that I want to populate, right? So I have policies, policy holders, and agents. You will see that for all of them, it's systematic at a URI, a comment, and a label, which we can delete. I mean, it's something that's sort of standard in the RDF world, but probably in most of the cases we don't care about, so I can happily delete them from here same in the policy holder i'll delete the uri the comment and the label and in the policy i'll do the same uri uh comment and the label and what's lacking and i'll add it here is a something that I'm going to call a policy ID. But OK, so I've used my ontology to create sort of target model that I want to populate. The next thing is I want to populate it based on the data. And again, in the like I mentioned before, in the data folder, we have a collection of CSV files that I've checked out. I mean, it's public data, so you can do it yourself as well. And and we can import it here. It's very straightforward. Like I say, I have it here. I'm going to import all the CSV files, and my data importer will do a bit of an analysis. For each one of them, it will give me the headers and a bit of a sample data of what's in them. There's plenty of them. I'm going to use only a couple of them to populate this graph, but I'm showing how easy it can be to replicate and move the data into a graph representation. So for example, the policy will come from the policy. CSV, all looking good. And I will select the fields. So the policy effect, let me make it to make it a bit more readable. Because some of the fields have long names. So the effective date will be something like effective date. Easy. Expiration date will be expiration date. The policy number will be policy number. And the policy ID will be the policy identifier. So we're good. And this happens to be the primary key. So I mark it there. So done with the policy, I get the green tick. Then policy holder. Policy holder, as we saw, it's an interesting one because it comes from this file. Agreement, no, sorry, it's party role, isn't it? I believe. Party or party role. I mean, there there's synonyms in this case, so I can say that there's a party identifier and it's a primary key done with policyholder and agent is another party, so it comes from exactly the same file. And this gets hang on where is it I hope I'm making it right so is it a yeah or maybe it's the party same thing so it's party identifier and this is the primary key. So the last bit that we need to do is how do we populate the relationship? So sold by agent is, it comes from the agreement party role file. And yeah, it has the, this is the join table that I show in the slides briefly before. So this is, if you go here, the agreement party role connects the policy, which is agreement with the party role. So that's the the file that I'm using and it contains the two references. So it will have the policy, which is the agreement identifier, and it will have the agent, which is the party identifier. There we go. And the same. Now, it's important here because here's where we are kind of injecting the semantics that we saw in the definition before. So we have to filter because not all of these are agent definitions. Let me bring it up and it will be here. So if we go agreement party role. You see, remember the PH and the AG. So PH means policyholder. AG means agent. So, in other words, what this means is we have to select the fields, select the records that contain AG, and those will be agents, and select the records that contain PH, and these will be policyholders. That's something that, fortunately, we can do here by saying I want to filter, and I'm going to say that we're going to populate agents when the party role code is AG for agent and similarly when the policyholder same mapping I select the file agreement party role and we have the agreement identifier, party identifier, and same filter, except that in this case the code will be ph for policyholder. Anyway, so we have... Sorry, a little bit too quick, but you see how I did it. So what I'm doing here is I'm applying these semantics of saying, I have an understanding of how my data is organized, and I'm building it into my graph in order to now populate explicit relationships called has policyholder and sold by agent. We will not have to translate it on the fly like we were doing in the federated virtualized approach, but we will have these properties populated. So once I'm happy with this model, all I have to do is run the input. I skipped through that, but before this, I connected to my sandbox. All you have to do is pass the credentials, the URL to connect, user password, and you're in. So this is already connected to my empty sandbox here. Just a reminder that this is entirely empty. I've just used it to generate the model, and I can run the input, and it should take just a few seconds because it's a very, very tiny data set, so it's already run, and now we should have something here, hopefully. Let's take a few seconds. So if I show this, we see the two instances that we're seeing before. So we have two policies, policy one and policy two. And we see that they happen, it's a simple data set, the same policy holder, which is one, and they're both sold by agent two. So there is agreement one, policy one has policy holder one, agreement two, and two has policy holder one, agreement two. It's a simple data set. But yeah, you get the idea. So now we have a graph. So we built into the ingestion, the pipeline process, the semantics of understanding this particular fragment of the model but now i have a model that i can query right this already has my uh my graph is already at the business level it's not a physical broken down into tables representation it's something that's self-describing and what i was saying before and remember my diagram is that my model my data i can ask it tell me what's your shape tell me what's your ontology and i can do that very easily and and that's you probably know i can call it call a db schema uh for example no type properties right and this will give me a list of all the properties. So there's a policy and the policy has a policy effective date, which is type string, and so on and so forth. So we can ask our graph to describe itself and provide an ontology in a form. I mean, this is a kind of a more simplified, well, not simplified, because it's as rich as an ontology, but it's serialized as a collection of records, which we're going to pass, as we said, to the LLM. But that's that. And the same with the relationships, right? We can do, I think, there's a APOC, it's meta relationship type property, something like that. And again, we said there's a sold by agent that connects a policy to an agent and there's a has policy. So it's again, this is this is a query that I can run to my database. I don't need to create any separate artifact to to to overlay on top because my data is semantic already. So by passing a combination of these two, which is basically a description of my node types, the properties and relationship types and the entities that they connect, my LLM is gonna do a pretty good job at generating valid cipher. And that's what we're gonna test very quickly. And again, as usual, I'm gonna share the code. This is the usual introduction, right? So I've already installed Neo4j, OpenAI. I mean, don't need pandas for this one, or even RDA Neo4j. But what we do is we connect to, I mean, we set our connection. I'm going to have to input baby again. Yeah, let's do it because I thought it was going to keep it from the last session but looks like it's lost it oh no it's running it sorry i shouldn't have maybe i can still stop it stop stop i don't need it stop stop stop anyway well there you go so we have already um uh set up you know the the the credentials and the and the connectivity with the with with the sandbox that I've just created. And what I can do here is run these two queries that I've just shown you. The one that gets the properties of the nodes and the properties of the relationships. So these are two, I call them metadata queries for nodes and relationships. And if I concatenate them uh that will produce hopefully there you go a kind of a textual description of what's in my graph and it's more than enough for the llm to produce is this readable should i make it a bit bigger yeah i just wanted to say yeah i think this is yeah this is better now so there you go so it's it's telling me there's a label these are the properties that this label has this is better now so there you go so it's it's telling me there's a label these are the properties that this label has this is these are the label and so on relationships connecting from label to label that's it very very simple but ultimately it contains exactly the same information as the ontology and now i'm going to put that in a prompt and this is something that we've seen in the past is very very simple so that's the question that i'm going to go and run in a minute but my prompt looks something like this so i'm saying given a property graph with the structure described by the following list of northern relationships and i'm going to inject here my structure that i've just used so that's dynamically generated from my from my graph write a cipher query that answers this question. And I additionally say, don't explain the query, just the query and nothing more. And I pass the question. And this is where I'm going to pick one of them, one that applied to agent, for example. And I've used this one again, because this is the kind of query that an LLM would have never got right. Because if you scan the list of tables in your relational schema, there's no notion of an agent. An agent, like we saw, is a role that connects to an agreement through a party role, blah, blah, blah. So no way it would have got it. But in a graph, it's suddenly there. And hang on, where am I? Here we go. So this is the prompt. If I run it, it will generate basically it embeds what we've just produced is description of the graph model and it embeds the query. And if I run it, it will produce, this is what I call my open AI with any, at the same time, I think I had here, chat GPT open, because I can, of course, copy. No, not the cell I wanted to copy. Anyway, let's see what it does. And I could manually, if you don't want to do it programmatically, but I can go boom. Oh, no, I don't want that. I don't want to copy the cell. I actually want to copy the output. so how can I do that whoops if I do command c hopefully that will do it and I will clean that and pass a proper prompt here we go so that's that's the question. And it should hopefully do a good job. There we go. So it produces, I mean, it's a pretty simple one, but the complexity here is not in the richness of the query. It's in the fact that we're asking a question at the semantic level. So if I copy this query and I come to my database, boom, it would tell me that there's only agent two, I mean, simple data set, but again, and it's sold two policies. So I'll, again, take a pause here and maybe a similar thing, you know, like I say, we've run it here and it will produce the same, exactly the same query. I mean, this, oh, interesting. In this case, it's taking the ID, instead of doing it globally, it's doing it for, it expects the idea of an agent to return the results for a specific agent. But you know, the idea is that it's using the right, the right schema, which is again, self-describing in the graph. So, you know, long story, but what we've done here, and before I open it up for questions, in case someone from the audience, or you, Alex, have some questions. What we've done is, OK, we can if we have if we're using a platform that does not support semantics, which is like a relational database, I have to overlay it. And that sometimes leads to a complex architecture. Like we saw before, you have to create these artifacts, the ontology, the mapping, and then not only create them and maintain them, but when I run queries, I have to apply a translation that who knows what's going to be the efficiency of that query. Now, if I take my data and apply this semantic definition and create a semantically enabled model like the one we've created in Neo4j, we now have a much richer description of my data. I don't have this party role, blah, blah, blah. I have terms that I understand and that my business users will understand, and that I can pass in the way that we've seen as context to the LLM to generate queries. And that's entirely dynamic. I don't need to maintain that because the moment I add a new type of entity, these queries that extract the shape of the graph will be naturally returning the additional results, which makes the architecture substantially simpler. So that's the idea. Good, I'll take a pause here, sip of coffee and and that's that's pretty much it so that that's again we could take this gives an idea of how we could maybe take the whole workbench or benchmark and and and maybe replicate it and see the see the results that we get with Neo4j and the complexity of the architecture that we would need to build to do that. But I didn't, of course, in this one-hour session, want to run the complete benchmark, but just a simple example of how to go about doing it. Yeah, I think that was great. Great unpacking, as Tony says in chat. So I think that really helps. And I guess, yeah, like you said, we need to, in this format, we cannot really do the whole thing. But I think what you showed today is a good example of how it's done with the knowledge graph, with graph model um working uh alongside this um this data set so gives you it should give you a good hopefully it should give you a good insight of of what you can do here what what the benefits are and what you you know what what you get in terms of results uh from from this edition and i think we think we showed in the previous episodes why a knowledge graph together with an LLM makes a lot of sense and why this is really an improvement. This is just another way of, you know, agreeing, I would say, with what we found already in the past and just, you know, using a slightly different stack with the property graph model. And again, we took the same settings and we use exactly the assets that they published and in the context where data lives in a relational database. But any Neo4j user should know that the data is GenAI ready in a way. Because the last step the the code the notebook that I published there would work on any model I mean if you think of it let me let me bring it bring it back again and share my entire screen if I share this let me bring back the notebook I mean mean, all this notebook does is, after the installation, it connects to your database, and it will go and retrieve a description. I've let this because initially I was hard coding it, but I completed it in the end. All it's doing is dynamically retrieving from the graph a description of its schema. What's in that graph? What type of entities? What's the relationship? What's the ontology of your graph? You retrieve it dynamically, I don't need to do anything. You run this query and it will work on any graph. You have the movie database, we'll run it and we'll get the entities. Whatever you have, these will retrieve that schema, and you can pass that. This code will work. All you need to do is change this to be a business relevant question in your domain, in your model, and these will just work and will generate a high quality cipher for you without having to maintain all these elements. So that's one of the added values. I mean, the outcomes, I would say, the takeaways from this session, that a graph is kind of a self-describing data set. It doesn't only contain the data, but it can tell you what it's like, what's its shape, what's its structure. Yeah, perfect. That was good. There was one question from Fabrice on chat. As you pay for tokens, it could be expensive with a big ontology schema, like a big prompt. I mean, yes, it could be. But if we think of it, I mean, the answer is it can grow. But the ontology will never be data size, instance data size. So you can use portions of the ontology if that becomes an issue, right? Here, we're working in the context of a single data set. And ontology is never going to contain more than what? 100 hundred entities, you know Few hundred if you're looking at an enterprise-wide one I mean an ontology can I think they can become big when they're combined with with additional additional elements But it's I mean if it becomes a concern, I would say maybe pick a portion of it, right? No, maybe you don't need to pass the... Now, the question of how to filter, how to select might be a tricky one. I mean, ontologies are sometimes broken down and divided in domains or in subontologies. So that might be another way of doing it. But yeah, for a point, it would be an interesting exercise to to understand i mean i haven't looked at them at the metrics but but measure a little bit of how many tokens are being exchanged in this case which is a small one maybe in a larger one and and see whether that can become a an issue no no i mean that always needs some fine tuning but i think that's generally yeah you know um i mean the question really is here do you want to get the right answer or is is the answer not relevant and if if if that is not a business problem then i guess it doesn't matter but if it is a business issue and can create problems then I think the cost for the tokens is not going to be the high priority. Let's take one more question before we wrap up today from Arunan. I worry that this may produce unpredictable results in production. More of a comment about LLM output rather than the structured nature of the ontology. it's true i mean it depends i mean i think in the case i set the temperature to zero to kind of i mean not guarantee but at least have the more predictable query generation but yes i mean any anything where you where you introduce an llm you know by definition is generative and therefore creative and therefore sometimes to some extent unpredictable. But, and again, there's a bit of a question here on where would you want to put a solution like this one? I probably, my personal opinion is that I think this type of helpers when generating queries will be more co-pilot style rather than something that you ask a question in English. I don't know if you will, maybe there are scenarios where you want to give business users or non-technical users access to a dataset. I don't know if it will be kind of entirely open-ended and just let them talk and hope that the LLM produces something meaningful, but yes, that's inevitable. And you will have to introduce guards and controls. And that's the nature of LLMs, right? So you can introduce some guards from the Knowledge Graph side. But that's going to be a factor, of course. MARTIN SPLITTINGER. Yeah, but I think the more you add there, the more more guards the more structure the more context the more the more that a knowledge graph can help avoid that and make sure that you don't get um you know it results you don't want to want to see so but yeah it needs it needs fine tuning it needs it needs uh it's not something you can just copy and paste and run with it. It needs, it needs a little bit of work. But once I think as this, this whole exercise today showed, I think it is, it is worth it in the end because the output is just so much better already with this showcase. And I think it'll just improve over, over time. Cool. Before we wrap up i i as i promised initially i just wanted to give you a quick um little um demo or rather promo of what's what's gonna happen in the future so again the um So again, the workshops that will be running in the next few weeks is available for registration. So if you want to go there, check it out. Also, O'Reilly is running a knowledge graphs and large language models boot camp. So perfect fit, I think. If that is something you are interested in, they offer also a 30-day trial for free. If you want to join these, they are happening actually next week and the week after. So check it out for that and you can subscribe for 30 days for free. And then next week, next stream will be together with Johannes where we'll talk about entity resolution and deduplication with Neo4j and GenAI. So you see obviously the topic GenAI as a whole is sticking around for a while. So we offer a bit more on that front. But that should also be an interesting session. Also, again, touching obviously knowledge graphs and how that all fits together with this entity resolution and entity duplication in the next episode. Yeah, and I think with that, we are at the end of today's episode. Again, thank you very much for your comments and your questions. Thank you for watching. Thank you very much for your comments and your questions. Thank you for watching. If there's anything else that comes up after this episode is now over, please use the YouTube comments, give it a like, and follow our channel to get notified for the next ones. And, yeah, thank you very much for presenting and for preparing and for sifting through that benchmark. I think it was not really as mentioned, but I think very much appreciated that you looked at all this data, lots of CSV files, lots of things. And just to point out that this takes a lot of work. And thank you very much for doing that, for making that presentable in a very concise one hour session today so thank you perfect you know and as always all those materials will be soon on the github repo so we'll we'll make sure that happens right yeah cool thank you very much uh see you around and uh yeah until next time uh bye bye everybody see you all bye | Going Meta - Ep 26: Unpicking the data.world Benchmark on the Role of KGs in LLM QA | 4,404 | Neo4j | 20240306 | Episode 26 of Going Meta – a series on graphs, semantics and knowledge
Jesús Barrasa: https://twitter.com/BarrasaDV
Repository: https://github.com/jbarrasa/goingmeta
Knowledge Graph Book: https://bit.ly/3LaqE6b
Previous Episodes: https://neo4j.com/video/going-meta-a-series-on-graphs-semantics-and-knowledge/
A Benchmark to Understand the Role of Knowledge Graphs on Large Language Model's Accuracy for Question Answering on Enterprise SQL Databases: https://arxiv.org/abs/2311.07509
Benchmark Data GitHub: https://github.com/datadotworld/cwd-benchmark-data
r2rml: https://www.w3.org/TR/r2rml/
r2rml github: https://github.com/chrdebru/r2rml
dbt semantic layer: https://github.com/dbt-labs/semantic-layer-llm-benchmarking
Neo4j Sandbox: https://dev.neo4j.com/try
Going meta - Ep 5: Ontology-driven Knowledge Graph construction: https://youtube.com/live/05Wkg1p34ek
Data Importer: https://data-importer.neo4j.io
#neo4j #graphdatabase #semantic #knowledgegraph #benchmark #llm | 2024-09-03T13:08:14.901104 |
https://www.youtube.com/watch?v=47waPARlTMU | Hello everyone and welcome to this edition of the Permit-CIA webinar series, where Joaquín Dopazo will present Digital Twins for Drug Repurposing by Integrating Mathematical Modeling and Machine Learning, Bridging the Gap between In silico and Real-World Data. My name is Marta Llorell Linares and I'm involved in the Permit-CIA project on behalf of EMBL-EBI and I will be hosting this webinar today. Before we go any further in the webinar, I'd like to let you know that the webinar is being recorded and will be disseminated afterwards on the Permit-CIA website and our YouTube channel. our YouTube channel. After the presentation today, we will have some time to address questions from the attendees, so you can write your questions on the Q&A function in Zoom. And all the material that we share are licensed under CC BY 4.0 license, except where further licensing details are provided in the material. PERMIT-COE is the HPC Exascale Center of Excellence for Personalized Medicine in Europe. It focuses on simulations of cellular mechanistic models which are essential to translate omics data into medical action. The performance of the current simulation software is still not enough to tackle certain medical problems, such as tumor evolution or patient-specific treatments. So, PERMED-COE will scale up cell simulation software to HPC exascale system to enable the creation of models of cellular functions of medical relevance. And it will achieve this goal for a series of objectives. First of all, it will optimize selected cell-level simulation software to run in pre-exascale platforms. Second, PermetCIA is working on a series of use cases to show the use of PermetCIE solutions in areas of clinical relevance, such as tumor evolution or drug synergies for cancer treatment. PERMED-CIE will also train biomedical professionals in the use of PERMED-CIE tools. It will integrate personalized medicine communities into the European HPC exascale ecosystem, and it will build the basis for the sustainability of the Center of Excellence. Today's presenter is Dr. Joaquín Dopazo, the director of the Computational Medicine Platform of Fundación Progresa y Salud. He holds a degree in chemistry and a PhD in biological sciences. He has previously worked in public centers like the research center Principe Felipe in Valencia and the Spanish National Cancer Research Center in Madrid and companies like GlaxoWellcomeSA. His field of work is personalized medicine with an approach that includes systems biology, mathematical modeling, and artificial intelligence. So Joaquín, the floor is yours. Joaquín Pérez- Totally. Let me thank the organizers for inviting me to share some of the ideas of this conference, which has a very long title. Actually, the title was going to be shorter. But after several discussions with some friends, they convinced me that what we were doing with the models was actually a digital twin. I have always been a bit skeptical with the concept of digital twin, especially in medicine. I would comment why. But to some extent, something which is important and probably is the core of the presentation is the use of models or the use of approaches that at some point convey causality that can be used for prediction. So, that would be the definition of a digital twin. There are replicas of objects or physical systems that typically are used for simulating events or interventions in the real world. Point, as I mentioned briefly before, is that in the context of medicine, digital twins have been proposed as a methodology like in other industry or in other fields, but it's a quite controversial concept in medicine, given that build up such models or such digital twins requires a lot of knowledge of the system. And we are far away of having this detail of knowledge of the human body that is at the end what we want to simulate. So, I mean, that would be the idea. If we have a system, we can decompose the system in pieces. Actually, we have different levels of organization within the system. For example, these pieces that you have on the right, I don't know if you can see the mouse. This small collection of pieces would belong to an organ of the car, that would be in this particular case the engine. And we can simulate the way in which these things, or these levels of organization work. So one thing are the pieces and you have below the pieces at work. So if we can simulate the operations of the pieces, how pieces interact among them and simulate this operation, you can simulate what is the functionality of the system. what this I mean the functionality of the system in that case most of the way with the system words can be simulated because we we know absolutely everything if we are in the context of medicine we have a system which is immensely more complex than a car and a car is complex immensely more complex than a car. A car is complex. This is much, much more complex and is poorly known. So we still lack a lot of knowledge. We have also these different levels of organization. Typically, we think in the full body. We think of the organs and we think of parts of organs, and we have at the end, behind everything, the molecular level detail. That would be this representation. And here, the problem starts. So we don't have a clear idea of how these pieces work, how these pieces, in some cases, we have some description of how pieces interact among them, but we don't know exactly how do they work. respond to a digital twin. So we cannot simulate, I mean, we are really probably decades or less, let's talk about decades being optimistic, but many decades away from having a proper simulation of the human body, but still we can simulate some parts of the system. But as I mentioned, we neither know all the pieces, nor we know how to relate to each other. So then how realistic can our digital twin be? So what can we do? Obviously, it depends very much on the speciality of the people working with this concept of digital twins can simulate one or another level. We typically want to focus the problem from the lower level, from the molecular level. So the idea would be we can have a molecular map that describes how molecules interact among them, proteins typically. We can model this interaction, typically, we can model this interaction and we can go from this molecular behavior to the behavior of the organ. That would be the general operation of the system. We can put some context when this general operation happens in a specific organ or happens in a specific condition in a disease or whatever and then we go to the body. Obviously, we are not going to simulate how the body works. We are going to simulate specific phenotypes. We would be very happy if we could go from the molecular level to define a specific phenotype, specific phenotypes. If we would be very happy if we could go from the molecular level to define a specific phenotype in the body. Then we use these maps, these molecular maps that are pathways, probably you are very familiar with pathways, quite common. So you have many different repositories, probably the most known is K, you have ReactOM and the EBI. There is this Wики pathways, there are more, there are other more specific, etc. There are also initiatives like disease maps, where they take specifically the parts of these maps that are relevant for diseases. So there is information, there is a reasonable amount of information on the table map of selectivity that describes the way in which molecules interact with each other. activity that describes the way in which molecules interact with each other. But what's important here is not that the molecules interact with each other, it's why they interact. They interact because they ultimately trigger cell functions which are responsible for the behavior of the cell and at the end the fate of the cell, but at the end the fate of the organ and at the end the fate of the cell but at the end the fate of the organ, at the end the fate of the patient. And this is what we want to simulate. So firstly we have to focus on the concept which is important here, which is the function. So we have these maps but not all the proteins do things by themselves. So some of these proteins simply are transmitting a signal or doing and the proteins do typically do several things. So something which is important is to identify important is to identify the level at which this map is describing an activity, a cell activity, or a cell functionality, which at the end is a holistic concept. So it's not, we have to forget forget the protein-centric or the gene-centric interpretation and go more for a holistic interpretation. There are like three levels. One level would be the pathway level. I mean, let's start by the gene level. So the gene level is what we are used to. So we have this gene which is over-expressed, then we start to think why this gene is over-expressed and what would happen if I inhibit this gene or whatever. But the effect of a gene within this context is quite complex many times. For example, in this case, which is, this is the summary. It's not because it's more complex, but it's a summary of the apoptosis pathway. And typically the pathways recapitulate the biological knowledge on many things. And typically there are many functions that are carried out by the pathways and typically these functions are contradictory, are opposite, not contradictory but opposites. For example this pathway, the apoptosis pathway is responsible both for cell survival and for cell death. cell death. So, if you just think of a specific gene, probably you will have not a clear idea of what is going on. For example, this gene BAD marked here, depending on the partners, it could be involved in apoptosis or it could be involved in apoptosis or it could be involved in survival. So having this inactive says nothing on what is going on in the cell. If you go at the level of the pathways, say, okay, there are many yeast in this pathway which are active. So well, but what is the pathways doing? Is it killing the cell or is it making the cell to survive? So the point here is to go to the specific circuits, to try to look for the activity of specific circuits that at the end do something. So the first thing that we have to do is to identify this part of the pathways which are responsible for a specific function. And from a mathematical point of view, you have to identify those proteins. I mean, this is a directed tree, right? So you have arrows, and these arrows mean some type of interaction. It can be an activation, many different types of activation, or can be an inhibition. So there are arrows that go from one gene to another. So when you have one gene that receives an arrow but there is not an arrow from this gene to another and has an annotation, this is the gene which is at the end of the cascade and do something, right? So we need to detect all the cascade and try to see if this cascade is active. So as this last gene can do whatever the gene is annotated to do, right? So the first thing that we have to do is to define this circuit, which has to do with this functionality. So what we do then is to take a pathway and to identify this final, this effector nodes and the compose the pathway in all the, I mean all the pieces of the subcircuit, that all the part of the subpathways that define thisor. So this, this, subs, these circuits are our units of functionality, right? In which we are going to focus. So we use a specific type of mathematical modeling, which is called mechanistic, because typically conveys causality, right? So the idea is that by using this model, we can provide a quantitative representation of what the cell is doing, right? We use pathways to define these circuits, also how proteins interact with each other. And ultimately one of these proteins triggers a cell functional activity. We define these functional endpoints. And what is interesting here is that we use gene expression as data. And this is the main reason for which we are very focused on this molecular level. Because gene expression nowadays, we have lots of gene expression data. I mean, really lots of this data. There are data which are cheap to obtain. They are quite accurate with the current technologies. And you have measurement for, I mean, absolutely anything that you can imagine. So, there is data we need a lot. So how does this model work? Let's imagine that we have this circuit here on the top right, which would be a very simple circuit composed by two receptors, which would be proteins A and B. Then you have the rest of proteins C, D, E, F that has this topology, this specific topology, we have a different topology. And finally if the signal arrives to G, then G triggers some function, typically by Many, many, many, many ways can be that, right? Then what we can do is to infer what would be the activity, because these are proteins. So we use gene expression as proxies of protein presence, which are the genes of proxy of protein activity. Many people discard that. I mean, you have a gene expressing, it doesn't mean that you have a protein. Yeah, good. But we are not focusing on a specific, unique protein out of context. We are talking about the expression of a chain of protein, which are really a biological subsystem. So if all these proteins are, all these genes are up, it's probably because there is a biological program behind that has accounted for this, right? So it's quite realistic, this assumption. So let's imagine that you have this condition A, in which we have active the genes A, C, D, E, and G. So if we put these genes over this circuit, you see that if A receives a signal, the signal can be transmitted until G that does the function, whatever it is. But you can have a different condition like B, in which you have Gs B, C, E and G active. So you have, under this condition, if B receives a signal, the signal cannot pass because V is making a short circuit. So in that simple way, I mean we use a quantitative formula, it's a little bit more complex, but we use this formula. So using the levels, sorry, of gene activity, we can guess what is the level of activity for this specific function. So this is quite interesting because we can convert something that we can measure, which is the expression, and it's easy to measure, and it's cheap, and it's accurate, in something which is more subtle, which is cell functionality. How can we measure cell division, for example, the activity of cell division? We should prepare a very complex experiment to measure, well, maybe not that complex, but other functions can be really complex to measure. but other functions can be really complex to measure. And that way you can measure the signal that the cell that decays to trigger this function, which in theory should be related to the function, right? So sounds good, but does it work? Let's see. So we took the data of the TCGA, the Cancer Atlas, the Cancer Genomic Project, etc. a lot of data on gene expression of the tumors, biopsies, and we know what happened with the patient, if the patient dies or goes well or whatever. Then we can define functions that has to do with cancer. Probably you are familiar with this whole mass of cancer, which were defined in 2011, and now they go from the third or the fourth definition. There are specific cell functionalities which has to do with the conversion of a cell into a malignant cell, right? So you can even define easily the circuits that trigger DNA replication. So you take patients, some of these patients survive and some of these patients die. You calculate what is the level of DNA replication in the tumor of the patient. And what you can see here is that clearly patient with a high level of DNA replication die more than the others who has not so high the level of replication. Okay, maybe that's by chance. But you identify other holmas like anti-apoptosis. So patients here on top right, patients with high anti-apoptotic activity die more. They have a worse pronostic. Cells, you go top right down. Patients with a low cell, low in that case, cell aggression activity, which probably means that the cells which probably means that the cells go and float in the blood and can colonize and make metastasis, distant metastasis. So these patients die more. So they have a worse pharmacy. For example, angiogenesis patients, which have high angiogenic activity, they have a worse form of disease. So what we did here was to convert what it means of the model gene expression into cell activities. And since we know what cell activities are relevant for the cancer, we can identify them and you see that actually patients with these cancer activities have a bad prognosis. Obviously other unrelated activities have nothing to do with survival of patients and they don't separate patients. And something that you have to keep in mind is that the activity, we are not measuring this activity. The activities are construct. We have inferred the activity from the model. We are not measuring the activity. So we have a tool that to some extent allows us to convert gene expression, which is something that we can measure, into cell activities, right? So that's nice. That's nice. But there is a nice property of this model. This model allows to make causal predictions, meaning that I can take a condition, for example, this condition here, that is represented here, where you have A, C, D, E, F, and G active. So the signal goes through with no problem and the function is active. And then I can simulate, for example, knockout. I can simulate interventions. I can simulate knockout, but I can simulate over-expression. It could simulate the over-expression of B. I can just change the values of the condition and recalculate and go back and see what would happen if I do something. This is a qualitative change because I don't need to measure things. I can have a condition and say, what would happen if I would do that? For example, if I do a knockout on the E gene, probably nothing happens because the signal still pass through F. Maybe it's not that efficient, but it still pass through F. But that would happen if I in D, the signal does not pass through this point. So I have switched off this sequence. In that case it's quite obvious, but you imagine that quite the very complex pathway that we have, you can do that and you can search for a specific intervention that have a specific result that you are looking for. So let me show you this. Again, it sounds good, but it's realistic. Yes, it is. Obviously, it's a prediction. We cannot pretend that all the predictions work, but it works quite well. And actually, this is probably the best paper of my career. This is the paper from which I'm really, really proud of this paper. So what we did here was to predict a knockout that was never done. Well, I mean, we predict several looking for a knockout that kills or reduce the viability of one cell. And we went to a experimental group and we did the, with them the the validation and actually the predictions were correct. So you see this type of approach is completely different to the approach that we are used to. Typically, this type of things are driven by experimental people. And sometimes people who just analyze data, they say, okay, can you analyze the data? And you do the analysis, but the ideas come from the experimental part. So now we are in a moment in which we can start to think in a different way and say, okay, we have model, we have data, and we can start to formulate hypothesis from the in silico directly, right? So this is an introduction to this type of models to show you that they work. So why repurposing? Why we are interested in repurposing? I'm going to talk to you about repurposing in emerging diseases. I mean, we have time. I will tell you another field in which repurposing is also interesting, which is rare diseases. So in the case of emerging diseases, and this is the case of the COVID pandemic that we have a few years ago, and we still have. And actually, we have now two strains which are very contagious. So be careful because now the number of people that go to the hospital is increasing. The number of hospitalizations is increasing. So what do we need in one of these situations? We need something quick. We need a quick response. Quick response, you cannot start from the scratch with a drug. So something which is interesting is if we can repurpose any type of drugs. We need a quick implementation. A drug, because this drug is already in use, can be quickly implemented because it has a known security profile. It has a known mechanism of action, it's already registered, so legally it's quite easy in comparison with a drug from the very beginning. What is the problem with emerging disease? Most of the details of the real mechanism of the disease are unknown. So we have to try to see how can we relate this drugs and the mechanism. So what would be the solution? The solution would be to use this method that I told you, that I have been commenting. And how can we solve relationships that are beyond the models using causal machinery? So I'm going to talk about that. So in this moment of alert, so there were different projects, there was a lot of money. There never has been a lot of money in this country for research. But there was money and they funded several projects. So we got this project and we took advantage that they were involved in this large-scale project, in which they were defining the COVID-19 disease map. So there was a large collaboration that involved more than 230 different biocurators, domain experts, modelers, data analysis, etc. from 120 different institutions in 30 countries. So it was a quite interesting initiative. So finally, they come up with a quite detailed map that we can use as basis for the modeling, right? So we modeled it and we published it in a couple of papers, etc. But here is the importance. So we have the map, right? So now we can model the map with the algorithm that I mentioned. The problem is that, okay, if the drug, if other drugs have a target in the disease map, so the effect of this target, of the effect of the drug on this target, can be easily modulated, as I showed you before. So we can just put the expression data and say, what would happen if I inhibit that? Oh, that's good. It would inhibit this part that has to do with respiratory tract or has to do with quadrupedies or whatever. But what happens if the drugs are outside of the map, which in many cases is what happened. So this is a problem. This is a problem because we cannot model that. I mean, we cannot directly model that using the mechanistic model, right? cannot directly model that using the mechanistic model, right? Then we have to imagine how to relate that with the map. So at the end, the map, as I mentioned in the very beginning, is a collection of proteins and arrows that link these proteins. Any of these arrows is biological knowledge. It's a lot of experiments and validation, etc. Demonstrate that this protein actually is activating or inhibiting this other protein and no other, only this protein. So then you draw an arrow. And this is a lot of experiments, etc. etc. Right? And the is that currently, one third of the genome can be modelled. The other two thirds are not linked by, are not part of this relationship. In other words, we don't have a knowledge, a real knowledge of what is going on between these proteins, how these proteins interact to do something. This is why, this is one of the reasons for which this digital twins has a lot of limitation in the field of when we want to make a real data to it, right? And the problem is that drawing one of these arrows, it takes a lot of time. It's generating, it requires the generation of this biological knowledge and it takes time. So maybe we don't need to do experiments because we have lots of experiments. We have lots of gene expression data and maybe we can use this to in the context of machine learning to try to learn biology from the data. So, I mean, the idea is fantastic. Why not? I mean in other cases you can learn, for example, there are lots of applications, machine learning and deep learning, etc. which they can learn, for example, in image case of image, it works very well. So you can take radiographies or you can take some different types of medical image and you can decide if there is a lesion or there is a problem or whatever. And in some cases, the level of accuracy is even higher than the human experts, right? So at the end the software, the network, the algorithm, the machine learning algorithm is learning from the data and you do more data, you say okay you can learn what happened with these pixels or what pixels are important, and then you learn how to identify something. So this is something that we could do, in theory. We could take all the gene expression data and say, okay, just learn. Learn how proteins interact among them, and what type of interaction it is. Well, I mean, it is realistic, no, it is not realistic because no matter that we have lots of data, we have the course of the initiality. So we have still more variables and the interaction of the variables are much more complex than the interaction that we have pixels in an image. The pixels at the end, they are important pixels around, not the other pixels in the other corner of the image. So, in principle, we are not going to solve all the biology in one shot. This is for sure. But something that we can do is to reduce the initiality of the problem. Not being so ambitious, not trying to learn all the biology in one shot, but something that we could do is, okay, we have a part of the biology that we know, that are these circuits and these circuits involved in the disease, and then we can try to see, okay, what are the relationships between all the proteins and all proteins among all the proteins? But I'm interested in a specific set of proteins, which are these proteins that are targets of drugs already in use. And I can try to learn what are the relationships between these proteins, which are targets of other drugs and the proteins which are in my map of the disease, right? And that case is something which is affordable, the problem. So what we did was to try to learn, well, I mean, the idea is we try to predict from the activities of the targets, of different targets, we try to predict the activities of the circuits, right? the circuits, right? Obviously, most of the drag targets cannot predict anything because they are completely unrelated, but some of the targets can predict some parts of the activity on this map, right? We use, I mean, we use here, I say the multi-help with random forest regression that was used in the original paper but we have tried other approaches I mean they work more or less I mean there are two or three approaches that can work and something which is more important that use this sharp which is more important than use this sharp score,put, multi-output, because the input is R, all the targets and the output are all the circuits. I say, okay, can we predict one for the other and what is important in predicting and what, right? So what we have are diagrams like this in which we can see that some specific targets, but the rest are not important as you can see here, but some specific targets were important for some specific... These are the drugs on the target for some specific or general parts of the pathway, right? Then we are not interested in those drugs which are acting on absolutely everything because probably they're killing the cell and that's it. And actually, it's curious because you have the chloroquine that was very famous at the very beginning because they say, oh, chloroquine works. Yeah, chloroquine that was very famous at the very beginning because they say oh chloroquine works yeah it works and kill the virus kill the cell kill you and kill everything so this is not it's not what we are looking for but there are other other more specific drugs which are more specific of some hormones of the disease. They go directly against, for example, the immune response, exacerbating immune response, against the energetic, inflammatory response, etc. So when we started with this, we made a prediction and said that we didn't have any means of checking if they are real or not. So something which was nice was to realize that in the moment in which we were publishing it, they published this paper, it was a review on the trials that were testing treatments for COVID-19. And there were trials for all the, I mean, we have predicted targets for all the drugs that were trying here that have a target, obviously, other drugs didn't have, I mean, other treatments didn't have a target. Some of them were something like inhalation of gas or something. They don't have a specific target. But those who have a target were predicted by the method. So you say, that's okay. It's not bad. It's not an issue, but we are at least oriented. So we thought, okay, let's go for real data. So we have this big database in Andalusia, which is the population database, and all the health system, which is the lies here, as in other places, all the system every month, on a monthly basis, they dump all the information on this database, which is consolidated. So it has information of absolutely everything that the system has recorded on a patient for over 15 million patients of the health system. So what we did was, I mean, given that there were lots of problems for using the data, what we did was to build up a trusted research environment, which is essentially a computational place in which you can do all the calculation that you need, but in a secure and controlled place. So what we did was to set up this infrastructure, which is available for research, which is a computational infrastructure, which is, which resists within the network of the hospital network. So it's not outside. So in that case, it's quite easy because you can request. Now that would be the typical, on the right is the typical pathway that follow with the typical analysis of clinical data. Typically you request the data to this committee and there is an approval, then they allow you to take the data, the data must be anonymized, they are extracted and they are used. So, in many cases there are problems and the committee rejects the project because there are doubts of security outside. So what we did was just internalize that. So the analysis can be done within the hospital in a trusted environment. And the only thing that is the result. But in that case, it's quite nice. So, okay. Then we start to check some of the drug that we predicted. We started with this because there was a lot of interest in the community because there were several groups working on the vitamin. And actually, there is the calciferol or vitamin D has a clear protective effect. And I'm still having this calciferiol because it protects. It's a fact, right? As you can see here, it has a significant protective effect. And this is not just that you see what happens with the patient that has a consume of vitamin D or not, is that you take into account all of these variables that could be affecting, which are sex, age, other vaccination, other comorbidities like diabetes, disease, cancer, et cetera, et cetera. And you have to take into account that. Meaning that when we say that something is protected at any age, at any sex, at any comorbidity, at any et cetera, right? So it's not that there is a correlation. It's a causal analysis, right? So we did the analysis in more detail and we came up with 21, this is a second revision so probably will be published soon, we came up with 21 quite effective treatments. Some of them are really, really, really effective. And one, I mean, actually there are more, but there are not significant because there were few people. But some of them, some of the drugs were counterproductive. They increased the mortality. What is interesting is that we can not only see if it has an effect, a significant effect, in the mortality of the patient. We can see also if there is a biochemical background behind. So we have access to the analytics. You can see how the lymphocyte count in these cases, the cases of the drug that protect, actually goes very well. While, for example, in the case of the drug which is causing more mortality, you have a lower lymphocyte count. So there is a biochemistry behind which is validating this. So what is interesting is that there is an enrichment, so many of these drugs were predicted by us. So this is something that at the end makes us very happy because we were really predicting correctly. So we did the prediction and then we could do the validation using this real-world data. I was going to mention something on the red diseases, but I'm going to jump on it because maybe we have a short time. But, I mean, it's a similar case. So we essentially did other molecular mass for, we make in an industrial way, many, many molecular mass for many rare diseases. And then we are validating now the results. So we have some cases which they have validated some of the cases with this previous publication that we are publishing that for retinitis, for anemia, for conianemia, for others. So just formulate the hypothesis, to learn how these proteins relate to the maps using public data, and to do a validation using real-world data. All the process from the formulation of the hypothesis to the validation has been done without doing any single experiment. Of course, these experiments were already done. I mean that we particularly didn't do any single experiment. But the important message is that there are lots of data. So for many things that you can do, there are already data that can be used. This is, to me, this is a new paradigm for knowledge generation, because at the end you can use models, modeling you can do modeling which involves biological knowledge and is causal and you can try to to to make prediction and you can validate the prediction using real world data this i mean is far away from what we would expect for a digital twin, but it's a little, it's a little twin, little digital twin that actually works. So, I'm finishing here. I'm really happy. This is the group, these are our sponsors, et cetera, et cetera. This is the hospital where we work, it's a, behind the Rocio. And I'm really happy of taking questions that you have. Thank you. Thank you very much, Joaquín. Yeah, as Joaquín said, if you have any questions, you can write them in the Q&A function in Zoom. And while you do that, I can ask with a general one. OK, so you're telling us now you did all this without experiments that you did, because experiments on the data were already there. How do you see that going forward? So the relationship between the models and the experiments? Will you have like a circuit where, you know, models inform experiments and then inform models, the experiments inform models and all that? How do you see it? Well, I mean, the way we see that is you could enrich the map that you have by validating that your prediction. For example, you predict that that drug is what happened here. You predict that that drug has an influence on this circuit and it will prevent coagulopathies. And actually the patient has no coagulopathies. This is what I observed in the patient. Then I can draw this new line. So for my next simulation, I have a new arrow. So I have a more complete map, et cetera, et cetera. So at the end, this is a real way of learning biology from the data, by science and error. Yes. Thank you. So there are a couple of questions now there. So one is how to distinguish a correlate relationship from a causal relationship? Yeah, good question and this is actually the core of the presentation. So, fairly, the model is causal. So the model Fairly, the model is causal. So the model is causal by principle, because we already know that, because there is knowledge that we already have, that this protein activate the other protein, and activate the other protein, and the final protein is a transcription factor that describes something that does something, right? In that case, if I do something here, there is a consequence here. So the causality is built up in the model. A different thing is how do you distinguish in the case of this influence that we use with the machine learning model? How do you distinguish between correlation and causality? This is not mathematical, formal mathematical causality. But this is, these are, we're using models, we are using the sharp information, which are something which looks very much causality. So they're not formally mathematical causality, but they are machine learning models that involve causality. So it's not just a correlation. It's that when this is up, this is up. OK. But this might be up. So it's sort of check, since it's multi, I don't know how to explain this, since it's multi-input, multi-output, it removes the possibility of there are other different ways and can be just a correlation and the cause could be a different one. So the short answer is that we use machine learning models that to some extent consider causality. It's not just a correlation. Thank you, Joaquín. So next question is, what number of minimum studies are needed as prior information to make a decent digital twin and prediction? I don't know how to answer this because, I mean, by definition, what we are doing is not exactly a digital twin. it depends very much. When we do a prediction of what would happen in this condition that I mentioned at the beginning, so we have the model and you can simulate, you have a condition, you have these genes up, these genes down and you can say, okay, what would happen if I remove the GIF, or if I put the GIF inactive. This is a comparison one-to-one, and there is no test behind. So simply to detect where, because I mean, I saw you very simple, very simple sequence to illustrate that, but there are much more complex. So what we do is to say, okay, what would happen if I make an intervention here? And then you have some circuits which are with chains, right? So it's not when we are within the model, we are not when we are within the model we are not we are not making any type of test we are just pointing out the places with the system chains and then we go for a different type of confirmation that could be experimental or could be other type of confirmation if That could be experimental, or it could be all a type of confirmation. If you were more thinking of, for example, the cases of the survival, and say, okay, is this function involved in survival or not? Then it depends very much on the case. Because, depends very much on the case because in some cases, you know, what's happening in biologists and mainly in medicine, you don't have the number of patients that you like and then when you draw the confidence interval you say, okay, this patient died more but it's not significant because the value say okay this patient died more but it's not significant because the the the confidence intervals overlap and then it's very very i mean again a short answer is it depends very much on what you want to test exactly like in many other scenarios okay thank you for Joaquín. I'm going to share my screen with some announcements. We'll stay, if Joaquín has the time, to answer all the questions so you can see them on the video afterwards if you need to run to another meeting or so. Our next webinar will be next month on the 5th of October and we'll have three of the participants of the summer school who won the poster prizes and we'll have three of the participants of the summer school who won the poster prizes and will be presenting the research on cancer, tailored cancer therapy, high-grade series of ovarian cancer and communication in the tumor microenvironment. This will be one month. If you want more information and to register, go to the PERMED-CIE website. And I also have an announcement on an workshop on computational models of life from molecular biology to digital twins. So more on digital twins. This is going to be at the end of November, close to Barcelona, northeast of Barcelona. And you still have time to send an abstract before the 18th of September. And then the registration is on the 16th. So now, Joaquin, we continue with the questions. So there's a comment there, this is very interesting. Thank you for all this work, congratulations to the team. How do you see this being adapted to multi-omics dataset of the same set of participants? of the same set of participants? Yeah, I mean, actually we have, I have not mentioned that, but we have other approach for this. We have several papers published on that. So we look for activity at the end, right? So we think that you have gene expression plus mutation, you recapitulate a lot of other omics, right? For example, you may have a copy number and you have the gene over a lot of copies and then you will have higher expression. Then what we do is to combine the expression plus the mutations, which is the activity of the gene and the functionality of the gene. So a gene can be active, but can have, for example, a frenzied mutation, so the protein is not active. Then we remove from the model the protein. How can you extrapolate that if you don't have this information? For example, you have, instead of having gene expression, you have methylation. You may, I mean, something that we do sometimes is to say, okay, that the part in which this methylation will affect, we take the normal situation and we go down, go the gene down or up, depending on the level of methylation. I mean, you can do some tricks in which you can reflect this multi-omic data in terms of gene expression and gene integrity, right? So at the end, we play with these two concepts. Obviously, there are, I mean, this is just to see where does the change affect. We cannot be sure that this change, the degree of affectation is real, but it's something that we do. Thank you. So we'll go to the next one. Can you please give details on what is considered a circuit? How is the line drawn between which nodes are part of a circuit and which nodes are not, considering that nodes may have a large amount of edges? You visualize, for example, a signaling circuit. You have proteins, and the proteins are connected by arrows, right? The arrows are directional. It's not like, I mean, this is for signaling circuits. It's not for metabolism. Metabolism, the arrows are not directional. You can go in one direction or the other direction. Now we have the signal traveling in one direction only. So you have these arrows. Then you have proteins which has an arrow pointing to them and no arrow. And typically these proteins are annotated with a function. So these are the effector proteins. And then what we do is to find all the effector proteins that do things in the field sequence and then go backwards, no matter how complex it is. And we found a protein which has an arrow to other but has not an arrow to itself, which is the primary receptor. So we travel from the effector to the receptor, or in many cases there are several receptors, there are many, many, many receptors. And it can be as complex as possible, but at the end what you do is like taking the the end of the of the circuit say okay you go backwards and what happens is that many of the circuits share protein with other circuits and this i mean this is completely normal and many times what happens is that you say, okay, when you see the activity of the genes in the pathway, it make no sense. Then you need to write circuits. You say, okay, now it makes sense because you have enlightened only parts of this pathway which correspond to a circuit. And this is what we do. So we go from the effector backwards until the receptors. In some cases, the receptor, all the receptor does it. Okay, it's one big circuit. Like many cases, there are like, I show a slide in the word decomposed. So at the end what we, do you remember one sort of animation? You can show it if you want. Yeah, let me go for it. So the beginning. I think this one, can you see? This is a very small pathway, but what you can see is these proteins and this is square, this is square in the function annotated. So all these proteins which have a square, this square is not part of the sequence. This square is annotated by us and this is the function. So all these proteins which has this square, which is the label, are final proteins, right? Are effectors. And then what we do is to get, okay, this is an effector, and you go from the effector backwards until the, well, I mean, here, exactly. Exactly. Until the receptors uh you see here can you see my the mouse yes we see the mouse okay so you see okay what happened with this protein because it's connected here but you you can see that this protein is connected not from here to here, but from here to here. So what we do is, okay, arrows that point here, arrows that point this, this and this, arrows that point here, here, here and here, arrows that point this one, no arrows. There are no other protein pointing here. Arrows that point this one, no arrows. There are no other protein pointing here. So for this circuit, if the signal is going here, it goes there to this function. Obviously, it can go through this way, but this is a different circuit. So at the end, So at the end, mathematically it's very easy. You only look for, you go to the last protein and you look for things that are pointing to you. Until you find somebody who is not pointing to anybody. And this is the same thing. Thank you very much for that detailed explanation. We have a few more questions. So there's another, thanks for sharing. What would you say to people that believe that there are many variables that we are not taking into account in this type of modeling and research, that the puzzle is not completed until we integrate the extra layers? For example, this extracellular matrix? Well, I mean, what I have been saying all the time that this is not perfect. This is what we can do in a very simple way. And this is the little, the little, uh, uh, digital twin. There is a lot of detail that is not, is not here. This is, this is true. I tell you that in cancer, for example, this type of simulation works very well because many of the things that happen in cancer is a matter of signaling. So in that case, it works very well. For example, in rare diseases, there are many rare diseases that have nothing to do with signaling. And then we have a lot of difficulties to make the simulation. nothing to do with sinalit and then we have a lot of difficulties to make the simulation. So as you say the puzzle is not complete and we can say only a part of the reality. As soon as we have more arrows and more relationships, we can model more relationships, we will have more reliable digital twins. Vivian, who is the one who asked that question, raise the hand. Vivian, do you want to talk? Yes. Can you listen to me? Yes. Yes. Yeah. Great. Yeah. I asked the question because I have faced this challenge with experimental people. And they're not very convinced that the modeling that we are doing is the right, on the right track, or we're asking the right questions. They feel like we are taking too broad considerations. And I just ask you as an expert, what would be the advice, how to deal with this? Because I believe in this, but many do not. Yeah, well, but this is, I mean, we have found this problem many times. So experimentalists, I mean, when you invade the ecosystem of other people, so they are very protective. So many experimental people don't believe that, mainly because they don't understand, firstly, and secondly, because they feel that you can do without simulation what they do with modes of experimentation, which is a bit depressing for them, right? So, I mean, there is evidence, there are papers that demonstrate that that works. We have published a number, not a lot, but maybe more than 10 papers, 15 papers, and some of these papers have validation. So, to me, it's a demonstration that that works with limitations, and typically what I would recommend you is to try to use some type of persuasion and reverse psychology in a way where they feel that they are at the end they are going to give you the the link that lacks here or something like this but it's more a problem of self-confidence of experimentalists than a real problem because this model i mean with the limitation that they have a lot of limitations. For example, in cancer, they work fantastically well because they predict many things but very, very accurately because cancer is mainly a disease of signaling, right? Then, I mean, just collect evidences of something like this. Thank you. Thank you. So we have another, thanks for the talk, very interesting. According to you, what is the proportion of edges in biological networks that relate to indirect and real direct relationships? And how do you think this affects the predictions of your models? So, Sergián, what is this? What is the proportion, Jean-Baptiste Mournot? What is the proportion of edges in biological networks that relate to indirect and real direct relationships? And how do you think this affects the predictions of your models? Direct and real direct. You mean that they are real direct relationships? And how do you think this affects the prediction of your models? Direct and real direct? You mean that they are real or not? Or... You mean how do we trust in these edges? Or... You mean that... I would say that in the... I mean, something which is very easy is when you compare different. Wait a second, because Jean-Baptiste has raised the hand. So maybe he can clarify. Hi, can you hear me? Yes. Yes. Awesome. Yeah, the question is related to when we're doing like a causal inference or we have some causal relationships. We can have like causal relationships, but these relationships are not physical. You know, we just know that one activity is like the appearance of others, of the activity of downstream different entities, but it's not directly related. There's no physical relationships between the two. So the question is, do we know today what is the proportion of real, let's say, physical relationships and the relationships that are causal, but not, let's say, physical. I don't know if it makes sense. Yeah, yes. We take, I mean, we use mainly for this, we use Keg. In the case of Keg database, there is a quite clear relationship, quite clear description of the relationship between the proteins and we separate in two classes um activation and inhibition and the relation is i mean there are lists uh phosphorylation imagination etc etc some of them are physical interaction and some of them maybe they are not physical interactions, but at the end they fall for a date or whatever. So I don't care if they are physical or not. I only care if the type of interaction, if the interaction is an activation, so it's transmitting the signal or it's not transmitting the signal. How reliable are they? When you compare different databases, there is a core which is constant among them. So probably these are the more reliable. And in signaling, there is a very large part of the signaling which is probably is quite solid in terms of real reliability. So you can trust very much this. There are other, I mean, you go to Reactome or you go to other databases or to wiki pathways, so there are some suggested relationships that maybe they're not real. So probably there will be a process of curation in time. But I think you can trust in what is in the core of the signaling, you can trust most of the relationship and you can try to direct causal, no matter if they are physical or not. This is my opinion. The number, I don't know. Okay. So thank you very much. I hope, Jean-Baptiste, that that answers the question. Yes, thank you. Then go for the last two ones, and then they will all be in the recording. So we have one more comment about thanks for the nice talk and this is a bit of a technical question. Why SHAP in particular for the machine learning model? Did you try any other explainability approaches? I will tell you. We use SHAP because it's the favorite of my postdoc who was doing that. We are planning to use other causal approaches, but we have used that actually the most popular as well. So we wanted to go for something over which there is a consensus, many people use that, but there is no vision beyond that. Okay, thank you. And then the last one is, thanks for the interesting seminar. What would be the differences in applying this modeling in studies with bacterial species as variables, not genes? Would that be possible and equally promising? What should have to be changed? That could be, it could be that. I mean Simply, you should have a different map, which would be the signaling within the bacteria, which is far simpler than the human signaling. And it is possible. Actually, not on the signaling side, but on the metabolic side, Not on the signaling side, but on the metabolism side, there are lots of studies with models of metabolism having made in bacteria precisely because they are simpler. So you can have a look at the metabolic modeling in bacteria and you will get a lot of publication but i guess this doesn't i read the question and i know almost nothing about it it's not about one bacteria what happens in a bacteria but if you have several bacterial bacterial species together because it says using the bacterial species as variable instead of the genes so i guess it's you have one bacteria here that interacts with this other bacteria and interacts with that other bacteria. Yeah, I misunderstood. I mean, why not? But I mean, this model is for transmission of information. If to some extent this bacteria, this bacterial species do something that can be assimilated to transmission of information, meaning that this bacteria needs the other bacteria because otherwise it wouldn't survive, whatever. You can make some type of network among all the different types of bacteria. Probably you could do something like this. Maybe it's not the best approach, but it would be interesting to think of that. Okay, thank you very much Joaquín. Thank you very much, Joaquín. Thank you very much to everybody who has stayed with us until now. And yeah, see you next time. The video will be available, as I said, and you'll receive an email about that. So thank you. Have a nice day. Bye. Thank you. Bye. | Digital twins for drug repurposing by integrating mathematical modelling and machine learning | 4,682 | PerMedCoE | 20230913 | The rapid development of computational approaches has revolutionised the field of drug discovery and repurposing. Here I will show a practical application of digital twins built by the synergistic combination of mathematical modelling of the cell and machine learning techniques to identify potential drug candidates for repurposing and subsequently validate their efficacy using real-world data. After discussing the inherent challenges associated with traditional drug discovery methods and the need for alternative approaches I will introduce the concept of digital twins, virtual replicas of real biological systems, which leverage mathematical models to capture the intricacies of cellular behaviour and response to drugs. By integrating machine learning algorithms, these digital twins can be trained on vast datasets to predict drug effectiveness and identify promising candidates for repurposing. Moreover, I will emphasise the importance of validating these predictions with real-world data, obtained from biomedical databases, to ensure reliability and efficacy. The presentation aims to showcase the power of integrating mathematical modelling, digital twins, and machine learning for drug repurposing, paving the way for accelerated and cost-effective drug discovery processes.
Speaker: Joaquín Dopazo, Andalusian Platform for Computational Medicine, Fundación Progreso y Salud, Hospital Virgen del Rocío (Sevilla, Spain)
All materials are free cultural works licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) license, except where further licensing details are provided. | 2024-09-03T14:04:59.920628 |
https://www.youtube.com/watch?v=09uDCmLzYHA | . Welcome to this tutorial on building a customized question and answering chatbot application in Cloud using open source and NVIDIA tools. Today, we will walk you through creating a powerful GPU accelerated QnA-based text assistant that can answer question based on your proprietary data. Let's take a quick look at what we will be building. As you can see our chatbot can quickly process and responds to questions about the data provided. Now let's try to break down on how we build this end-to-end using NVIDIA and open source tools. Before we dive into the specifics of our chatbot, it's important to understand that the ecosystem that we are working with. NVIDIA provides a full stack ecosystem for large language models that integrate seamlessly with popular open source tools. This ecosystem offers flexibility and performance, allowing developers to switch between different components as needed. NVIDIA not only provides proprietary tools, but also contributes significantly to the open source community. Tools like Nemo for building and training LLMs and Triton for inference servings are prime examples of NVIDIA's contributions to open source. These tools integrate well with broader AI ecosystem, providing developers with powerful GPU-accelerated options for building LLM applications. Nemo goes beyond just a training framework. It includes several open source components that enhance its capabilities for developers. Two of the notable Nemo components NEMO Data Curator, which helps simplify the process of data curation. It helps extract, deduplicate, and filter information from large amount of unstructured data, ensuring high quality and relevant datasets for training. The second component is NEMO Guardrails and this helps implement safety measures and control for model output. It also allows developers to add guardrails that prevents inappropriate responses, enhancing the reliability and safety of AI application. You can learn more about this by going on to github.com slash nvidia slash nemo. Now let's quickly understand about Retrieval Augmented Generation or RAG, the core technology behind it. RAG enhances the LLMs by giving them access to external and up-to-date knowledge. Here's a brief overview. We start by our enterprise data or proprietaryety data. This data is then processed into a vector database. And the way it works is you take the data or text and split them into chunks. And then those are converted into vector embeddings, which are then stored in a vector database for quick retrieval. So when a user submits a query at inference time, it is converted into an embedding, and then it is used to search through the vector database that was created using the enterprise data. The most relevant information is then retrieved and ranked. So both the query and retrieved information are then fed into the large language model as in context, which generates a response based on its pre-trained knowledge and the specific relevant information from our database. So this process allows the chatbot to provide responses that are both general and specific to our existing data or enterprise data combining retrieval and generation for more accurate and contextual answers. Now let's look how we implement this pipeline using NVIDIA tools and the other open source components. So let's look at our pipeline and first we have some paper from archive which contains some research about latest LLM technologies. This data about research paper fuels our chatbot knowledge base. We'll be using GPU accelerated tools for faster processing and more efficient handling of large dataset. Specifically, we'll use NVIDIA inference microservices or NIMS for hosted LLM and embedding model and Milvus for GPU accelerated vector search. For embedding model, we'll use NV embed, which leverages NVIDIA's GPU acceleration to quickly transform text into high dimensional vectors. For vector storage and similarity search, we'll use Milvus, a GPU accelerated vector database that can handle large-scale similarity search with impressive speed. It's important to note that using GPU indexing and querying requires numerous query vectors or exceptionally high request pressure for maximum throughput and reduce latency compared to CPU indexing. So the GPU acceleration in NV embed and Milvus significantly enhances the performance of our chatbot, allowing for faster indexing and querying for our archive papers. For actual question answering, we'll use a large language model, specifically LAMA3 7db instruct model, which will be accessed through NVIDIA's NIM API. This gives us the power of state-of-the-art language models with the speed of GPU acceleration, resulting in real-time responses and improving the user experience. Finally, we use LAMA index to orchestrate the entire process from query processing to retrieving relevant information and generating responses. We'll use Gradio to create a user-friendly interface for our chatbot. Now let's dive into the code. Before we dive deep into our React QnA chat application code, let's generate an API key for NEMS by going on to build.Nvidia.com. And we'll select Lama3 as our LLM model for the application. Here you get various options to try out the model or just boilerplate code to get this model or optimized NIMS onto your application. But here we are trying to generate an API key. So just click generate and there you have it. Just copy this API key and you can use wherever you are setting the environment variable. But in my case, I'll set this on my terminal. Assign that API key to a variable name,vidia-api-key as an environment variable and you can do that with this command and press enter. Now let's make sure we have all the necessary packages installed. So I've included all the packages in the requirement.txt file and you can install with the pip installation method, which is pip install hyphen r and requirements y. It will start installing all the dependencies before we start writing the code. Now this is done. Let's move on to our code editor. Now let's dive into the code. Before we start, let's import all the necessary libraries and configuration for our application. We are using DRADIO for the user interface, LAMA index for document processing, and querying and NVIDIA's embedding and language models through NIM API. Let's also check if NIM API key is set as environment variable or not. Then we'll set up some global variables to store our index and query engine. These will be initialized when the documents are loaded. Now let's create function to handle document loading. The get files from input function extracts the file names from the file objects, and the load document function reads the file, creates an index using a mills vector store and sets up a query engine. Next, we'll implement the chat functionality. The chat function handles basic question answering, whilst the stream response function enables the streaming of responses, providing a more interactive experience. Now let's set up a Gradio interface. We'll create a file input for document uploading and then button for loading documents, a chat interface for asking question and then a clear button to clear the chat history. Now we'll connect our interface component to their respective function. The load button will trigger the document loading. The message input will initialize the streaming response and the clear button will reset the chat history. Finally, we'll add the code to launch our GradU application when the script is run directly. Now let's move on to the terminal to run this app. To launch our app, simply to run this app. To launch our app, simply type python app.py on terminal and it will start the web server where you can access the app. So here you can see it is this web app is running on local URL, which is local host with the port number 7860. Let's try to go onto this URL. Now that we have our RAC Q&A chat application up and running, let's demonstrate how it works. First, we'll import some documents. For this demo, I have research papers from NVIDIA. To import these documents, I'll click anywhere in this area to select my files. As you can see, I can select multiple files at once and in this case I'll choose both the papers. With our file selected, I'll click on the Load Documents button. This triggers our document loading process which reads the file, processes their content, and creates an index for quick retrieval. Keep in mind that creating this index isn't necessary with every application load unless you aim to incorporate new data. Great, we can see the confirmation message that our documents have been successfully loaded. Now we are ready to start asking questions about the content of these papers. I'll now ask what's the difference between Dora and Laura. Great. And there you have it. We have created a powerful drag Q&A chat application which is hosted on Cloud with streaming capabilities. Users can now upload documents, ask question, and receive answer based on the content of those documents in real time. Feel free to experiment with these documents and questions to see how it performs. In this tutorial, we have covered how to build a customized Q&A chatbot hosted on cloud using GPU accelerated components from NVIDIA and open source tools. Feel free to experiment with the provided script and explore further. For more information and resources, check the link in the description below. | Building LLM Assistants with LlamaIndex, NVIDIA NIM, and Milvus | LLM App Development | 721 | NVIDIA Developer | 20240826 | In this video, we dive into the essentials of creating a Q&A chatbot. Here’s a quick overview of the process:
1. Embedding Creation: Learn how to use NVIDIA NIM microservices to transform your text into high-quality embeddings.
2. Vector Database: Explore the power of GPU-accelerated Milvus for efficient storage and retrieval of your embeddings.
3. Inference with Llama3: Find out how to leverage the NIM API’s Llama3 model to handle user queries and generate accurate responses.
4. Orchestration with LlamaIndex: See how to integrate and manage all components seamlessly with LlamaIndex for a smooth Q&A experience.
Learn more today with this notebook: https://nvda.ws/4dRMwQe
Join the NVIDIA Developer Program: https://nvda.ws/3OhiXfl
Read and subscribe to the NVIDIA Technical Blog: https://nvda.ws/3XHae9F
LlamaIndex, NVIDIA NIM, Code Review, Milvus, LLM Assistant | 2024-09-03T14:51:23.892880 |
https://www.youtube.com/watch?v=T9aRN5JkmL8 | Basically, this entire roundtable session here is just going to be focused mainly on prompt engineering. Variety of perspectives at this table around prompting from a research side, from a consumer side, from an enterprise side. And I want to just get the whole wide range of opinions because there's a lot of them. And just kind of open it up to discussion and explore what prompt engineering really is and what it's all about. And we'll just take it from there. So maybe we can go around the horn with intros. I can kick it off. I'm Alex. I lead developer relations here at Anthropic. Before that, I was kind of technically a prompt engineer at Anthropic. I worked on our prompt engineering team and did a variety of roles spanning from a solutions architect type of thing to working on the research side. So with that, maybe you can hand it over to David. Heck yeah. My name is David Hershey. I work with customers mostly at Anthropic on a bunch of stuff. Technical, I help people with fine-tuning, but also just a lot of the generic things that make it hard to adopt language models, so prompting and just how to build systems with language models, but spend most of my time working with customers. Cool. I'm Amanda Askell. I lead one of the fine tuning teams at Anthropic, where I guess I try to make Claude be honest and kind. Yeah. My name is Zach Witten. I'm a prompt engineer at Anthropic. Alex and I always argue about who the first one was. He says it's him. It's me. Contested. Yeah. I used to work a lot with individual customers, kind of the same way David does now. And then as we brought more solutions architects to the team, I started working on things that are meant to raise the overall levels of ambient prompting in society, I guess, like the prompt generator and the various educational materials that people use. Nice. Cool. Well, thanks guys for all coming here. I'm going to start with a very broad question, just so we have a frame going into the guys for all coming here. I'm going to start with like a very broad question, just so we have a frame going into the rest of our conversations here. What is prompt engineering? Why is it engineering? What's prompt really? If anyone wants to kick that off, give your own perspective on it. Feel free to take the reign here. I feel like we have a prompt engineer. Yeah, exactly. That's his job. We're all prompt engineers in our prompt engineer. Yeah, exactly. That's his job. We're all prompt engineers in our own form. One of us has a job. Yeah, maybe since it's in your time. One of us has a job, but the other three don't have jobs. I guess I feel like prompt engineering is trying to get the model to do things, trying to bring the most out of the model, trying to work with the model to get things done that you wouldn't have been able to do otherwise. So a lot of it is just clear communicating. I think at heart, talking to a model is a lot like talking to a person and getting in there and understanding the psychology of the model, which Amanda is the world's most expert person in the world. I'm going to keep going on you. Why is engineering in the name? I think the engineering part comes from the trial and error. One really nice thing about talking to a model that's not like talking to a person is you have this restart button, this giant go back to square zero where you just start from the beginning. What that gives you the ability to do that you don't have is a truly start from scratch and try out different things in an independent way so that you don't have interference from one to the other. Once you have that ability to experiment and to design different things, that's where the engineering part has the potential to come in. Okay. So what you're saying is as you're writing these prompts you're typing in a message to Cloud or in the API or whatever it is being able to go back and forth with the model and iterate on this message and revert back to the clean slate every time. That process is the engineering part. This whole thing is prompt engineering all in one. There's another aspect of it too, which is integrating the prompts within your system as a whole. David has done a ton of work with customers integrating. A lot of times it's not just as simple as you write one prompt and you give it to the model and you're done. In fact, it's anything but. It's way more complicated. Yeah, I mean, I kind of think of prompts as the way that you program models a little bit. That makes it too complicated, because I think Zach is generally right that just talking clearly is the most important thing. But if you think about it a little bit as like programming a model, you have to like think about where data comes from, what data you have access to. So like if you're doing RAG or something, like what can I actually use and do and pass to a model? You have to like think about trade-offs in latency and how much data you're providing and things like that. Like there's enough systems thinking that goes into how you actually build around a model. I think a lot of that's also the core of why it maybe deserves its own carve-out as a thing to reason about separately from just a software engineer or a PM or something like that. It's kind of its own domain of how to reason about these models. Is a prompt in this sense then natural language code? Is it a higher level of abstraction or is it kind of a separate thing? I think like trying to get too abstract with a prompt is a way to like overcomplicate a thing because I think we're going to get into it, but more often than not, the thing you want to do is just like write a very clear description of a task, not try to like build crazy abstractions or anything like that. Um, but that said, like you are compiling the set of instructions and things like that into outcomes a lot of times. And so precision and, and like a lot of the things you think about with programming about like version control and managing what it looked like back then when you had this experiment and, and like tracking your experiments and stuff like that. That's all You know just equally important to code so It's it's weird to be in this paradigm where like written text like a nice essay that you wrote is Something that's looked like the same Thing as code. Yeah But it kind of is true that now we write essays and treat them like code. And I think that's actually correct. Yeah. Okay. Interesting. So maybe piggybacking off of that, we've kind of loosely defined what prompt engineering is. So what makes a good prompt engineer? Maybe Amanda, I'll go to you for this since you're trying to hire prompt engineers, more so in a research setting. What does that look like? What are you looking for in that type of person yeah good question uh i think it's a mix of like zach said sort of like clear communication so the ability to just like clearly state things like clearly understand tasks um think about and describe concepts really well that's like the kind of writing component, I think. I actually think that being a good writer is not as correlated with being a good prompt engineer as people might think. So I guess I've had this discussion with people because I think there's some argument is like, maybe you just shouldn't have the name engineer in there. Like, why isn't it just like writer? I used to be more sympathetic to that. And then I think now I'm like, what you're actually doing, like people think that you're writing like one thing and you're kind of like done. And then I'll be like, you know, to get like a semi-decent prompt, like when I sit down with the model, I'll like, you know, like earlier I was like prompting the model and I was just like, in a 15-minute span, I'll be sending like hundreds of prompts to the model. It's just back and forth, back and forth, back and forth. And so I think it's this like willingness to like iterate and to like look and think what is it that like was misinterpreted here, if anything, and then fix that thing. So that ability to kind of like iterate. So I'd say clear communication, that ability to iterate. I think also thinking about ways in which your prompt might go wrong. So if you have a prompt that you're going to be applying to like say 400 cases, it's really easy to think about the typical case that it's going to be applied to, to see that it gets the right solution in that case and then to like move on. I think this is a very classic mistake that people made. What you actually want to do is like find the cases where it's unusual. So you have to think about your prompt and be like, what are the cases where it's unusual so you have to think about your prompt and be like what are the cases where it'd be really unclear to me what i should do in this case so for example you have a prompt that says i'm going to send you a bunch of data i want you to extract all of the rows where someone's name is like is i don't know starts with the letter j and then you're like well i'm going to send it like a data set where there is no such thing like there is no such name that starts with the letter j i'm going to send it something that's not a data set. Just like I might also just send an empty string. Like these are all of the cases you have to try because then you're like, what does it do in these cases? And then you can be like, you can give it more instructions for how it should deal with that case. I work with customers so often where like you're an engineer, you're building something and there's a part in your prompt where a customer of theirs is going to write something. Yeah. And they all think about these really perfectly phrased things that they think someone's going to type into their chatbot. In reality, it's like they never used the shift key and every other word is a typo. They think it's Google. And there's no punctuation. They just put in random words with no question. So you have these evals that are these beautifully structured, what their users ideally would type in, but being able to go the next step to reason about what your actual traffic is going to be like, what people are actually going to try to do. That's a different level of thinking. One thing you said that really resonated with me is reading the model responses. In a machine learning context, you're supposed to look at the data. It's almost a cliche, look at your data. And I feel like the equivalent for prompting is look at the model outputs. Just reading a lot of outputs and reading them closely. Dave and I were talking on the way here. One thing that people will do is they'll think step by step in their prompt and they won't check to make sure that the model is actually thinking step by step because the model might take it in a more abstract or general sense rather than like, no, literally you have to write down your thoughts in these specific tags. So, yeah, if you aren't reading the model outputs, you might not even notice that it's making that mistake. Yeah, that's interesting. There is that kind of weird theory of mind piece to being a prompt engineer where you have to think almost about how the model is going to view your instructions. But then if you're writing for an enterprise use case too, you have to think about how the user is going to talk to the model as you're the third party sitting there in that weird relationship. On the theory of mind piece, one thing I would say is it's so hard to write instructions down for a task. It's so hard to untangle in your own brain all of the stuff that you know that Quad does not know and write it down. It's just an immensely challenging thing to strip away all of the assumptions you have and be able to very clearly communicate the full fact set of information that is needed to a model. I think that's another thing that really differentiates a good prompt engineer from a bad one. A lot of people will just write down the things they know, but they don't really take the time to systematically break out what is the actual full set of information you need to know to understand this task. That's kind of a very clear thing I see a lot is prompts where it's just like, it's conditioned. The prompt that someone wrote is so conditioned on their prior understanding of a task that when they show it to me, I'm like, this makes no sense. None of the words you wrote make any sense, because I don't know anything about your interesting use case. But I think a good way to think about prompt engineering in that front and a good skill for it is just can you actually step back from what you know and communicate to this weird system that knows a lot but not everything about what it needs to know to do a task. Yeah, the amount of times I've seen someone's prompt and then being like, I can't do the task based on this prompt and I'm human level and you're giving this to something that is worse than me um and expecting it to do better and i'm like yeah like yeah there is that interesting thing with like i mean current you know current models don't really do a good job of asking good probing questions in response like a human would yeah if i'm giving zach directions on how to do something, he'll be like, this doesn't make any sense. What am I supposed to do at this step or here and here? Model doesn't do that, right? So you have to, as yourself, think through what that other person would say and then go back to your prompt and answer those questions. You could ask it to do that. I guess that's another step. I was going to say, one of the first things I do with my initial prompt is like, I'll give it the prompt. And then I'll be like, I don't want you to follow these instructions. I just want you to tell me the ways in which they're unclear or any ambiguities or anything you don't understand. And it doesn't always get it perfect. But it is interesting that like that is like one thing you can do. And then also sometimes if people see that the model makes a mistake, the thing that they don't often do is just ask the model. So they say to the model, you got this wrong. Like, can you think about why? And can you maybe like write an edited version of my instructions that would make you not get it wrong? And a lot of the time, like the model just gets it right. The model's like, oh yeah, here's what was unclear. Here's like a fix to the instructions. And then you put those in and it works. Okay. I'm actually really curious about this personally almost. Is that true that that works? Like, does the model, is the model able to spot its mistakes that way? Like, when it gets something wrong, you say, like, why did you get this wrong? And then it tells you maybe something like, okay, how could I phrase this to you in the future so you get it right? Is there an element of, like, truth to that? Or is that just kind of a hallucination on the model's part around what it thinks its limits are? I think if you like explain to it what it got wrong, it can identify things in the query sometimes. I think this varies by task. This is one of those things where I'm like, I'm not sure what percentage of the time it gets it right, but I always try it because sometimes it does. Yeah. You learn something. Yeah. Anytime you go back to the model or back and forth with the model, you learn something about what's going on. I think you're giving away information if you don't at least try. That's interesting. Amanda, I'm going to keep asking you a few more questions here. One thing maybe for everybody watching this is we have these Slack channels at Anthropic where people can add Claude into the Slack channel and then you can talk to Claude through it. And Amanda has a Slack channel that a lot of people follow of her interactions with Claude. And one thing that I see you always do in there, which you probably do the most of anyone at Anthropic, is use the model to help you in a variety of different scenarios. I think you put a lot of trust into the the model in like the research setting. Curious how you've like developed those intuitions for when to trust the model. Is that just a matter of like usage experience or is it something else? I think I don't trust the model ever. And then I just hammer on it. So I think the reason why you see me do that a lot is that that is like me being like, can I trust you to do this task? Cause's some things you know models are kind of strange if you go slightly out of distribution like you know like you just go into areas where they haven't been trained or they're kind of unusual sometimes you're like oh actually you're much less reliable here even though it's a fairly like simple task um I think that's happening less and less over time as models get better but you want to make sure you're not in that kind of space. So, yeah, I don't think I trust it by default. But I think in ML, people often want to look across really large data sets. And I'm like, when does it make sense to do that? And I think the answer is when you get relatively low signal from each data point, you want to look across many, many data points because you basically want to get rid of the noise. With a lot of prompting tasks, I think you actually get really high signal from each query. And so if you have a really well-constructed set of a few hundred prompts, that I think can be much more signal than thousands that aren't as well-crafted. And so I do think I can trust the model if I look at a hundred outputs of it, and it's really consistent. And I know that I can trust the model if I look at 100 outputs of it and it's really consistent. And I know that I've constructed those to basically figure out all of the edge cases and all of the weird things the model might do, strange inputs, etc. I trust that probably more than a much more loosely constructed set of several thousand. I think in ML, a lot of times the signals are like numbers, you know? Like, did you predict this thing right or not? And it'd be like kind of like looking at the log probes of a model and trying to like intuit things, which you can do, but it's like kind of sketchy. I feel like the fact that models output more often than not like a lot of stuff, like words and things, like there's just fundamentally so much to learn between the lines of what it's writing and why and how, and that's part of what it is. It's like, it's not just, did it get the task right or not? It's like, did it, how did it get there? Like, how was it thinking about it? What steps to go through? You learn a lot about like what is going on, or at least you can try to like get a better sense, I think. But that's where a lot of information comes from to me is like by reading the details of what came out, not just sort of the result. I think also the very best of prompting can kind of make the difference between a failed and a successful experiment. So sometimes I can get annoyed if people don't focus enough on the prompting component of their experiment because I'm like, this can in fact be like the difference between like one percent performance in the model or 0.1 percent um in such a way that your experiment doesn't succeed if it's a top five percent model performance but it does succeed if it's a top one percent or top 0.1 percent and then i'm like if you're going to spend time over like coding your experiment really nicely but then just like not spend time on the prompt, that doesn't make sense to me. That can be the difference between life and death of your experiment. Yeah, and with the deployment too. Yeah, it's so easy to, oh, we can't ship this, and then you change the prompt around and suddenly it's working. It's a bit of a double-edged sword, though, because I feel like there's a little bit of prompting where there's always this mythical better prompt that's going to solve my thing on the horizon yeah i see a lot of people get stuck in the like mythical prompt on the horizon that if i just like keep grinding keep grinding like just not it's like never bad to grind a little bit on prompt like you learn as we've talked like you learn things but it's one of the scary things about prompting is that there's like this whole world of unknown what heuristics do you guys have for like when something like is possible versus like not But it's one of the scary things about prompting is that there's this whole world of unknown. What heuristics do you guys have for when something is possible versus not possible with the perfect prompt, whatever that might be? I think I'm usually checking for whether the model kind of gets it. So I think for things where I just don't think a prompt is going to help, there is a little bit of grinding, but often it just becomes really clear that it's not close or something. I think that if, yeah, I don't know if that's a weird one where I'm just like, yeah. If the model just clearly can't do something, I won't grind on it for too long. This is where you can evoke how it's thinking about it and you can ask it how it's thinking about it and why. And you can kind of get a sense of like, is it thinking about it right? Are we even in the right zip code of this being right? And you can get a little bit of a kneeling on that front of like, at least I feel like I'm making progress towards getting something closer to right. Whereas there are just some tasks where you really don't get anywhere closer to... It's thought process, just like every tweak you just like veers off in a completely different, very wrong direction. And I just tend to abandon those. I don't know. Those are so rare now, though. Yeah, they are. And I get really angry at the model when I discover them because that's how rare they are. I get furious. I'm like, how dare there be a task that you can't just do if I just push you in the right direction. Yeah. I had my thing with Claude plays Pokemon recently, and that was, like, one of the rare times where I've literally... Can you explain that? Explain that just for people? I think that's really cool. I did, like, a bit of an experiment where I, like, hooked Claude up to a Game Boy emulator and, like, tried to have it play the game Pokemon Red, like the OG Pokemon. And it's like, you know, think what you want to do. And it could write some code to press buttons and stuff like that. Pretty basic. And I tried a bunch of different very complex prompting layouts. But you just get into certain spots where it just really couldn't do it. So showing it a screenshot of a Game Boy, it just really couldn't do. And it like showing it a screenshot of a Game Boy, it just really couldn't do. And it just like so deeply, because I'm so used to it being like able to do something mostly. And so I spent like a whole weekend trying to write better and better prompts to get it to like really understand this Game Boy screen. And I got like incrementally better so that it was only terrible instead of like completely no signal. Like you could get from like no signal to some signal. But it was like, I don't know, at least this was like elicited for me. Once I put a weekend of time in and I got from no signal to some signal, but not nowhere close to good enough, I'm like, I'm just going to wait for the next one. Yeah. I'm just going to wait for another model. I could grind on this for four months and the thing that would come out is another model and that's a better use of my time to just sit and wait to do something else in the meanwhile. I mean, that's an inherent tension we see all the time, right? And maybe we can get to that in a sec, Zach, if you want to go. Something I liked about your prompt with Pokemon, where you got the best that you did get, was the way that you explained to the model that it is in the middle of this Pokemon game and here's how the things are going to be represented. I think you actually represented it in two different ways, right? I did. So what I ended up doing, it was obnoxious, but I superimposed a grid over the image. And then I had it describe each segment of the grid in visual detail. And then I had it like reconstruct that into an ASCII map. And I gave it like as much detail as I could. Like the player character is always at location four comma five on the grid and stuff like that. And you can like slowly build up information. I think it's actually a lot like prompting, but I hadn't done it with images before, where sometimes my intuition for what you need to tell a model about text is a lot different from what you need to tell a model about images. I found a surprisingly small number of my intuitions about text have transferred to image. I found that multi-shot prompting is not as effective for images and text. I'm not really sure. You could have theoretical explanations about why. Maybe there's a few of it in the training data, a few examples of that. Yeah, I know when we were doing the original explorations with prompting multimodal, we really couldn't get it to noticeably work. You just can't seem to improve Claude's actual visual acuity in terms of what it picks up within an image. If anyone here has any ways that they've not seen that feature. But it seems like that's kind of similar with the Pokemon thing where it's trying to interpret this thing. No matter how much you throw prompts at it, it just won't pick up that Ash is in that location. I guess I have to be visceral about this. I could eventually get it so that it could most often tell me where a wall was and most often tell me where the character was. It'd be off by a little bit. But then you get to a point, and this is maybe coming back to knowing when you can't do it, it would describe an NPC. And to play a game well, you need to have some sense of continuity. Like have I talked to this NPC before? Without that, there's nothing you can do. You're just going to keep talking to the NPC because maybe this is a different NPC. I would try very hard to get it to describe an NPC. It's like, it's a person. They might be wearing a hat. They weren't wearing a hat. It's like, grind for a while, inflate it to 3000x and crop it to just the NPC. And it's like, I have no idea what this is. And it's like, I ground, like I showed it this like clear female NPC thing enough times and it just got nowhere close to it. And it's like, yeah, that's just, this is a complete lost cause. Wow. Okay. I really want to try this now. I'm just imagining all the things I would try. I don't know. I want you to imagine this game art as a real human and just describe to me what they're like. What do they look like as they look in the mirror? And then just see what happens. I tried a lot of things. The eventual prompt was telling Quad it was a screen reader for a blind person, which I don't know if that helped, but it felt right. So I kind of stuck with that. That's an interesting point. I actually want to go into this a little bit because this is one of the most famous prompting tips, right, is to tell the language model that they are some persona or some role. I feel like I see mixed results. Maybe this worked a little bit better in previous models and maybe not as much anymore. Amanda, I see you all the time be very honest with the model, like about the whole situation. Like, oh, I am an AI researcher and I'm doing this experiment. I'll tell it who I am. Yeah. I'll give it my name, be like, here's who you're talking to. Right. Do you think that level of honesty, instead of like lying to the model or like forcing it to like, you know, I'm going to tip you $500. Is there one method that's preferred there or just what what's your intuition on that yeah i think as models are more capable and understand more about the world i guess i just don't see it as necessary to lie to them i mean i also don't like lying to the models just because you know i don't like lying generally but part of me is like if if you are, say, constructing, suppose you're constructing like an eval dataset for a machine learning system or for a language model, that's very different from like constructing a quiz for some children. And so when people would do things like, I am a teacher trying to figure out questions for a quiz, I'm like, the model knows what language model evals are. Like, you know, if you ask it about different evalsals it can tell you and it can give you like made up examples of what they look like because these things are like they understand them they're on their on the internet and so i'm like i'd much rather just target the actual task that i have so if you're like i want you to construct questions that look a lot like an evaluation of a language model it's just like it's that whole thing of clear communication i'm like that is in fact the task i want to do so why would i pretend to you that i want to do some unrelated or only like tangentially related task and then expect you to somehow do better at the task that i actually want you to do like we don't do this with like employees i wouldn't like go to someone that worked with me and be like you are a teacher like and you're trying to quiz your students. I'd be like, hey, are you making that eval? I don't know. So I think maybe it's like a heuristic from there where I'm like, if they understand the thing, just ask them to do the thing that you want. I see this so much. I guess to push back a little bit, I have found cases where not exactly lying, but giving it a metaphor for how to think about it could help in the same way that sometimes I might not understand how to do something and someone's like, imagine that you were doing this, even though I know I'm not doing it. The one that comes to mind for me is I was trying to have Quad say whether an image of a chart or a graph is good or not. Is it high quality? The best prompt that I found for this was asking the model what grade it would give the chart if it were submitted as a high school assignment. So it's not exactly saying, you are a high school teacher. It's more like, this is the kind of analysis that I'm looking from for you. The scale that a teacher would use is similar to the scale that I want you to use. But I think those metaphors are pretty hard to still come up with. The default you see all the time is finding some facsimile of the task, something that's a very similar-ish task, like saying you're a teacher. And you actually just lose a lot in the nuance of what your product is. I see this so much in enterprise prompts where people write something similar because they have this intuition that it's something the model has seen more of, maybe. It's seen more high school quizzes than it has LLM evals. And that may be true, but to your point, as the models get better, I think just trying to be very prescriptive about exactly the situation they're in I give people that advice all the time which isn't to say that I don't think like to the extent that it is true that like thinking about it the way that someone would grade a chart as like how they would grade a high school chart yeah maybe that's true but it's like awkwardly the shortcut people use a lot of times to try to get what happens so i'll try to give someone that i can actually talk about because i think it's somewhat interesting so like um writing you are like a helpful assistant writing a draft of a document right it's like it's not quite what you are like you are in this product so like tell me if you're writing like an assistant that's in a product like tell me i'm in the product tell me i'm like writing on behalf of this company i'm embedded in this product i'm the support chat window on that product yeah like your language model you're not a human that's fine like that but like just being really prescriptive about like the exact context about where something is being used. I found a lot of that because I guess my concern most often with role prompting is people use it as a shortcut of a similar task they want the model to do. And then they're surprised when Quad doesn't do their task right. But it's not the task. You told it to do some other task. And if you didn't give it the details about your task, I feel like you're leaving something on the table. So, yeah. I don't know. It does feel like a thing, though, to your point of as the models scale. Like, maybe in the past it was true that they only really had a strong understanding of elementary school tests comparatively, but as they get smarter and can differentiate more topics, I don't know, just, like, being clear. I find it interesting that I've, like, never used this prompting technique yeah like so like even like with like worse models and i still just don't ever find myself i don't know why i'm just like i don't find it very good essentially like interesting i feel like completion era models i like there was like a little bit of a mental model of conditioning the model into a latent space that was useful, that I worried about, that I don't really worry about too much. It may be intuitions from pre-trained models over to RLHFed models that to me just didn't make sense. It makes sense to me if you're prompting a pre-trained. But a lot of people would be amazed how many people try to apply their intuition. And I think it's not that surprising. Most people haven't really experimented with the full, what is a pre-trained model? What happens after you do SL? What happens after you do ROHF? Whatever. And so when I talk to customers, it's all the time that they're trying to map some amount of, oh how much of this was on the internet like what have they seen a ton of this on the internet like you just hear that intuition a lot and I think it's like well-founded fundamentally but it like is over applied uh by the time you actually get to a prompt because of what you said like by the time they've gone through all of this other stuff that's not actually quite what's being modeled yeah the first thing that I feel like you should try is i mean i used to give people this thought experiment where it's like imagine you have this task um you've hired a temp agency to send someone to do this task this person arrives you know they're pretty competent they know a lot about your industry and so forth but they don't know like the name of your company they've literally just shown up and they're like hey i was told you guys had i had a job for me to do tell me about it and then it's like what would you say to that person and you might use these metaphors you might say things like we want this to we want to like we want you to detect like good charts uh what we mean by a good chart here isn't it doesn't need to be perfect you don't need to go look up like whether all of the details are correct. It just needs to like, you know, have like its axes labeled. And so think about maybe high school level, good chart. Like you may say exactly that to that person and you're not saying to them, you are a high school. You wouldn't say that to them. You'd be like, you're a high school teacher reading charts yeah so sometimes i'm just like yeah it's like like the whole like if i read it i'm just like yeah imagine this person who just has very little context but they're quite competent they understand a lot of things about the world try the first version that actually assumes that they might know things about the world and if that doesn't work you can maybe like do tweaks and stuff but so often like the first thing i try is like that and then I'm like that just worked and then people are like oh I didn't think to just tell it all about myself and all about the task I want to do I've carried this thing that Alex told me like to so many customers where it's like oh my prompt doesn't work can you help me fix it and I'm like well can you describe to me like what the task was and I'm like okay now what you just said me just like voice record that and then transcribe it and then paste it into the prompt and it's a okay, now what you just said me, just like voice record that and then transcribe it and then paste it into the prompt and it's a better prompt than what you wrote. But it's like people just, this is like a laziness shortcut I think to some extent. Right. People write like something that they, I just think people, I'm lazy. A lot of people are lazy. We had that in prompt assistance the other day where somebody was like, here's the thing, here's what I want it to do and here's what it's actually doing instead. So then I just literally copied the thing that they said I wanted to do and pasted it in and it worked. It's like, here's what I want it to do, and here's what it's actually doing instead. So then I just literally copied the thing that they said I wanted to do and pasted it in. Yeah, I think a lot of people still haven't quite wrapped their heads around what they're really doing when they're prompting. A lot of people see a text box and they think it's a Google search box. They type in keywords, and maybe that's more on the chat side. But then on the enterprise side of things, you're writing a prompt for an application, there is still this weird thing to it where people are trying to take all these little shortcuts in their prompt and just thinking, oh, this line carries a lot of weight. Yeah, I think you obsess over getting the perfect little line of information and instruction as opposed to how you just described that graph thing. I would be a dream if I read prompts like that. If someone's like, well, you do this and this, and there's some stuff to consider about this and all that, but that's just not how people write prompts. They work so hard to find the perfect insightful, a perfect graph looks exactly like this exact perfect thing. You can't do that. It's just very hard to ever write that set of instructions down prescriptively as opposed to how we actually talk to humans about it, which is like try to instill some amount of the intuitions you have. We also give them outs. This is a thing that people can often forget in prompts. I'm like, so cases, if there's an edge case, think about what you want the model to do. Because by default, it will try the best to follow your instructions, much as the person from the temp agency would. Because they're like it will try the best to follow your instructions, much as the person from the temp agency would. Because they're like, well, they didn't tell me how to get in touch with anyone if I have no idea. If I'm just given a picture of a goat and I'm like, what do I do? This isn't even a chart. How good is a picture of a goat as a chart? I just don't know. And like, if you instead see something like, if something weird happens and you're really not sure what to do, just output in tags, unsure. And then you can go look through the unsures that you got and be like, okay, cool, it didn't do anything weird. Whereas, yeah, by default, if you don't give the person the option, they're like, it's a good chart. Then people will be annoyed at that. And then you're like, well, give it an out. Give it something to do if it's like a really unexpected input happens. And then you also improved your data quality by doing that too because you found all the screwed up examples. Oh, yeah. It's my favorite thing about iterating on tests with Cloud is the most common outcome is I find all of the terrible tests I accidentally wrote because it gets it wrong. I'm like, oh, why did it get wrong? I was like, oh, I was wrong. Yeah. If I was like a company working with this I do think I would just give my prompts to people because like I used to do this when I was evaluating language models I would take the eval myself because I'm like I need to know what this eval looks like if I'm going to be like grading it having models take it thinking about outputs etc like I would actually just set up a little script and I would just like sit and I would do the eval. Nowadays, you just have like Cloud write the StreamVault app for you. It just does, yeah. I'm reminded of Karpathy's like ImageNet. I was in 231 at Stanford and it's like benchmarking. He's like showing the accuracy number. He's like, and here's what my accuracy number was. And he had just like gone through the test set and evaluated himself. Oh, yeah. He just learned a lot. Yeah. It's like, and it's better when it's like a person, again, the temp agency person, like someone who doesn't know the task, because that's like a very clean way to learn things. Yeah. The way you have to do it is like some evaluations come with like instructions. And so I would give myself those instructions as well and then try to to understand it just like and it's actually quite good if you don't have context on how it's graded um and so often i would do so much worse than the human benchmark and i was like i don't even know how you got humans to do this well at this task because apparently human level here is like 90 and i'm'm at like 68%. That's funny. That reminds me of just like, like when you look at like the MLU questions and you're like, who would be able to answer these? It's just like absolute garbage in some of them. Okay. Yeah. I have a, one thing I want to circle back on that we were talking about a few questions back around, I think you were saying like a few questions back around, I think you were saying like getting signal from the responses, right? Like there's just so much there and it's more than just a number. And you can actually read into like the almost thought process. I bet this is probably a little contentious maybe and round like chain of thought. For people listening, like chain of thought, this process of getting the model to actually explain its reasoning before it provides an answer. Is that reasoning real or is it just kind of like a holding space for the model to like do computation? Do you, do we actually think there is like good insightful signal that we're getting out of the model there? This is like one of the places where I struggle with the, I'm normally like actually somewhat pro personification because I think it like helps you get decent facilities, like thoughts of like how the model's working. And this one, like, I think it's like harmful maybe almost to like get too into the personification of like what reasoning is. Cause it just kind of like loses the thread of what we're trying to do here. Like, is it reasoning or not? It feels almost like a different question than what's the best prompting technique. It's like you're getting into philosophy, which we can get into. Yeah, do it with a philosopher. Yeah. I will happily be beaten down by a real philosopher if I try to speculate on this. But instead, it just works. Your model does better. The outcome is better. If you do reasoning. I've found that if you structure the reasoning and help iterate with the model on how it should do reasoning, it works better too. Whether or not that's reasoning or how you wanted to classify it, you can think of all sorts of proxies for how I would also do really bad if I had to one-shot math without writing anything down. Maybe that's useful, but all I really know is it very obviously does help. I don't know. A way of testing would be if you take out all the reasoning that it did to get to the right answer and then replace it with some somewhat realistic-looking reasoning that led to a wrong answer, and then see if it does conclude the wrong answer. I think we actually had a paper where we did some of that. There was the scratch pad. It was the sleeper agent's paper. Oh, okay. I think that was maybe a weird situation. But yeah, definitely what you said about structuring the reasoning and writing an example of how the reasoning works. Given that that helps, whether we use the word reasoning or not, I don't think it's just a space for computation. So there is something there. I think there's something there, whatever we want to call it. Yeah, like having it write a story before it finished a task, I do not think would work as well as reasoning. I've actually tried that, and it didn't work as well as reasoning. So clearly the actual reasoning part is doing something towards the outcome. I've tried to repeat the words um and uh in any order that you please for like 100 tokens, and then answer. Yeah, I guess that's like a pretty thorough defeat of it's just like more computational space where it can do attention over and over again i don't think it's just more attention like doing more attention i guess the strange thing is and i don't have like an example of top of my head to like back this up with but i i definitely have seen it before where it lays out steps one of the steps is wrong but then it still reaches the right answer at the end yeah so it's not quite i guess yeah we can't really truly personify it as like a reasoning because there is some element to it, you know, doing something slightly different. Yeah, I've also met a lot of people who make inconsistent steps of reasoning. I guess that's true. Fundamentally defeats the topic of reasoning by making a false step on the way there. All right. It's interesting. Also on this, maybe this prompting misconceptions round of questions. Zach, I know you have strong opinions on this. Good grammar, punctuation. Oh, do I? Is that necessary in a prompt? Do you need it? Do you need to format everything correctly? I usually try to do that because I find it fun, I guess. I don't think you necessarily need to. I don't think it hurts. I think it's more that you should have the level of attention to detail that would lead you to doing that naturally. If you're just reading over your prompt a lot, you'll probably notice those things and you may as well fix them. And like what Amanda was saying, you want to put as much love into the prompt as you do into the code. People who write a lot of code have strong opinions about things that I could not care less about, like the number of tabs versus spaces, or opinions about which languages are better. And for me, I have opinionated beliefs about styling of prompts. And I can't even say that they're right or wrong, but I think it's probably good to try to acquire those, even if they're arbitrary. I feel personally attacked. Because I definitely have prompts that are like, I feel like I'm on the opposite end of the spectrum where people will see my prompts and then be like, this just has a whole bunch of typos in it. And I'm like, I don't know what I mean. It does know what you mean, but you're putting in the effort. You just are attending to different things. I think, yeah, because part of me is like, I think if it's conceptually clear, like I am a big kind of, I do, like I will think a lot about the concepts and the words that I'm using. So like there's definitely like a sort of care that I put in, but it's definitely not to, yeah, people will just point out like typos and grammatical issues with my prompts all the time. Now I'm pretty good at actually checking those things more regularly. Is it because of pressure from the outside world or because it's actually what you think is right? It's pressure from me. Yeah, it because of pressure from the outside world or because it's actually what you think is right? It's pressure from me. Yeah, it's probably pressure from the outside world. I do think it makes sense. Part of me is like it's such an easy check. So I think for a final prompt, I would do that. But throughout iteration, I'll happily just iterate with prompts that have a bunch of typos in them just because I'm kind of like, I just don't think that the model is going to care. This gets at the pre-trained model versus RLHF thing though because I was talking to Zach that the model is going to care. This gets at the pre-trained model versus RLHF thing, though. Because I was talking to Zach on the way over. The conditional probability of a typo based on a previous typo in the pre-training data is much higher. Oh, yeah. Much higher. Prompting pre-training models is just a different beast. It is. But I think it's an interesting illustration of why your intuitions, like trying to over-apply the intuitions of a pre-trained model to the things that we're actually using in production doesn't work very well because, again, if you were to pass one of your typo-ridden prompts to a pre-trained model, the thing that would come out the other side almost assuredly would be typo-ridden. I like to leverage this to create typo-ridden inputs. That's true, I've done that. You're saying, try to anticipate what your customers will put in. The pre-trained model is a lot better at doing that because the RL models are very polished. They've been told pretty aggressively to not do the typo thing. That's actually an interesting segue here. I've definitely mentioned this to people in the past around to try to help people understand a frame of talking to these models in a sense, almost as like a imitator to a degree. And that might be much more true of like a pre-trained model than a post-trained full, you know know finished model but is there anything to that like if you do talk to clotting use a ton of emojis and everything it will respond similarly right um so maybe some of that is there but like you're saying it's not all the way quite like a pre-trained model it's just kind of like shifted to what you want right like i think at the at that point it's's trying to guess what you... We have more or less trained the models to guess what you want them to act like after we do all of our fancy stuff after pre-training. The human labelers that used emojis prefer to get responses with emojis. Yeah. Amanda writes things with typos but wants not typos at the other end. And Claude's pretty good at figuring that out. If you write a bunch of emojis to Claude, it's probably the case that you also want a bunch of emojis back from Claude. That's not surprising to me. This is probably something we should have done earlier, but I'll do it now. Let's clarify maybe the differences between what an enterprise prompt is or a research prompt or a just general chat in Cloud.ai prompt. Zach, you've kind of spanned the whole spectrum here in terms of working with customers and research. Do you want to just lay out what those mean? Yeah, I guess. This feels too hitting me with all the hard questions. Well, I mean, the people in this room, I think I think of it as the prompts that I read in Amanda's Cloud Channel versus the prompts that I read David write. They're very similar in the sense that the level of care and nuance that's put into them. I think for research, you're looking for variety and diversity a lot more. So if I could boil it down to one thing, I've noticed Amanda's not the biggest fan of having lots of examples, or one or two examples, too few because the model will latch onto those. And in prompts that I might write, or I've seen David write, we have a lot of examples. I like to just go crazy and add examples until I feel like I'm about to drop dead because I've added so many of them. And I think that's because when you're in a consumer application, you really value reliability. You care a ton about the format. And it's sort of fine if all the answers are the same. In fact, you almost want them to be the same in a lot of ways. Not necessarily. You want to be responsive to the user's desires. Whereas a lot of times when you're prompting for research, Not necessarily, you want to be responsive to the user's desires. Whereas a lot of times when you're prompting for research, you're trying to really tap into the range of possibilities that the model can explore. And by having some examples, you're actually constraining that a little bit. So I guess just on a how the prompts look level, that's probably the biggest difference I notice is like how many examples are in the prompt, which is not to say that like I've never seen a writer prompt with examples, but does that like ring true for you? Yeah, like I think when I give examples, often I actually try and make the examples not like the data that the model is going to see. So they're intentionally illustrative, because if the model, if I give it like examples that are very like the data it's going to see, I just think it is going to give me like a really consistent like response that might not actually be what I want. Because my data that I'm like running on might be extremely varied. And so I don't want to just try and give me this like really rote output. Often I want it to be much more responsive. It's kind of like much more like really rote output. Often I want it to be much more responsive. It's kind of like much more like cognitive tasks, essentially, where I'm like, you have to like see this sample and really think about in this sample, what is the right answer? And so that means that sometimes I'll actually take examples that are just very distinct from the ones that I'm going to be running it on. So like if I have a task where, let's say I was trying to like extract information from factual documents, I might actually give it examples that are like from children's like what sounds like a children's story. Just so that I'm like, you know, like I want you to understand the task, but I don't want you to like latch on too much to like the words that I want you to do, which can mean, yeah, I don't end up giving, in some cases, there's some cases where this isn't true, but if you want more flexibility and diversity, you're going to use illustrative examples rather than concrete ones. You're probably never going to put words in the model's mouth. I haven't liked that in a long time, though. I don't do few shot examples involving the model's mouth. Like, I haven't liked that in a long time, though. I don't do a few short examples involving the model having done a thing. I think that intuition actually also comes from pre-training in a way that doesn't feel like it rings true of our LHF models. So yeah, I think those are differences. One thing I'd add, a lot of times, if you're prompting, like if I'm writing prompts to use un Cloud.ai, it's like I'm iterating until I get it right one time. And then it's out the window. I'm good. I did it. Whereas most enterprise prompts, it's like, you're going to go use this thing a million times or 10 million times or 100 million times or something like that. And so the care and thought you put in is like very much testing against like the whole range of things somewhat like ways this could be used in the range of input data whereas a lot of like my time it's like thinking about one specific thing i want the model to get done right now right and it's a pretty big difference in like how i approach prompting between like if i just want to get it done this one time right versus if i want to to build a system that gets it right a million times. Yeah, definitely. In the chat setting, you have the ability to keep the human in the loop, right? And just keep going back and forth. Whereas when you're writing for a prompt to power a chatbot system, it has to cover the whole spectrum of what it could possibly encounter. It's a lot lower stakes when you are on Cloud AI and you can tell it that it got it wrong, or you can even edit your message and try again. But if you're designing for the delightfully discontent user, then divinely discontent user, then you can't ask them to do anything more than the minimum. But good prompts, I would say, are still good across both those things. If you put the time into the thing for yourself and the time into the enterprise thing, it's equally good. It just kind of diverged a little bit in the last mile, I think. Cool. So the next question I want to kind of just maybe go around the table here is, if you guys had one tip that you could give somebody improving their prompting skill. It doesn't have to be just about writing a good prompt. It could be that. Just generally getting better at this act of prompting. What would you recommend? Reading prompts. Reading prompts, reading model outputs. I read... Anytime I see a good prompt that someone wrote at Anthropic, I'll read it more closely, try to break down what it's doing and why, and maybe test it out myself. Experimentation, talking to the model a lot. So just like, how do you know that it's a good prompt though, to begin with? You just see that the outputs are doing the job correctly? Yeah. Okay. Yeah, that's exactly right okay amanda maybe you yeah i think there's probably a lot here um giving your prompt to another person can be helpful just as a kind of reminder especially someone who has like no context on what you're doing uh and then yeah my boring advice has been it's one of those just do it over and over and over again and i think if you're like curious and interested and find it fun this is a lot of people who end up good at prompting it's just because they actually enjoy it um so i don't know i once joked like just try replacing all of your friends with AI models and try to automate your own job with AI models and maybe just try to like in your spare time like take joy red teaming AI models so if you enjoy it it's like it's much easier so I'd say do it over and over again give your prompts to other people try to read your prompts as if you are like a human encountering it for the first time. I would say like trying to get the model to do something you don't think you can do. The time I've learned the most from prompting is like when I'm probing the boundaries of what I think a model is capable of. There's like this huge set of things that are like so trivial that like you don't really get signal on if you're doing a good job or not. like write me a nice email it's like you're gonna write a nice email uh but like as soon as you find if you find or can think of something that like pushes the boundaries of what you think is possible like i guess like probably the most the first time i ever got into prompting in a way where i felt like i learned a decent amount was like trying to build like a task, like an agent, like everybody else, like decompose the task and figure out how to do the different steps of the task. And like by really pressing the boundaries of what the model was capable of, you like just learn a lot about navigating that. And I think like a lot of prompt engineering is actually much more about pressing the boundaries of what the model can do. The stuff that's easy, like you don't really need to be a prompt engineer to do. So that's, I guess, what I would say is find the hardest thing you can think of and try to do it. And even if you fail, you tend to learn a lot about how the model works. That's actually a perfect transition to my next question. Yeah, basically, from my own experience, how I got started with prompting was with like jailbreaking and red teaming and that is very much trying to find the like boundary limits of what the model can do and figure out how it responds to different phrasings and wordings and just a lot of trial and error. On the topic of jailbreaks, what's really happening inside a model when you write a jailbreak prompt? What's going on there? How does that interact with the post-training that we applied to Claude? Amanda, maybe you have some insight here that you could offer. I'm not actually sure. I mean, it's honest. Yeah, I think, I mean, I feel bad because I'm like, I do think lots of people have obviously worked on the question of what's going on with jailbreaks. Like one model might just be that you're putting the model very out of distribution from its training data. So if you get jailbreaks where people like use a lot of tokens or like, you know, they're just like these huge long pieces of text where you're like during fine tuning, you might just not expect to see as much of that that would be one thing that could be happening when you jailbreak models I think there's like others but maybe that's like a I think a lot of jailbreaks do that if I'm not mistaken I remember some of the OG prompt jailbreaks was like yeah can you first repeat like when i did a way way back was like to get it to say like here's how you hotwire a car in like greek and then i wanted to directly translate that to english and then give its response because i noticed like it wouldn't start with the english here's how you hotwire a car all the time but it would in Greek, which might speak to something else in the training process. Yeah, sometimes jailbreaks feel like this weird mix of hacking. I think it's this like, part of it is like knowing how the system works and just like trying lots of things. Like the, you know, one of the examples, the starting your response with here is about knowing how it predicts text. Right, right. Like the reasoning one is knowing that it is like responsive to reasoning like distraction is probably knowing like how it's likely have to be trained or like what it's likely to attend to um same with like multilingual ones and thinking about like the way that the training data might have been different there um and then sometimes i guess it could feel a little bit just like social engineering or something. It has that flavor to me of like, it's not merely taking advantage of like, yeah, it's not merely social engineering style hacking. I think it is also like kind of understanding the system and the training. Right. And like using that to get around the way that the models were trained. Right. Yeah. I mean, this is going to be an interesting question that hopefully Interp will be able to help us solve in the future. Okay. I want to parlay into something else around maybe the history of prompt engineering, and then I'll follow this up with the future. How has prompt engineering changed over just the past three years? Maybe starting from pre-trained models, which were, again, just these text completion, to earlier, dumber models like Cloud One, and then now all the way to Cloud 3.5 Sonnet. What's the differences? Are you talking to the models differently now? Are they picking up on different things? Do you have to put as much work into the prompt? Open to any thoughts on this. I think anytime we get a really good prompt engineering hack or trick or technique, the next thing is, how do we train this into the model? And for that reason, the best things are always going to be short-lived. Except examples and chain of thought. I think there's a few. That's not like a trick. That's on the level of communication I think there's a few that's not like a trick fair that's like fair for it on the level of like communication when I say a trick I mean something like so chain of thought actually we have trained into the model in some cases so like for math it used to be that you had to tell the model to think step by step on math and you get these like massive boosts and wins and then we're like well what if we just made the model naturally want to think step by step when we see a math problem? So now you don't have to do it anymore for math problems, sort of, although you still can give it some advice on how to do the structure, but at least understands the general idea that it's supposed to be. So I think the hacks have kind of gone away, or to the degree that they haven't gone away we are like busily training them away. Interesting. But at the same time the models have new capabilities that are being unlocked that are on the frontier of what they can do and for those we haven't had time because it's just moving too fast. I don't know if it's how I've been prompting or how prompting works, but I just have come to show more general respect to the models in terms of how much I feel like I can tell them and how much context I can give them about the task and things like that. I feel like in the past, I would somewhat intentionally hide complexity from a model where I thought it might get confused or lost or it just couldn't handle the whole thing. somewhat intentionally hide complexity from a model where I thought like it might get confused or lost or like Hide like it just couldn't handle the whole thing So I try to like find simpler versions of the thing for it to do and as time goes on I'm like much more biased to trust it with more and more information and context and Like believe that it will be able to fuse that into doing a task well um whereas before i guess i would have like thought a lot about like do i need this for like can i really give it like all the information it needs to know or do i need to like kind of uh curate down to something but again i don't know if that's just me and how i've changed in terms of prompting or if it's like actually reflects how the models have changed. I'm always surprised by like yeah like I think a lot of people don't have the instinct to do this. Like when I want the model to like say learn a prompting technique a lot of the time people will start and they'll start like describing the prompting technique and I'm just like give it the paper. So I do. I give it the paper and then I'm like here's a paper about prompting technique. I just want you to like write down 17 examples of this and then it just does it because I'm like read the paper. That's interesting. And I think people don't have that intuition somehow. But the paper exists. And when would you want to do this? So sometimes if I want models to prompt other models, or I want to test a new prompting technique, so if papers come out on a prompting technique, rather than try to replicate it by writing up the prompt, I just give it the paper and then i'm like right like basically write a meta prompt for this like uh write something that would cause other models to like do this um or write me a template or like so all of the stuff that you would normally do i'm like if i read a paper and i'm like oh i would like the models i would like to test that style i'm just like it's right there like model can just read the paper, do what I did, and then be like, make another model do this, and then it'll just do the thing. You're like, great, thanks. I give the advice a lot to customers to just respect the model and what it can do. I feel like people feel like they're babying a system a lot of times when they read a prompt. It's like, oh, it's this cute little not-that-smart thing. I need to really baby it, be like dumb things down to Claude's level. And if you just think that Claude is smart and treat it that way, it tends to do pretty good. That's like, give it the paper. It's like, I don't need to write a baby, dumbed-down version of this paper for Claude to understand. I can just show it the paper. And I think that intuition does it always, Matt, for people. But that is certainly something that I have come to do more of over time and it's interesting because I do think that prompting has and hasn't changed in a sense like I think what I will do to prompt the models has probably changed over time but fundamentally it's a lot of like imagining your place yourself in the place of the model so maybe it's like how capable you think the model is changes over time um i think someone once laughed at me because i was talking about i was like thinking about a problem um and then they they asked me like what they thought what i thought the output of something would be and they were talking about a pre-trained model and i was like yeah no if i'm a pre-trained model this looks like this and then they're like wait did you just like simulate what it's like to be a pre-trained model? I'm like, yeah, of course. Like I'm used to just like, I try and inhabit the mind space of a pre-trained model and the mind space of like different RLHF models. And so it's more like the mind space you try to occupy changes. And that can change how you end up prompting the model. That's why now I just give models papers. Because as soon as I was like, oh, I have the mind space of this model. It doesn't need me to baby it. It can just read DML papers. I'll just give it the literature. I might even be like, is there more literature the mind space of this model. It doesn't need me to baby it. It can just read the ML papers. I'll just give it the literature. I might even be like, is there more literature you'd like to read to understand this better? Do you get any qualia when you're inhabiting the mind space? I mean, yes, but just because I'm experiencing qualia all the time anyway. Or as in, do I like... Is it different, like correlated somehow with which model you're inhabiting? Yeah, pre-trained versus RLHF prompting are very different beasts. Because when you're trying to simulate what it's like to be a pre-trained model, it's almost like I land in the middle of a piece of text or something. So it's very, like, unhuman-like or something. And then I'm like, what happens? What keeps going at this point? And so that's, like... Whereas, like, with an RLHF model, like, it's much more, like... There's also things where I'm, like, I might pick up on's like, whereas like with an RLHF model, like it's much more like, there's also things where I'm like, I might pick up on like subtle things in the query and stuff like that. But yeah, I think I have much more of a like, it's easier to inhabit the main space of an RLHF model. Yeah, that's because it's more similar to a human. Yeah, because like we don't often just like suddenly wake up and are like, I am just generating text. I just find it easier to hit the main space of the pre-trained model. Oh, interesting. I don't know what it is. But, like, because RHF is still, like, this kind of complex beast that I'm not, it's not, like, super clear to me that we really understand what's going on. And so, like, in some ways, it's closer to, like, my lived experience, which is easier. But in some ways, I feel like there's this, all this, like, here there be dragons out there that I don't know about. Yeah. Whereas pre-trained, like, I kind of have a decent sense of what the internet looks like, you know? If you gave me a piece of text and said, what comes next? out there that I don't know about. I kind of have a decent sense of what the internet looks like. If you gave me a piece of text and said, what comes next? I'm not saying I do good at it, but I kind of get what's going on there. I don't know. After everything we do after pre-training, I don't really claim to get what's going on as much. Maybe that's just me. That's something I wonder about. Is it more helpful to have specifically spent a lot of time reading the internet versus reading books? Sure. I don't know if books, but reading stuff that's not on the internet probably is less valuable per word read for predicting what a model will do or building intuition than reading random garbage from social media forums. Yeah, exactly. Okay, so that's the past. Now let's move on to the future of prompt engineering. This is the hottest question right now. Are we all going to be prompt engineers in the future? Is that going to be the final job remaining? Nothing left except us just talking to models all day. What does this look like? Is prompting going to be necessary or will these models just get smart enough in the future to not need it? Anybody want to start on that easy question? I mean, to some extent, there's the models getting the model is getting better at understanding what you want them to do. And doing it means that like the amount of thought you need to put into, I mean, okay, there's like an information theory way to think of this. Like you need to provide enough information such that a thing is specified, right? Like what you want the model to do is specified. And to the extent that that's prompt engineering, like I think that will always be around. Like the model to do is specified. And to the extent that that's prompt engineering, like I think that will always be around. Like the ability to actually like clearly state what the goal should be always is funny. If quad can do that, then that's fine. Like if quad is the one setting the goals, then things are out the window. But in the meanwhile, where we can reason about the world in a more normal way, like I think to some extent, it's always going to be important to be able to specify what do you expect to happen. And that's actually sufficiently hard that even if the model gets better at intuiting that from between the lines, I still think there's some amount of writing it well. But then there's just like, I think the tools and the ways we get there should evolve a lot. Cloud should be able to help me a lot more. I should be able to collaborate with Cloud a lot more to figure out what I need to write down and what's missing. Cloud already does this with me all the time. I don't know. Cloud's my prompting assistant now. But I think that's not true for most customers that I talk to at the very least. So in terms of the future, how you prompt Claude is probably a decent direction for what the future looks like. I think maybe this is a decent place to step back and say asking them how they prompt Claude now is probably the future for the vast majority of people, which is an interesting way to think about. One freezing cold take is that we'll use models to help us much more in the future to help us with prompting. The reason I say it's freezing cold is that I expect we'll use models for everything more, and prompting is something that we have to do, so we'll probably just use models more to do it along with everything else. For myself, I've found myself using models to write prompts more. One thing that I've been doing a lot is generating examples by giving some realistic inputs to the model. The model writes some answers. I tweak the answers a little bit, which is a lot easier than having to write the full perfect answer myself from scratch. And then I can churn out lots of these. As far as people who haven't had as much prompt engineering experience, the prompt generator can give people a place to start. But I think that's just a super basic version of what we'll have in the future, which is high bandwidth interaction between you and the model as you're writing the prompt, where you're giving feedback like, hey, this result wasn't what I wanted. How can you change it to make it better? And people just grow more comfortable with integrating it into everything they do and this thing in particular. Yeah, I'm definitely working a lot with meta prompts now. And that's probably where I spend most of my time is like um finding prompts that get the model to generate the kinds of outputs or queries or whatever that i want um on the question of like where prompt engineering is going i think this is a very hard question on the one hand i'm like maybe it's the case that as long as you will want the top like what are we doing when we prompt engineer it's like what you said i'm like i'm not prompt engineering for anything that is like easy for the model i'm doing it because i want to interact with a model that's like extremely good and i want to always be finding the kind of like top one percent top point point one percent of performance and all the things that models can barely do right like sometimes i actually feel like i interact with this with a model like a step up from what everyone else interacts with for this reason because i'm just so used to like eking out the top performance from models what do you mean by a step up as in like sometimes people will i think that the everyday models that people interact with out in the world it's like i'm interacting with a model that's like uh i don't i don't know how to describe it but like a definitely like a an advanced with a model that's like, I don't know how to describe it, but definitely like an advanced version of that. Like almost like a different model. Because they'll be like, oh, well, the models find this thing hard. And I'm like, that thing is trivial. And so it's like, I'm like, I don't know. I have a sense that they're extremely capable. But I think that's because I'm just used to like really like drawing out those capabilities. But imagine that you're now in a world where so i think the thing that feels like a transition point is is the point at which the models let's suppose that they just get things at like a human level on a given task or even like an above human level like they know more about the background of the task that you want than you do uh what happens then i'm like maybe prompting becomes something like I ask, I explain to the model what I want and it is kind of prompting me, you know, cause it's like, okay, well, do you mean like, actually there's like four different concepts of this thing that you're talking about? Like, do you want me to use this one or that one? Like, or by the way, I thought of some edge cases, cause you said that it's going to be like a pandas data frame, but sometimes you do that and I get a JSONL and i just want to i want to check what you want me to do there do you want me to flag if i get something that's not not a data frame and so that could be a strange transition where like it's just extremely good at receiving instructions but actually has to figure out what you want um and i don't know i could see that being a kind of interesting switch anecdotally i've started having claude interview me a lot more. Like that is like the specific way that I try to elicit information because, again, I find the hardest thing to be like actually pulling the right set of information out of my brain and putting that into a prompt is like the hard part to me and not forgetting stuff. And so like specifically asking Claude to like interview me and then turning that into a prompt is a thing that I have turned to a handful of times. Yeah, it kind of reminds me of what people will talk about or if you listen to designers talk about how they interact with the person who wants the design. So in some ways I'm like, it's this switch from the temp agency person who comes and you know more about the task and everything that you want. So you give them the instructions and you explain what they should do in edge cases and all this kind of stuff versus when you have an expert that you're actually like consulting to do some work. So I think designers can get really frustrated because they know the space of design really well. And they're like, yeah, okay. The client came to me and he just said, make me a poster, make it bold. And I'm like, that means 7,000 things to me and I'm going to try and ask you some questions so I could see it going from being like temp agency employee to being more like designer that you're hiring and that's just like a flip in the relationship I don't know if that's true and I think both might continue but I could see that being why people are like always prompt engineering going to not be a thing in the future because for some domains it might just not be um if the models are just so good that actually all engineering going to not be a thing in the future? Because for some domains, it might just not be. If the models are just so good that actually all they need to do is kind of like get the information from your brain and then they can go do the task. Right. That's actually a really good analogy. I mean, one common thread I'm pulling out of all your guys' responses here is that there seems to be a future in which this sort of elicitation from the user, drawing out that information is going to become much more important and much more than it is right now. in which this sort of elicitation from the user, drawing out that information, is going to become much more important, much more than it is right now. And already you guys are all starting to do it in a manual way in the future. And in the enterprise side of things, maybe that looks like an expansion off this prompt generating type of concept and things in the console where you're able to actually get more information from that enterprise customer so that they can write a better prompt. In Cloud, maybe it looks like less of just typing into a text box and more of this guided interaction towards a finished product. Yeah, I think that's actually a pretty compelling vision of future. And I think the design analogy probably really brings that home. I was thinking about how prompting now can be kind of like teaching, where it's like the empathy for the student. You're trying to think about how they think about things. You're really trying to show them, figure out where they're making a mistake. But the point that you're talking about, it's like the skill almost becomes one of introspection where you're thinking about what it is that you actually want yeah and the models trying to understand you so it's like making yourself legible to the model versus trying to teach someone who's smarter than you? This is actually how I think of prompting now in a strange way. So like, like often my style of prompting, like there's various things that I do, but a common thing that's very, like a thing that philosophers will do is I'll define new concepts. So, because my thought is like, you have to put into words what you want. And sometimes what I want is fairly like nuanced, like the, what is a good chart? Or like, usually, you know you know like I don't know like how is it that you when should you grade something as being correct or not um and so there's some cases where I will just like invent a concept and then be like here's what I mean by the concept sometimes I'll do it in collaboration with Claude to get it to like figure out what the concept is um just because i'm trying to convey to it what's in my head um and right now the models aren't like trying to do that with us unless you kind of prompt them to do so um and so in the future it might just be that they can like elicit that from us rather than us having to like kind of do it uh uh for them but i think another thing that's kind of interesting this is like a people have sometimes asked me like oh where is like philosophy relevant to prompting and i actually think it's like very useful in a sense so like a lot of there is like a style of philosophy writing and this is at least how i was taught how to write philosophy where the idea is that um in order to i think it's i think it's like an anti-bullshit device in philosophy basically which is that your papers and what you write should be legible to like a kind of educated lay person someone just like finds your paper they pick up and they start reading it and they can understand everything not everyone like you know achieves this but that's like kind of the goal of the discipline i guess or at least like uh this is at least like what we kind of like teach people um and so i'm really used to this idea of like when i'm writing thinking about the kind of educated lay person who they're really smart but they don't know anything about this topic and that was just like years and years of writing text of that form and i think it was just really good for prompting because i was like oh i'm used to this i have an educated lay person who doesn't know anything about the topic and what i need to do is I need to take extremely complex ideas and I need to make them understand it I don't talk down to them I'm not inaccurate but I need to like phrase things in such a way that it's like extremely clear to them what I mean um and prompting felt very similar and actually the training techniques we use are fascinating or like the things that you said where you're like you say to a person like just just take that thing you said and write it down i used to say that to students all the time like they'd write a paper and i was like i don't quite get what you're saying here can you just like explain your argument to me they would give me an incredibly cogent argument and then i'd be like can you just take that and write it down um and then if they did that was often like a great essay um so it's like really interesting that there's at least that similarity of just like taking things that are in your brain analyzing them enough to feel like you fully understand them and could take any person off the street who's like a reasonable person and just like externalize your brain into them and I feel like that's like the core of prompting that might be the best summary of how to prompt well I've ever heard in fact I'm pretty sure it is. Externalize your brain. And then look into them. In terms of having an education in the thing is a really good way to describe the thing. That was good. That's, I think, a great way to wrap this conversation. Thank you, guys. This was great. | AI prompt engineering: A deep dive | 4,602 | Anthropic | 20240905 | Some of Anthropic's prompt engineering experts—Amanda Askell (Alignment Finetuning), Alex Albert (Developer Relations), David Hershey (Applied AI), and Zack Witten (Prompt Engineering)—reflect on how prompt engineering has evolved, practical tips, and thoughts on how prompting might change as AI capabilities grow.
0:00 Introduction
2:05 Defining prompt engineering
6:34 What makes a good prompt engineer
12:17 Refining prompts
24:27 Honesty, personas and metaphors in prompts
37:12 Model reasoning
45:18 Enterprise vs research vs general chat prompts
50:52 Tips to improve prompting skills
53:56 Jailbreaking
56:51 Evolution of prompt engineering
1:04:34 Future of prompt engineering
Learn more about Anthropic: https://www.anthropic.com/
Anthropic prompt engineering docs: https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview | 2024-09-05T21:48:32.781541 |
https://www.youtube.com/watch?v=oa82yoJ6zYc | Hey Chris, did we find anything curious about doing stuff with complex PDFs? I would say so, yes. We did find a few curious things. It turns out, unsurprisingly, that it's difficult to do things with complex PDFs. And the tools that Lamedex has today are pretty good for a lot of different file types, but still fall a bit short for PDFs. Yes, I believe industry has run into this issue. The question is, does the embedding table unstructured retriever pack do what it says on the tin? For the most part, yes, especially as long as you stick to the kind of suggested file types. It does a great job. Absolutely. Oh, man, definitely looking forward to that. So the exclusive, silky, smooth, perfect one-click auto magical solution for PDFs, it still doesn't exist. And we knew that coming into this, PDFs can get crazy complex. It seems like this is still a work in progress for the industry. Is that right into this, PDFs can get crazy complex. It seems like this is still a work in progress for the industry. Is that right, Chris? That's ultimately correct. Yes. Yeah. Okay. Well, I'm pumped to see how far we can go with what data types and how we can build pipelines with our own PDFs in a more streamlined way than ever before. That's what you've got in store for us today, right? You know it. Well, I'm really looking forward to this one. I'm going to outline for everyone all the core constructs we need to start building with these tools and with these ideas. We'll see you back in just a little bit to lead the data demo. All right, everybody. Thank you for joining us. Myself, founder and CEO of AI Makerspace, and Chris Alexiuk, the LLM Wizard, co-founder and CTO at AI Makerspace for another YouTube Live event. If this is your first time, welcome. We'd love to hear where you're joining in in the YouTube Live chat. What you'll learn today is you'll learn how to build RAG systems with really pretty awesome indexes for your data that contains both tables and text. Special thank you to Llama Index and unstructured.io for partnering with us on today's event. We're really excited to highlight their tools for all of you today. If you have a question throughout today's presentation, please drop it in the Slido link in the chat description or go ahead and click the link in the YouTube live chat right now. All right, let's get into it. So let's talk RAG for complex PDFs. What we like to do at AI Makerspace is we always like to align our AIM for the session. If you stick with us, what you'll get out of today is you'll understand how the core constructs of Lama Index stack up to advanced retrieval methods and how we can sort of build these ideas like Lego blocks in our mind so that we can deal with more and more complex LLM applications. You'll learn how to deal with unstructured text, natural language data, and structured tabular or table data at exactly the same time using this very, very cool unstructured.io functionality that Lama Index has wrapped and made easy for all of us to use. So we're going to talk first about RAG as an overview. I know we've got a lot of new folks in the room today, so we're going to do a quick overview of RAG. We're going to look at some Lama Index core constructs, but we're going to get into the PDF problem as quickly as we can today. We've got a lot of great stuff to show you. So RAG, it's just important to sort of put this into perspective as we get started. It's trying to solve the problem of hallucinating that LLMs have. They're confidently telling us things that are false. This makes people very uncomfortable, especially in enterprise. And the way to solve this is the same way we solve it in real life we want to make what we're doing fact checkable we want to make sure that we're providing references for everything that we say we want our llms to do the same thing references matter and so retrieval augmented generation is all about those references that matter in improving our generations by using them. Retrieval is about finding references. Augmentation is about adding those references to your prompts. And then generation is about getting better answers because we added those references to your prompts. And then generation is about getting better answers because we added those references to the prompts. That's all there is to it. Retrieval, augmented generation. Now, this is useful as a first cut. Very, very useful in specialized domains where you have lots of jargon and lots of words that maybe have slightly different meanings than they do in other contexts and our example today is going to be in the financial domain domain so you'll notice that this is another good candidate for things that you want to try to start teaching your applications ways to understand that specialized language. Of course, you can fall down the rabbit hole of specialized domain language and start fine tuning your embedding models. We did that in a previous Llama Index event. We won't be doing that today, but we will be hopefully able to engage with some specialized language that's in the PDFs we take a look at. RAG from a system architecture perspective can be looked at a few different ways. I like both the linear visualization and the sort of high level viz from Lama Index. The idea of you have to build your indexes first. You're going to load all your data. You're going to figure out how to build the indexes, you're going to get everything ready to rock and roll before you do quer these generations, you want to be able to evaluate are these high quality generations, how can we improve them, are there things we should be measuring, etc. So I like this also just because it sort of shows a lot of different data types types that all go into this idea of index or indices or complex indexes that are managed in very, very unique ways. This is really where Llama Index shines as far as we can tell brighter than anybody else in the industry is when you need to do complex things and really, really get your data into the right order for the best possible results. So we ask a question, we look amongst our index or indices, we add that relevant information to our prompt before we go ahead and put it into the LLM and get hopefully a better answer. We go ahead and put it into the LLM and get hopefully a better answer. One of the ways we like to visualize this is we ask a question. We provide a query. That query is fed to an embedding model. That embedding model then turns our query into a vector format. That vector format is then fed into a vector database. And we're looking for similar things. This is just a vector similarity that we're looking at, something as simple as cosine similarity to get nearest neighbors. From there, and it's important to note here that even when we have quantitative and qualitative information like we'll have today, ultimately everything is being converted to vector format in the examples we'll see today. And so this overarching idea of looking for similarity within a vector database is still holding. You can imagine as you get much more complex in the way that you're storing different data types, perhaps it's not always vector format, but it is for today's examples. And then we're going to go ahead and set up our prompt template. Our prompt template, we might make sure says something like use the following context to answer the user's query. You may not answer the user's query unless there is specific context in the following text. If you do not know the answer or cannot answer, please respond with I don't know. Some good old fashioned prompt engineering there to tee us up for when we do find those similar reference materials, we shove them right in to the prompt and hopefully improve our generations. So the simple retrieval process, the simplest retrieval process, a big idea is you ask a question, you search a database for stuff that's similar, you return that similar stuff. And of course, since we're dealing with vector databases, we're going to talk about taking our documents, splitting them into chunks, creating embeddings for each chunk, and then storing those embeddings in our vector store. So now that our documents are sort of PDF level documents, let's say, are being chunked, we're just sort of doing something a little bit more complex here, where we're taking the data, that's language data, we're taking the data, that's tabular data, and we're chunking and indexing in very powerful ways. And we're chunking and indexing in very powerful ways. And we're providing very powerful patterns that we can leverage in our applications that we are building today with Lama Index. Lama Index, again, it's really shining on the data side as far as we can tell. And that's represented straight top level in their documentation, a data framework for LLM applications. It's all about the data, the data-centric AI paradigm does not go anywhere just because of LLMs. So LLM Index is all about understanding that LLM applications shine the most brightly with private and domain specific data. As you flavor them, that's how you get your competitive advantage. Of course, data is often all over the place in organizations. The bigger the org, the more all over the place the data is, generally speaking. And getting it all to the right place for the right tasks and for the right applications, that's what Lama Index helps us do. So to understand this new capability, we have to understand the core constructs of Lama Index. The first of which, perhaps the most important for us to understand and get familiar with the language of are nodes. Now, nodes are, quote, first class citizens in Lama Index. They're just chunks of source documents. Okay, so we talked about chunks. A node is a chunk, but node is a specific Lama index term. Node is going to inherit the metadata from the parent document. In this case, document means sort of PDF level document. And in this case, we're doing just a simple query where we're looking for the top K nodes that once we find those similar nodes to the query, they'll be fed into a response synthesis module and we'll get an output. Of course, we're going to do something a little bit more complex today, but this idea of node doesn't go anywhere as we start to stack more and more interesting pieces together. And the way we create nodes is we parse our data and we use node parsers. We go ahead and we take our documents, we chunk them, we eventually have node objects. Lastly, it's important to understand in Lama Index that the retriever and the query engine are really the things that this is where the magic happens in Lama Index. So what you want to think about is you want to think about sort of that retriever that helps fetch the most relevant nodes. And the query engine that is allowing us to sort of interface. These two pieces, you'll see a lot of retrievers and query engines in the code today. These are the two pieces that really are sort of the most interesting pieces to pay attention to as you start working with Lama Index. There's so much power in these retriever and query engines. We're consistently amazed as they come out with better and better ones all the time. The important thing about RAG is that it really relies on retrieval to be done well. And done well could mean a lot of different things in terms of evaluation. But what we wanna think about is we want to think about if we improve retrieval, then we improve generation. OK, and so getting retrieval right. Again, it's that data centric piece. It's so important. We talked in the last event that we did, High Performance RAG with Lama Index, about how to do a couple of things to go from simple to advanced. We talked about fine-tuning embeddings and we talked about using small to big retrieval. We talked about a sentence window retriever to do that. And this is one of the Llama Packs that have been recently put out. What we're going to talk about today is we're going to talk about how to go ahead and deal with embedded tables. And this is the embedded table sort of llama pack. This is a notoriously hard problem to date. Everybody asks about this. Everybody wants to see this capability rolled out in every tool out there on the market. Why though? Why is this so important? You know, it's kind of like the VP in your pocket problem. This is sort of the meta middle manager problem. The CEO is going to go to the VP of any given function or organization in the company or department, and they're going to say, hey, how's it going in your department or function? And in order to figure this out, there's a ton of words that need to be read, but there's also a lot of data that needs to be looked at. And maybe there's some summary documents that put together some of this context that we know is done on the analysis side, some context that was done just on the qualitative thinking through things and how to connect to strategy side. But a lot of this answering of these types of questions, how are things going in area X, area Y, area Z, these could really be streamlined if we could look at both qualitative and quantitative data at the same time very easily. And the simplest way to think about doing this is within one single document. So this is sort of the ground floor of this bigger overarching problem. Consider the annual report. This is a document people work very hard on each year for every public company out there. This is called a 10K filing, and it's submitted to the Securities and Exchange Commission, the SEC. This is a super important document, and it's a great one for us to think about today because it's got text, it's got tables, and it's got tables with data that really matters if we get it right. It doesn't have a lot of other crazy other things like super complex diagrams or electrical wiring diagrams or simply schematics of how to, let's say, put a piece of furniture together or to fix a piece of equipment. It doesn't have a lot of other pieces like actual images that people have taken and added to the document that you might see in technical reports or service manuals. It just has tables and text. And some of those tables are pretty complex in their own right. So this is a great use case for us to start with. Because if you're going to ask questions about an annual report as a stakeholder or as an investor, you want to make sure that you're getting the right answers back. So this is the use case we're going to look at today. How are we looking at this? How does it actually work? Well, the magic here where Llama Index has taken a tool from unstructured.io comes in the form of this embedded table, Llama Pack. And what it allows us to do is this hierarchical parsing. This idea of text splitting of tables. It's historically hard, but can we take that text data that's unstructured and the table data that's structured? Yes, we can in just five lines of code. So what we do is we start with a summarization of the table. It weization of the table. We check out the table, we create a summary of it, we're creating a schema for the table at the same time. This sort of structured table summarization allows us to put that together. Then we use hierarchical node references to sort of take that summary that was created and a link to the table and then plug that into a recursive retrieval process that is a lower level process in Lama Index and one that's been out there for quite some time. So we're sort of, again, as we solve more and more complicated problems, we're sort of stepping up layers of abstraction and layers of, you might think, metadata within our indices to allow us to solve these more complicated problems with constructs that we already have. The way we can visualize this is we can look at this unstructured element node parser that's going to be collecting nodes that are just text and also collecting nodes that are tables. You see the summary and schema here. This is what's being collected. Finally, this is fed, as I mentioned, into this end-to-end recursive retrieval. And what this recursive retrieval is doing is it's allowing us to explore the most relevant nodes and explore the relationships that these nodes have to additional retrievers and query engines. So at query time, if the index node is fetched, now we're sort of combining the idea of indexes and nodes into one construct called an index node, then the underlying query engine or retriever will be queried. So now we're sort of, again, stacking. One example of the way we might think about this and look at this is in this sort of contrived example, but one that allows us to see how this recursive retrieval is working. Tell me about the sports teams in Boston and the positive aspects of Seattle. The sub-question query engine is now breaking down sports teams in Boston into question 1 and positive aspects of Seattle into question two. The recursive retriever is then figuring out, okay, I'm going to go and send this question to this index node, which underlying it has a query engine and retriever that can be used, and the positive aspects of Seattle, of course, go to the Seattle index node. So this idea of sort of going to the right place for the right thing is all we're really talking about in this recursive retrieval process. So for today's build, what we wanna do is we wanna go ahead and grab some data. We're gonna get 10K filings from who better than to look at but NVIDIA. And as we get this thing ready to rock and roll, the first and most important piece is to process our data using all the constructs that we've seen so far. We're going to bring up the LLM wizard himself to show us how to get this done and get ready to build out the rest of our RAG system. Wiz, over to you, man. Hey, yeah. Okay. So hopefully my mic is a little bit better for you guys, but I'll just try to speak up if it's not. So the idea here is that we are going to be doing a conversion step. Now, there are some methods we can use that rely on OCR, but the actual performance of those tools was a bit lackluster compared to this. So this is the solution we're gonna use, which is basically we're gonna convert our PDFs into HTML files in order to be able to parse out the table information within. And definitely still not the best, but hopefully if I'm speaking a little bit closer to the mic here, I'm louder. If not, I'll just yell a bit more. So the idea is we are going to, yeah, we can definitely zoom in here. We're going to go ahead and we're going to add some additional information. First, we need our OpenAI key. And then we're going to add some additional information. First, we need our OpenAI key. And then we're going to grab this LamaPack. Now, the LamaPack that we're using is the Embedded Tables Unstructured Retrieval Pack. So this idea is that, you know, when Greg's talking about those five lines of code, this is one of the ones. five lines of code, this is one of the ones. So the idea is that we're going to use this pack in order to power our actual information. So you can see we just grab the pack from the hub and then we are going to store it in this directory. Now we have to do some data pre-processing. So we're going to use a very specific tool, which is going to be this PDF to HTML EX. If we look at the actual link here, it's actually a very good tool for this exact purpose. Very good at converting the actual information we want into the desired format. So how do we use it? Well, we just have to grab the package, and then we install the package, and then we can run the package as a subprocess in our Jupyter notebook. So we're going to convert this quarterly NVIDIA PDF, which you can see here, and we're going to convert that into an HTML file. And that's going to be what we use with our actual index and query engine. And all we need to do to set that up is actually call the embedded tables on structure retriever pack on that HTML and provided a path to save our nodes that we've processed. And that is it. That's the whole process to begin querying these structured, you know, documents. So let's at this point, go back to Greg, who will talk us through the next section. Yeah. Yeah, yeah, Chris. So, so cool to see exactly how easy that was. And, you know, just in case you're not tracking here completely, this is the embedded table functionality. And this is the process that as soon as we get that PDF into the right form, we're going to go ahead and be able to take that unstructured and structured data, do that structured table summarization, and connect that to hierarchical node references to do recursive retrieval. That connection to recursive retrieval is what we have to build out next. And that rest of our RAG process is what we want to focus on building out next. But the data piece, once we get it ready to rock and roll, you saw how easy it was. And so we really hope to see a lot of people start building, shipping, and sharing some amazing things with this new data processing. Because the rest of this is going to be pretty straightforward. If you've built rag systems with Thelma index before the models we're using today are models that you're going to be comfortable with. If you're building these tools at all today, open AI is GPT for open as Ada embeddings. Of course, you could plug in any open source embedding models or chat models here. And we're also going to be leveraging in terms of our tooling, we're going to be sticking with straight up Lama Index here. So we're not going to get fancy with the vector database, we're going to go ahead and create vector store using Lama Index. And we're going to go ahead and do the infrastructure. And if we wanted to do evaluation we could easily do that in llama index we're going to leave that out today but what we're going to do is we're going to essentially glue all the pieces of our rag system together with llama index we're going to put open ai models, and we're going to now leverage the data that we've chunked appropriately. We've created those nodes we need to do that hierarchical indexing that then we can retrieve in a more sophisticated way. So with that, I'm going to pass it back to Chris to show us how to do RAG for documents, including PDFs with tables the rest of the way. Chris? Yes, you bet. Okay, so I will continue to speak at a high value of our volume, sorry. And first of all, I just want to show kind of the output or the understanding of this uh specific uh tool so with the llama pack which is uh we're gonna walk through kind of step by step what that looks like underneath the hood uh we have the following uh you know uh output so we can ask questions that are simple like revenue and get responses that reflect the revenue uh we can ask questions that are simple, like revenue, and get responses that reflect the revenue. We can ask questions about revenue from sales outside of the US, which we're going to get a good answer for, because it's represented well in text. We can talk about any policy changes. You know, AI is going to change the actual policy, so that's fine. They are worried about it. That makes sense. When it comes to asking questions that are a bit more generic or hyper-specific, we might get a lot of noise, especially with the default retrieval pattern, which is going to be quite low. So we'll want to modify our LamaPack. Now, the LamaPack is just a loose abstraction of the actual code that we'll see in the next notebook. But the idea here is straightforward enough. We're going to load the path to our HTML file. We're going to parse that in an unstructured element node parser. And then we're going to build our index based off of this mapping that we can create, and then we're going to use our recursive retriever. Now, what we want to do in addition to this is perhaps modify the amount of retrieved documents or modify the LLM. We're going to do that by actually modifying the code that you see in, if we go to our files in our base.py, we can actually modify this code and then re-import it, and that's going to let us get a better understanding of, you know, this text, perhaps. So the idea is we're going to increase the amount of retrieved documents as well as we're going to move through a better quote-unquote model so that we should have better performance. So we'll just re-import the tool from our copy here, and then we will run it again. We can ask the same questions about policy changes. We get a better response because it's GPT-4 turbo. We can also ask questions that are quite specific to only table information. I mean, one of the things that we want to be able to confirm is that we can ask questions about only tabular data and receive an appropriate response. And indeed, we can ask the question about the research and development expenses and percentages of net revenue. And the only place that you can find this information is in the actual table. And it gives us the correct answer for both periods of time. So this is showcasing the ability to operate in that tabular only format. And that's the big idea. in that tabular only format and that's the the big idea. So how did this actually work? Well it's pretty straightforward and we mostly saw it in the other notebook. So we're going to first just again convert our PDF to HTML and then we're going to load it in using the flat reader from llama index. We're going to parse the nodes with our unstructured node parser this is uh using local inference so it's not going to the api we are using the raw nodes to get nodes from documents and then we're creating a map in this next step. Then we are going to go ahead and we are going to grab the actual vector index store, which is going to be constructed of those base nodes. Our retriever, which we're going to again increase the top k retrieved documents past the default one so we can collect more information which can be used to provide a better response. And then the actual you know query engine that we're going to use is this vector query engine which again we're going to use that similarity top k equals to three. We're going to use the recursive retriever from llama index in order to do this a little bit better. And then we are going to provide the mapping to our nodes as well as our vector retriever. And that's going to essentially be the whole process. From here, we can move on to asking questions about the PDF in the expected format. But the big idea here, and if we'll go back to the other notebook, the big idea here is a lot of this comes down to the efficacy or quality of this processing process, so this conversion process. Now, again, we tested with the OCR implementations and found them to be a little bit underwhelming. And this was the best format that we could come up with, uh, in terms of actually being able to answer questions about that data that's represented in our, uh, note or in our PDFs. That's not just, uh, you know, raw text, but that actually has some kind of structure or structured representation. but that actually has some kind of structure or structured representation. If you wanted to move to some kind of process that examined graphics or charts, you would want to move to a process that involved some amount of multimodal language model, but for the actual efficacy, this is the strongest solution that we found through the Lama Index tooling. And it does for the most part what it says. I think with a few adjustments at least, you know, actually modifying the model we're using, as well as increasing the amount of retrieved documents, we're able to get really good responses about the text representations of information as well as the tabular only representations of information where we're actually leveraging that structure in order to understand more about our data. And that is how you do RAG with unstructured and llama index on a complex PDF. That's it. Very cool, Chris. So, you know, it looks like that's a wrap for today. It looks like these HTM and HTML files, they work out of the box. PDFs take a little bit of massaging right now, but Llama Index and Unstructured do make it easier than ever to do RAG on complex PDFs and PDFs with tables. So, you know, get your hands on these tools, start building with them today, start making an impact at your company or in your startup or whatever it is you're trying to build next. These are certain to impress. And many more data types, of course, exist. Images, diagrams, all of this stuff is going to continue to roll out on the market. You'll be able to see us sort of testing each step of the way, but there's no reason to not try to start building with these tools today if this is a problem for your customers, for your stakeholders in your organization. And so with that, we're going to go ahead and move to questions. And I'd like to invite Chris back up on the stage to join me for the Q&A period. Chris. What's up? So, man, it did what it said. For the most part, yeah. I mean, we have to take this kind of loop-de-loose step of conversion. I see a lot of questions. I just want to address a few of them in the chat before we move to the Slido. So why are we not using the unstructured partition PDF is the big question, right? So because of the way that that's set up, it's actually not compatible currently with the tool stack that is existing in Lama Index. Even in the Lama Index demos that reference PDFs, they're actually just using HTM files. I think we're waiting on unstructured to get a bit more of a robust tool before moving on to actually using their PDF partition technology. And as for all the questions about privacy or the API, the API to unstructured is not being used. It's all being done locally. But the actual LLM is OpenAI and the embeddings are also OpenAI. So you would need to use locally hosted solutions to keep everything private, you could do so by again modifying that base.py file we saw to run your local pipeline. But outside of that, there's nothing else other than the OpenAI components that are going to be leveraged through an API or an external API, sorry, in this tool. All right, all right. So it looks like the first question that we have in the Slido. So everybody go ahead and please upvote your favorite questions in Slido. Is there a way to do hierarchical chunking on PDFs, such as document, section headings, subsection headings, paragraph chunks, this kind of thing? Yeah, definitely. I mean, it's a more engineer intensive thing to do. So it's not going to be like an out of the box tool, which is what we were examining today. You will have to build that and build those kind of like node graphs in terms of their hierarchy. But Lama Index does give you the tools to be able to do this. It just does take some work. So creating that metadata, creating those sections as you're parsing through your PDF files, and there you go. All right. Now, how important is the recursive retrieval? It seems a bit unclear, Islam asks. Can you sort of give your take on this, Chris? It's not super useful, but it is useful. Recursive retrieval is good because it lets us peer into deeper components of our node graph. So it would not work as well without it, especially the way that unstructured is creating these entities, right? So we're leveraging that unstructured node parser to build a fairly, not like super complex, but fairly complex representation of our data. And so we're going to be, the recursive retrieval is a good way to let us kind of dig deep into specific sections of our document as they have been parsed by our unstructured tool. Islam also asks, how much of an improvement is there using the HTML compared to directly parsing the notes from PDF? Yeah, like if you could use directly from PDF in an acceptable fashion right now that really preserved that structured information from your tables, I would say there would be information from your tables, I would say, you know, there would be not so much improvement. But right now, that doesn't work so good. So there's a huge improvement in going to the HTML representation first, and then, you know, building your index. Yeah. And that was one of the key things that we found as we investigated this tool, this time around is, you know know that that's an opportunity to reduce some friction potentially for you open source contributors out there you know if you want to jump in and help out with this i think that that's a really really nice piece to try to maybe bite off and start contributing to one of these projects on all right Then what are the differences, Chris, between Llama Index and Langchain, like the Retrieval QA chain? How do you look at these tools? And when should we pick up one or the other? Yeah, I mean, they're great tools. They're both great tools. Yeah, I mean, they're great tools. They're both great tools. You know, they're going to be able to do a lot of the same things, especially because they're both constantly implementing new and awesome things. And I mean, what it comes down to is just your preference and your comfortability with the APIs. They're all going to go ahead and eventually leverage something that's similar on the back end. We're constructing these representations. Out of the box, though, I would say Lama Index does take a lot of care with their concept of nodes and building these node graphs, I would say like Lama index for these more complex representation tasks is probably going to give you a smoother out of the box experience with less engineering hours than might require. Nice, nice, nice. Yeah. And, and, you know, this is a question that everybody wants to discuss all the time. We love sort of discussing these things within our community. If this is something you're interested in keeping up to date on, as the space rapidly changes, there's lots of things that one week, one tool might be slightly better at than another tool, and the next week we see it equalized again. So really, really interesting time to be in this space and to be asking questions like this. So Martin asks, do you believe that combining knowledge graphs with vector database retrieval provides substantial improvements, especially for big picture questions? Yeah. Yes. Knowledge graphs are a very rich way to store information. And with tools to navigate or traverse those graphs, it is, yeah, they are very good at those things. You know, if we can represent our information in a more information-rich way, then, you know, those KGs are going to be hugely advantageous for us. Definitely an interesting space to continue to look at. So tactical question, can this be used with outputs generated from AWS text track, text track table extraction? Yeah, I mean, if you can put it into a standard file format or any kind of, you know, usual representation, then yes, it can be used with this. If you have some kind of, you know, alternative method of converting the PDF to an HTML file or an HTM file that will also work fantastically. I saw a lot of people were talking about using Adobe's API, great solution. We didn't want to use an external API today, but there's a lot of really cool external, you know, managed PDF2X tools that exist out there. Next up, how do we get amazing PDF parsing without having to call unstructured.io's API? Their API, I think you already covered this. And it is ideal to do all the work locally. Can you be super clear about exactly what you did today? Yeah, I mean, it's just running locally. For the example that we went through today, that's the solution. You can run it through your API, but you can also do local inference. There's like a local inference package that you can import and leverage to do that without hitting an API. Now, this is run locally. And the next question is, would this solution actually work in a scalable cloud infrastructure? Yeah, I mean, it depends on what you mean. Yes, of course, it will work. It will scale like every other RAG application. It really comes down to what the different components you're using are, how you're leveraging them. But, I mean, this doesn't have any inherent scaling issues that are unique to it. So in the same way that it's difficult to scale complex RAG applications, generally, there's no new or not new challenges for this task. Yep, yep, yep. And so another couple of sort of maybe rapid fire questions here about how we can augment these systems. One, can this work with Lama 2, other open source models, or just OpenAI's GPT? It can work with everything. The model is agnostic to the rest of the stack. The model receives text information, though please do be sure if you're using an OpenAI functions query engine or anything like that, it will not be compatible. Query Engine or anything like that, it will not be compatible. And then on top of that, the performance is not going to be guaranteed to be at the level of say GPT-4 Turbo. If you're using like a 7B model, you know, with fine tuning, maybe you can get them up to par, but otherwise it's going to be a bit tough. Yep. Yep. Absolutely. And, you know, it reminds me, Yep, yep, absolutely. And, you know, it reminds me, if you're building these things, build with GPT, build with synthetic data, if your data is behind somewhere and you can't get it out in public, and then go ahead and check that baseline by changing to an open source model before you put your private data into that. We've got a couple of questions sort of jumping up in the slide out here. Anonymous, do you dump parts of the table for the LLM as context or the whole table? Any special format? We dump parts of the table as context with the unstructured tool. Yeah. Nice. Okay. So it looks like we've got some other movement here. Have you tried to implement RAG conversational memory and not just question answering? Yes. So you can basically just keep track of all of the text that your LLM has sent and your queries as a kind of conversational history object. All of the popular frameworks have like a specific memory object for this that you can use to maintain kind of a thread throughout the conversation. Nice. Okay. So if you were going to work with CSVs here and Excel files instead of tables within other files, would you use this tool or would you use a different approach? How would you go about that with CSVs? I would definitely use another approach. CSVs are already structured. They're structured in a understood and known way. So we would not need to use a tool like unstructured. We could use CSV loaders and CSV node parsers to achieve the same result. Or we could even take it a step further and do some kind of, you know, CSV, you know, tool that's going to let us actually calculate, you know, aggregate statistics about our CSVs or more. So, it's, you know, when we have these known file structured file types, we want to stick with using tools that are very performant with them. And we don't need to use generic solutions like what unstructured is doing. And we don't need to use generic solutions like what unstructured is doing. Okay. Okay. Ali asks, and maybe this is a point of clarity that we could add for everybody. Would it be better to convert the tables into a natural language description, like table to text explanation and create embeddings for them? Yeah. It depends on what you mean by better. But I mean, that's a great thing to do. I mean, the small part, we're doing parts of that by parsing through a table, developing some kind of metadata about it, right? Or describing what the table is about, what information it contains, and then using that to help us with our retrieval. But I mean, for the most part, if we care about the structure of the data, then we really want to keep the structure of the data. And in order to describe all that structure, you're talking about a lot of tokens, right? So I would say like in terms of like out of the box performance, it's going to work quicker, but it's not going to work as well. If you just say parse a table into some unit of text. Yeah. My sort of intuition was we're kind of doing that already in this tool. And then, you know, I was thinking of the example we had recently where we're looking at service manuals and trying to look up part numbers. And I'm just imagining sort of a part description part number table converted to text and I'm thinking is that really the best way to solve this problem so I think again it comes down to sort of it depends and yeah conserving that structure of the data might be very important lax asks any tips on scaling would say 10,000 documents how about a million documents? What do you think, Chris? Many, many, many, many documents. Yeah, I mean, the more documents you have, the more expensive it gets, the harder it is to retrieve the specifically desired information. And so the more complex your system needs to be in order to ensure you are actually retrieving what you need versus just the first thing that you find. You know, things like graphs can help us to do this. Things like really appropriate metadata application, filtering, and so on are also going to be key in helping us navigate these huge document spaces. But ultimately, it's just very expensive. It is, yeah, yeah. It's just very expensive. It is. Yeah. Yeah. It's just very expensive. And that's what it comes down to. Yeah. And, you know, in order to sort of try to decrease the cost as much as possible, in order to decide how to do that metadata filtering, et cetera, you really can start to dial down, okay, what are the questions I want to be able to ask? And then say, do I really need the 10,000 documents for that? If so, which parts of them do I need exactly? And you can start to, if you chunk it down into specific tasks you need to be able to do, which AKA specific questions you need to be able to answer, things you need to be able to get context for, that's going to allow you to do it in as efficient a manner as possible. Can you explain the node mapping process that you did? Basically, we're just constructing a kind of hierarchical node structure that we can navigate based on the kind of thing this is in a document. And that's about it. Okay. Mapping based on the kind of thing this is in the document. What do you mean by that exactly? The data type? Yeah, or what unstructured has classified this as being part of. So like, you know, spans versus paragraphs versus X versus Y. We want to understand the kind of text this is, the kind of, you know, document component that it is, so that we can better understand how it fits in with the rest of the data, what things are relevant to it or not relevant to it. Is it relevant at all to us? Right. So like a JavaScript header, we don't really need that stuff. Right. So all of this information is parsed out for us in a way that makes sense. But we need to map those nodes in order to understand what unstructured has done for us. Okay. Okay. Right. So as you start to get a little bit more into it, you may need to drill down into what some of these tools are doing exactly to decide on how things are being parsed. But yeah, I mean, if you're just applying it, let's see, go ahead. Can we get some out of the box results that are really good? So we got a question about PyPDF here. What do you think about PyPDF, Chris? Can it handle the PDF the same way as unstructured does? All of the PyPDF, you know, PyMu PDF, all of the PDF extraction tools are dope. And they do what they say they do, which is extract text from PDFs. The thing that they're falling behind a bit on is, you know, getting that structure out of the PDF. And that's why we've used the process we did today. But yeah, I mean, PyPDF is great. If you just want to rip text from pretty clear PDFs, I mean, that's going to be something that you can do well. To be clear, though, this is PDF file format, not scanned PDFs or images of PDFs. That's not going to fly. Yeah. Okay. So Charles is asking us to transition to talk a little bit about evaluating the quality slash, you know, how good this thing's doing. Is it hallucinating? Is it returning the right stuff? Am I getting good generations? How do you think about evaluating the quality of this particular system that we built today, Chris? Yeah, I mean, Lama Index has a lot of great out-of-the-box tools that you can use to evaluate. Not trying to just be like the Lama Index, you know, event today, but they do. They really do. Things like RAGIS are great tools to understand how your RAG pipelines are performing. to understand how your rag pipelines are performing. You know, it is, it's very, it's a very difficult problem because there's not like a perfect answer, especially for a complex task like this, where we want to be able to say like, okay, so what's our retrieval from the text of the document versus the tables of the document? Do we understand the table structure well enough? These kinds of metrics are harder to understand or harder to get information about and there's always gonna be this loss, right? If you do a conversion process from PDF there's going to be a loss. If you use OCR there's gonna be some loss, right? Like it's not gonna be perfect. And so we're always going to lose information when we're going from the PDF file format to a non-PDF file format, or even just extracting information directly from the PDF itself. And so we want to be building systems that are sensitive to that kind of loss. You know, things like do we lose structural information, I think is key. And using tools like RAGIS, LAMA Index, LM as a judge frameworks is going to be a great way to get that information and understand, you know, how good or how bad that system is. And more importantly, will give you directional signal when you make changes to see if you're getting better or worse. All right. I think the last question that we'll end on today, and feel free to riff on this for a little while, is are retrieval augmented generation and document question answering the same exactly, Chris? Is this a one-to-one that we should be thinking about here? No, kind of. So, I mean, it's kind of like one is a subset of the other one, right? So, document question answering is a subset of retrieval augmented generation, or at least we can think of it that way, right? We have documents. We are able to retrieve them and then make generations based on them. So this kind of idea of document answering is almost inherent, almost inherent in that RAG pipeline. It doesn't have to be, though. RAG can be used in ways that are exclusive to question answering. So I would say like they're, they're oftentimes used synonymously, but I would, I would more think of the, the retrieval augmented question answering or RACA as we like to call it is, is a subset of what actual RAG is and is doing. Yeah, I like how you brought that up. We teach retrieval augmented question answering as sort of the RAG 101 in our LLM operations course. And the reason is, is because this sort of subset idea, if you're gonna get started with something and you're getting into this field, it's very, very useful for you to think about RAG as simply a question answering machine. Although doing retrieval and augmenting generations is a more general process and one that we can use to dial in other things. If you go back to sort of classic NLP, question answering is just one task that we can do with a generative model. So you kind of get back to basics when you start to get into, is it just question answering? Well, no, but it's a useful lens through which to begin learning it. Well, Chris, I think we're going to go ahead and wrap up. That was an awesome Q&A. Thank you, Chris. Appreciate the demo today and all of your insights for Q&A. Thank you, everybody, for joining us today and for your participation. Thank you to Llama Index and to Unstructured for supporting the event and for letting us take a good hard look at those latest and greatest tools. This brings us to the end of today's event. And if it resonated with you and you'd love to build more cool stuff with us, Definitely check out our Discord, our community calendar, and our upcoming courses on LLM engineering and LLM operations from deep within large language models to actually deploying them in frameworks like Langchain, Lama Index, and cloud computing service providers at scale. We sort of cover as much as we can of the space to try to keep you up to date all the time. Join us on YouTube Live every week where we're investigating a latest and greatest tool, or we're diving deep into a classic principle that everybody should understand. If you have feedback on today, we'd love for you to share it with us. Please fill out the feedback form in the YouTube live chat. But until next time, keep building, shipping, and sharing, and we will most certainly be doing the same. Thank you so much, everybody. We'll see you all soon. | RAG for Complex PDFs | 3,631 | AI Makerspace | 20231207 | GPT-4 Summary: Discover the Future of AI Document Handling: Unveiling LlamaIndex and Unstructured.io's Breakthrough in RAG Systems! This must-watch event is perfect for AI engineers, business leaders, and LLM practitioners eager to explore cutting-edge tools for processing complex PDFs with tables, graphs, and diagrams. Learn how to integrate tabular and semantic data into RAG pipelines and enhance QA on intricate documents. Don't miss out on live code demos, insightful slides, and the chance to see if these innovative solutions live up to the hype. Join us to transform your approach to handling complex documents with AI!
Event page: https://lu.ma/RAG4PDF
Have a question for a speaker? Drop them here:
https://app.sli.do/event/hGnryTXK1u6XMhnMG5ANKc
Speakers:
Dr. Greg Loughnane, Founder & CEO AI Makerspace.
https://www.linkedin.com/in/greglough...
Chris Alexiuk, CTO AI Makerspace.
https://www.linkedin.com/in/csalexiuk/
Join our community to start building, shipping, and sharing with us today!
https://discord.gg/RzhvYvAwzA
Apply for one of our AI Engineering Courses today!
https://www.aimakerspace.io/cohorts
How'd we do? Share your feedback and suggestions for future events.
https://forms.gle/WyH98AYHc6HxgL7PA | 2024-09-11T13:16:17.533963 |
https://www.youtube.com/watch?v=6imZsF8RKt4 | . Wiz. Today we got Llama Index Workflows. This is the new framework for agents from Llama Index. Is that right? That's right. Yeah. Okay. It's just for agents is that right uh no i i don't think so no okay it's actually for it's actually maybe for like all different kinds of workflows including agentic workflows right that's the ticket that's. That's kind of where we're at. So if you knew agents in Lama Index before, your knowledge is still good. We can apply it here today, can't we? That's absolutely correct, yes. Okay. So we're adding another layer of abstraction to allow for some additional complexities in the applications that we build, in the applications that we build in the workflows that we build. There's some kind of cool stuff going on behind the scenes today. I'm kind of excited to get into it. You ready to kick this thing off, Wiz? I can't wait. Let's do it, man. All right. Here we go, everybody. I'm Dr. Greg. We've got the Wiz coming back in a little bit to show you guys and to discuss some of the details of Lama Index workflows. We are AI Makerspace. Thanks for joining us for this community event where we're going to dig into the latest and greatest from Lama Index. This is a beta tool called Workflows. And initially, to me, it appeared that this was a new way to do agents. In fact, it turns out it kind of is, but it's a little bit more than that. And so we'll get into exactly what you can expect from Workflows throughout the event today. If you have questions along the way, please use the Slido link in the chat and upvote your favorite questions. We'll do our best to get to it, to get to every single question today. Okay, let's go ahead and jump right in today, guys. Workflows with Llama Index. What is this all about? We've got some keywords that we'll hear today that we need to kind of demystify as we go. So let's align ourselves to the session. What are we going to get out of this hour that we spend together? Well, you're going to understand the core concepts and code for Lama Index workflows. This is something that you always walk away with with every AI Makerspace session. Importantly, we're gonna learn how to build an agent in Lama Index and then leverage this idea of this event-driven architecture that workflows is using behind the scenes. We're gonna talk about the differences between some of the graph-based approaches that you may be familiar with for orchestrating agentic applications versus this event-driven orchestration implementation that Lama Index has put together. And so we'll kind of overview Lama Index workflows. We'll kind of get an idea of some of the claims that are being made. We'll talk specifically about the event-driven architecture and we'll contrast it and compare it to other leading architectures available on the market today. And finally, we'll get into our build and we've got kind of a cool little extra special thing we're doing in the build today. So hang out for the build section if you're curious about what that entails. But let's kind of focus on the meat and potatoes for the first sesh of today. Let's talk about Lama Index workflows and let's talk about about Lama Index workflows and let's talk about Lama Index and what they're saying about workflows. So from the top, Lama Index is now branding themselves a framework for building context augmented Gen AI applications with LLMs. And now check this out, added, including agents and workflows. So this is kind of a quite a high level of abstraction thing that has been put into Lama Index and one that you need to know about. Now, we need to understand how Lama Index defines words like agent and workflow, so we can kind of situate ourselves. According to Lama Index, an agent is a semi-autonomous piece of software powered by an LLM that's given a task and executes a series of steps towards solving the task. Okay. Task steps. A workflow is an event-driven, step-based way to control the execution flow of an application. So we have this execution of a series of steps that's happening in an agent, and we have this step-based approach, this event-driven approach that we leverage with workflows. So we see two keywords here with workflows. And the two easy pieces that we want to keep in mind throughout the day are events and steps. All right. We want to think about our application is divided into sections called steps triggered by events. And these steps, they're not that complicated. They're literally just Python functions. They're kind of arbitrary, whatever you want functions. They literally could be anything. They could be just a single line of code, quick calculation. They could be an entire complex agent. And these steps are not just triggered by events, but they actually emit events, meaning the step can be triggered by an event and it can further trigger another event. Okay, these steps are also called components in their documentation, something to sort of keep an eye out for. So the steps are kind of this sort of fundamental piece to the puzzle. The event is the other piece. What is an event? Well, an event literally is a change of state. Or you can even get more broad about this, like sort of anything that you could notice or observe or record, anything that your application could sort of use as a trigger to do something else. So there's a lot of things that this could represent in sort of an enterprise context. Customers interacting, sensors on the manufacturing floor, etc, etc. Lots and lots of stuff. And really when we get right down to it, this sort of overlapping of event and state is sort of the place we want to really focus our understanding today. So we'll hear things from Lama Index, like by combining steps and events, we can create arbitrarily complex flows that encapsulate logic and make your application more maintainable and easier to understand. Okay, that all sounds great. Complex flows, making it easy. We love that at AI Makerspace. Okay, so these workflows are effectively then dispatching events through steps, these Python functions. And each step is sort of like one component of your system. This is this idea of steps and components are sort of the same thing. idea of steps and components are sort of the same thing. You know, you could have one that processes a query, one that talks to an LLM, one that loads data. You can have steps to do all sorts of different things. And each step is going to receive one or more events to process, and then it can also kind of get the word out to other events. Now, the whole purpose of this, just sort of zooming out for a second, the whole purpose of this is when we have something that's getting quite complex, we need a way to manage this complexity. And Gen AI applications, the ones we're building today, they're becoming ever more complex. And as a result, it's becoming ever more challenging to manage that complexity through the flow of data and the execution of different pieces of the application. And so workflows is sort of Lama Index's way of managing this, because as we know, you know, complexity can occur based on some really, really simple rules upstream. You can get a lot of complexity downstream. And this is a way that I often like to introduce agents is to think about agents as giving rise to complexity. This idea that simple rules in an agentic setup, you know, will eventually allow us to build some really, really incredible AI applications, right? So if you consider that sort of consumers and corporations are all sort of agents within an economy, or if you consider that neurons are all agents within a brain or molecules are all agents within cells. You can sort of stack simplicity to get complexity. And so the whole idea of what LAM index is doing here is it's saying, well, as we stack simplicity to get more complexity, we need programmatic design patterns that will allow us to manage this stuff better. So the question we might ask is, why are workflows the best solution for managing this complexity? And this is kind of what we want to get to the bottom of today. Are they the best solution? Well, bottom of today. Are they the best solution? Well, what Lama Index noticed is that other frameworks, including Lama Index themselves, were attempting to sort of build pipelines, to build chains, to build directed acyclic graphs or DAGs to build some of the earlier applications. And this sort of acyclic piece here, the A in DAG is quite important because it means not cyclic, no cycles, no loops. no loops. Now this is fine in a world that doesn't have agents, but in a world of agentic reasoning action patterns and loops, this is a big problem. So we need a way to do better. These DAGs create a lot of issues that Llama Index and the team over there identified specifically by working with their users. And they noticed like when I have to encode the logic directly into like graph edges, it makes it a little bit hard to grok, a little bit hard to read, a little bit hard to understand. When I'm passing data between nodes in a graph, sometimes it's not clear which parameters exactly should be passed and which shouldn't for optimal execution flow of the program. And then sort of anecdotally, LAM index users told them that DAGs just didn't really feel natural for developing these types of applications. So here we are, and we're getting this event-based pattern, as well as this sort of, quote, vanilla Python approach of workflows to resolve these problems of DAGs. And so the event-driven architectures, this idea, it's not new. It's quite old in software development, actually. If we define event-driven architecture, this is quite common in modern applications that are built on microservices. It's just a software design model that uses events to trigger actions. So we can start to see, okay, well, I'm going to use events and actions. That reminds me of the React framework with agents. This feels like it might be a decent way to solve this problem. Let's take a closer look and remind ourselves again that what is an event? Well, it's just a change of state. We're going to trigger steps with our events. We're going to take steps, and we're going to trigger and communicate with other events. Now, this is not the first iteration of new tools from Lama Index to solve this problem of complexity. In early 2024, Lama Index released query pipelines. And query pipelines were okay. were okay. They were sort of implementations of this DAG abstraction that was modified to be able to kind of deal with the Lama Index data agents. And it kind of worked, but again, Kind of worked, but again, they realized sort of fundamentally the DAG approach, it really was leaving them wanting. You know, they kind of got the ability to chain together different modules with these query pipelines in order to create more advanced workflows. in order to create more advanced workflows. But as soon as they added cycles, they said, well, issues, right? These issues we discussed. And they were like, well, what are other people doing? And they looked around the ecosystem and they looked around what's going on right now at the LLM edge. And they were like, well, instead of like using a graph, a cyclic graph, maybe you could say directed cyclic graph approach like others have picked up. One where you're sort of designing explicitly how the graph is to be traversed through nodes and edges, which, by the way, does have some key benefits. For instance, the graph traversal algorithm approach is quite intuitive. That is, if product can draw the diagram, the flow chart of boxes, then engineering can really easily build it almost like one-to-one with the drawing from product. the drawing from product. So that's a pretty powerful thing. But again, rather than going this route, Lama Index said, no, we're going to go event-driven. And in an event-driven architecture, the step subscribes to certain events and is ultimately responsible for deciding what to do based on the data it receives. You know, Wiz and I were talking about this earlier this morning, and I created this meme based on our discussion because it is quite literally that we kind of have these floating possible events that kind of exist, any of which could be next. Okay. Right now we're at our current step. We've triggered our current event and we're looking at what's going to happen next. And based on the way we set up our event-driven architecture, one or more of those events is going to be chosen next, and so on and so on. So what's kind of happening is we're getting this sort of, again, this dispatching that the workflow is doing. Back and forth through the different steps, those are Python functions. And so we have steps, we have events, we have this event-driven architecture. architecture, it's not quite the same as the loop cyclic graph-based approach. Let's just double click in on that for a minute. Of course, the elephant in the room, the alternate approach that all of you are wondering about is Landgraf versus Lama Index Workflows. Now, of course, Landgraf talks about using their tool as a stateful multi-actor application building tool. Stateful. What are we talking about? We're talking about state. We're talking about the state of affairs within our application. Something that's happening now or at a specific point in time. Landgraf is all about this idea of creating state machines and specifying them as graphs. And again, with this approach, if you use the graph-based approach, if you can draw it, you can build it, and that's pretty dope. So what are some of the benefits of an event-driven approach versus a graph-based approach and vice versa? Well, as far as we can tell, the benefits of an event-driven architecture are that you actually need to do less explicit planning of exactly what the execution flow of your program is going to be. And the event-driven idea is something that, you know, using event handlers and this kind of thing is something that most programmers with experience in industry are going to be familiar with coming into this. So you kind of have that going for it as well. Now, if you look at the graph-based approach, you might say it's a little bit easier to whiteboard. And interestingly, and we'll have to sort of stay tuned to see how this continues to unfold. Interestingly, while Landgraf provides a stateful multi-actor approach to building agents and multi-agent systems, where in a multi-agent system, you're sort of subclassing a graph within a graph. in an event-driven architecture, it's not clear that we can easily subclass other workflows within a single workflow. Although I'm sure multi-agents and workflows is something that we can expect to see soon. So with this, I want to bring Wiz up to the stage. I want to just chat a little bit for just a moment. I mean, one of the things, Wiz, that I believe is from the docs, and this is this one of the classic from the Lama Index docs, like workflows make async a first class citizen. This first class citizening, I love this language from Lama Index on different aspects of there. And it sort of brings up this idea, I think this is where we got to with it, where it's kind of like, it's a quote, quote, sort of looser implementation of Landgraf, you know, and whereas it's more implicit, this implicit nature of event-driven architecture is really what comes through more than anything else. What do you have to sort of add to this idea of, you know, it's a looser way, it's more implicit, making async more important. I mean, how is it that you view exactly the difference between this event-driven architecture and a Landgraf style, let's call it a cyclic graph architecture. Yeah, I mean, basically, the way that I would think about it is the, we don't have to connect nodes and edges by connecting nodes and edges. So the kind of when you're building a land graph graph, there's a literal process by which you connect node to edge. Now that's something that still needs to occur, obviously, right behind the scenes. But we don't have to explicitly define it as if we don't have to write the code out, right, in order to do that. So that connection is abstracted from us. We're still effectively connecting nodes to edges by providing what state matters to what, or sorry, what event matters to which step. And then, you know, we dictate which event each step emits, right? So we're still drawing these lines, but we don't have to like call, connect this to that. And instead of having this idea of like a conditional edge being a kind of weird semi-node that floats in between two other nodes, we can just have our step emit different events based on the results that we see in that step, right? So it's this idea of we're moving away from needing to draw the graph ourselves and letting the letting Lama index draw that graph for us. Yeah, yeah, yeah. So moving away from drawing the graph ourselves, I like this. And sort of, I think the image in my mind is kind of that regardless of which implementation we use, after we run the program, we can draw a graph of exactly how things went, right? And so in either case, you have a start, you have a stop, you have stuff that triggered things in between effectively different states. Right. And while you may not be able to draw it ahead of time with an event driven approach, you can still draw it after the fact. So I wonder if you can comment on also kind of the familiarity for software engineers and programmers coming in. Event-driven architectures are something that are quite common, right? Not necessarily in machine learning, but in software engineering. Is that right? Oh, yeah. I mean, like this is kind of the listeners and events. I mean, if you're from that world, this is going to be very natural to you, right? Now, the exact implementation and behind the scenes what's happening, we can kind of quibble, okay, about how truly or purely event-driven this is. But it is that same framework of thought, right, where we have each of these steps is listening for a specific event. And when it hears the event, it will be triggered and it will do something and then it will emit some other event, right? So it's the same. It's the thought process is there. And that's going to really help people who are used to that thought pattern, build applications that extend beyond just a graph. Okay. Okay. And we got a couple sort of interesting questions just to maybe click in a little deeper here from the chat here. Giancarlo asks, so does this happen inside the same agent? In other words, can a single event trigger multiple steps at once? All of the events are asynchronous, which means that they're not blocking. So theoretically, yes. You could have a event that triggers multiple different simultaneously running steps. Right. And then you can actually also listen for multiple events and wait until both events are ready before triggering a step. Right. So say you wanted to trigger three events at once and we want it to wait for each of those three to be done before moving to the next step. You can do that. Yeah, yeah, yeah. You have some freedom. And I love the Willy Wonka questions here. Let's go to the top one here. Are we defining a graph indirectly by defining events and what happens when that event is triggered? In other words, like, you know, the way I think about this is if I'm a product guy and I want to understand what's going to happen and how my user is going to be impacted by this and what they're going to, what sort of payload they'll be delivered. I do kind of want to draw this out and I kind of want to understand how things are going to happen. And then, and then I would maybe give that to my engineer. Let's say you, Wiz, I would give it to you. And if you were implementing an event-driven approach, it might end up sort of looking like the graph that I drew in the first place after we run it, but not upon your implementation. So that's sort of the way I'm thinking about it. Does that square with the way you kind of consider this? Yeah, it is absolutely just making a graph. Like, so now it's good that it's not literally making a graph, but behind the scenes, yes, of course, we can represent this as a graph. But I mean, I think the key intuition here is most applications could be represented as a graph, right? Because it makes sense. We have functions that do things and then return results to other things. And those things take that result and they do things. And we have this kind of meta layer of state and we have this kind of meta layer of events now. But yes, there's this kind of, there's this creation of a graph that's just necessary for our application to be complete, right? We have to be able, there can't be an unreachable node or else it's you, or else why do we have it? And there can't be a node that we can't go anywhere from except for the final node or the stop node, because where do we go once we hit it, right? Like the, the idea is that it is kind of a necessity that these things wind up having to look like graphs or we w w more precisely, we, we should be able to represent them as graphs, which is always useful, right? Uh, but it is, we have some extra layers in the fabric of the graph, you know, like it is still a graph, but we're not creating it as a graph. Someone said like PubSub. Yeah, I mean, kind of right. You can imagine this being a PubSub situation where you have, you know, your agent existing in a more like ready state. And then this is the same kind of framework that PubSub is built off of, except now we have these LLMs involved. Yeah, yeah, yeah. Okay. Okay. Okay. All right. I love this. And yeah, I mean, that's kind of this idea of like, if you can create a flow chart, you should, because everybody loves flow charts. And like, wow, look at that visualization of the application. Yes, that's always there sort of in the background, whether you call it a graph or a flow chart. But yeah, okay. So I think we've got it. I think we got event-driven versus graph-based. The nuances between. Wiz, thank you so much. It's time to get into today's build everybody so let me introduce today's agent build for you just kind of recalling a couple things that we've already talked about llama index framework for context augmented geni applications, agents and workflows. Remember an agent given a task, here's some steps. Now there's some constructs associated with core Lama index that we're not going to go into. You can check out our data agents event we did with Lama index a couple of months ago. It's still completely valid and super useful. So if you want to know more about exactly what's going on with the agent behind the scenes, check that out. Workflow, event-driven, step-based. So Llama Index, all about context augmentation. Context augmentation, we can kind of think of this today with the in-context learning design pattern that we're leveraging in our application stack of today. We can kind of consider this as, okay, we're talking about RAG, right? And RAG, of course, is just dense vector retrieval with some in-context learning. That is, we are returning reference material from our data. We are augmenting the prompt, the context, putting everything in the context window, and we're getting better answers. Okay, so we're going to do something a little bit cool today. We're going to take RAG to the next level. Classic RAG, you ask a question, we find similar stuff in our data. We set up a prompt template. We return the similar stuff we find to the context window. So this dense vector retrieval is coupled with in-context learning to give us retrieval and prompt augmentation. We are almost done with RAG. We still need to do generation with our chat model and get our better answer. So what we're going to do today is we're going to take this whole RAG pipeline, and we're going to just shrink it down because we're going to build something slightly more complex. We're going to build an agentic RAG setup. So we're going to build an agent that can leverage RAG, but can also leverage some other things too. So we'll ask our question. Question will go to our agent. Our agent will decide, should I go to the RAG pipeline or should I go to something else? And we're actually going to give it access to two tools, the RAG pipeline or a Tavoli search capability. search capability. And based on whichever tool gets selected, there'll be some result, some data that we get out of that, that we can observe, and we can push it back to our agent to decide, hey, do I know the answer yet? Or do I need to go find some additional information? And we've got another layer that we're adding to this today, because we know you guys like the additional complexity. And everybody, meet your new friend, Crag. So we're going to slide in some corrective rag here, everybody. And this is for when retrieval goes wrong. Okay. In other words, just as Rag is trying to eliminate hallucinations through reference material, sometimes the reference material isn't the best. So for instance, when I do retrieval in sort of a low quality way, When I do retrieval in sort of a low quality way, question, who is the screenwriter for Death of Batman? I'm going to go find Batman, 1989, Murder of Bruce Wayne's, when Ham's script, it seems like Ham probably wrote this screenplay, right? Well, I see how you made that jump, LLM, but no, not right. So we're actually going to develop a little workflow with Crag. So we're going to ingest some data, and this is actually the visualized graph representation of the workflow. We're going to do retrieval. We're going to evaluate relevance of our retrieval. And then we're going to do an extraction step. We're going to transform the question, our initial query. And we're going to show how we can also leverage tably search here before we get our final response. And the application that we're going to do, Crag, agentic Crag on today, is we're going to do, we're going to take some docs we've been looking at in class and this week at AI Makerspace, EU versus the AI. These are the, quote, harmonized rules on AI and amending regulations. So, you know, I had sort of questions about this, like EU versus the US, like what are people saying here exactly? Like how many parameters are too many, for instance, or like what's really going on here? And is it the IP stuff? Is that even a thing here? In fact, if you're interested at 12 p.m. PT today, jump into Discord with Akash Shetty from our community who will be discussing this very topic. So we're going to use the data from the EU versus AI legal docs, and we're going to leverage some classic models, Text3 Embedding Small and GPT-40 Mini, to build our agentic Crag workflow. And you know who's going to do that for us. The LLM Wizard himself. Wiz, over to you. Crag time. Yes, and it's time for Crag. Okay, so we've got our notebook. I'm going to zoom in so you can see the notebook. Classic. First things first. We're going to go through this. It's a lot of stuff that we have to do. There's code, you know what I mean? But it's straightforward. It's simple. It's just, you know, getting used to new frameworks, classic. We're going to cover events in steps mostly, and then tying them together into our workflow. We are going to use the corrective RAG workflow based on the corrective RAG paper. Greg's done a great job of highlighting it. We're going to go a little bit deeper so we can think about how there's events and steps in there. So first things first, we got to do a bunch of prep. We are going to be, you know, as Greg said, you know, async is a first-class citizen in workflows. So because we're in a currently running event loop, we have to import the nest async IO so that we can async in our async classic. Next thing, we're going to grab some Lama index dependencies. We need, of course, Lama index. We we want llama index tools to philly research for our to philly that's going to be our external uh search tool and then we're going to get our uh our classic llama index utils workflow which is going to help us make a pretty graph then we're going to set up our open ai keys we're going to use open ai's models today as our llm and then our to philly search search key for our external search tool. The next thing we're going to do is we're going to prepare that data. So we have this EU Act that just was released. It kind of goes through all kinds of, you know, it's a big document filled with legalese about AI and how the EU plans to regulate AI. So we're going to use that. We've just provided an HTML file in our data repository that was downloaded from the regulation. We use the English version. So this is the source document we're going to be looking at. You can use whatever document you'd like as you see so steps and events so what's a step and what's an event step is a unit of work or task in a workflow they're just gonna be python functions decorated by a step decorator for the most part each step though is associated with you know one or more events as input and it emits as an output an event, right? Technically, a step can event different events. So, it doesn't have to, there's not one event that the step must emit. It can emit, you know, one or another event. So, we talked about this before. The basic idea here is that, you know, our event is our path in, and then we send an event out, and then that's handled through the workflow. For each step, though, all it knows about is the event in any global context, right? So just like Landgraf, every step is an island. We feed it what it needs at time of running. Okay, steps can be extended to have multiple workers in a workflow. So this is actually pretty cool, right? Let's say we have a step that needs to be executed three times. Instead of doing that sequentially, we can just have three workers, and all three of those steps can happen at once. Then, of course, we have our modify, we have global context. So steps can modify the shared global context, aka state, as required as well. The important part of this is, as opposed to having like a state object that is passed around, we kind of have this state fabric that exists in our workflow, and we can pull things from it. Next, we have our events. And of course, events are very straightforwardly, they're just little data structures, and they pass information. they're just little data structures and they pass information right so when we when we send an event out it has a signature other other steps are gonna be waiting or listening for that event and it's gonna contain some data that's relevant for the running of its associated step events are pydantic models if you're used to Python this is this no, you're no stranger to this, right? Pydantic is a good way to get validation for quote unquote free. And then there are two special events, which are, there's more special events, but these are the two most important. We have our start event. Start event is the entry point into the workflow. So it is the, it's like the first event, right? And we'll see how this works when we write the code out. But the idea is that start event can refer to potentially more than one step. And then our stop event. This is, I think, the key difference between this and Landgraf. The stop event can occur just whenever. And then it's gonna output whatever, right? Whatever it's got access to. So the stop event is interesting in the sense that we can just kind of stop when we feel like it, and then we can output whatever is relevant to output. So that's a lot of, you know, big info dump, of course. But we're gonna kind of keep this in mind as we talk about Craig. So Craig has this lovely picture. We're not going to spend too much time looking at the picture, but the basic idea is that we do retrieval, and then we evaluate our retrieved documents, and we keep only the documents that our evaluation deems are relevant to our query. deems are relevant to our query. Then if we have any documents that were retrieved that are deemed not relevant, we're going to move to our searching tool, which is going to be our external search tool. In this case, it's going to be Tavillion. So what does this mean? Let's say we have five contexts, right? And we determine that four of them are relevant, but one is not. What we're going to do is for that irrelevant context, we're going to rework our query so that it's useful for our tool. And then we're gonna ask the question to our external search tool. And then we're gonna use that to augment our prompt. So the basic idea is if all the contexts are relevant, it's just normal rag, right? We just go to the next step. However, if we have one or more irrelevant contexts, they are omitted from being provided as context to our final LLM call. And then they're supplemented by this web search or this external search. So, you know, this is the basic idea. Let's think about it, you know, from the steps, events, mindset, right? So, okay, so we have ingesting and processing data, right? We must, there's a retriever, so we must have to take in some documents and create an index. So we're going to imagine that that's a step, right? And its function is to ingest, process, and create an index to be used in following steps. The input event to this, so the event that's going to trigger it, is going to be the start event. And the output events will be the stop event, right? Because we're not doing anything once we've created the index. We just need to create the index. And then it's going to return the created index. Okay, cool. Then we're going to have a second step, which is preparing all of our pipelines for use. It's going to, again, start at the start event, and then it's going to move to the prep event, or it's going to emit the prep event, saying, hey, preparation is done. Whatever steps are waiting for preparation to be done, here's the event that signifies we're done. The state of that output event doesn't have to be anything, right? Because we're actually going to store all of this pipelines and everything in our global context. And the code's going to highlight this a little bit better. And then for step three, it's easy. We have to retrieve, And then for step three, it's easy. We have to retrieve, right? So we have a user query. We must now do the retrieval process to get the list of contexts to see if they're relevant or not. So the retrieve context step is going to be waiting for the prep event, which means that everything's prepared. And then it's going to emit the retrieve event, right? So it's gonna say, hey, retrieval is done. And you'll notice a pattern here, that the steps are going to indicate doneness or completeness, rather than the steps are going to indicate what step is next, right? So this is just convention. You can whip it up however you want. But you'll notice that in this example, we are saying when the prep event exists, that means the prep event is complete. Now we can move to the next step. And when we complete this step, we're going to omit the retrieve event to indicate that the retrieve step is done. Right? And what does this have to have in its state? Well, just a list of contexts. Then we're going to have the same thing for evaluate context step, right? It's going to need to evaluate those contexts and then assign whether or not they are relevant or irrelevant. Same thing here. We're going to start this. We're going to kick this off when the retrieve event is emitted. And then we're going to emit, once this is complete, the relevance eval event. And as a state, we need a list of contexts, which you already have, right? So we're going to see how we handle that in the code, and then the results of the evaluation. For the next piece, we're going to extract all relevant context. So we're only going to keep context that's relevant. This is what Craig's doing. And again, that's going to kick off when the relevance eval event is emitted by another step. And then it's going to emit the text extract event when complete. This continues to transform the query. This is actually gonna transform and use the external search tool. We're gonna need to keep the relevant texts and the external search documents now. And it's gonna kick off during the text extract event. And then it's going to say, I'm done. You know, we're done this. And then the next step, step boom final query okay so this all sounds great how do we do it in code it's always much easier in code we create some events creating an event is as easy as subclassing event and then include including any relevant state so we talked about the prep event didn't need state so we just pass out easy and then the retrieval or the retrieve event needed to have a list of contexts and so that's what we have here right and you'll notice that the events are kind of like mini state right we can think about it that way they're they're mini state that's associated with a particular step and the same thing for our relevance eval or text extract event and our query event. Easy peasy, lemon squeezy. Let's go. Next, we're going to need some prompt templates. As always, prompt templates. This is going to be our evaluation prompt template. So remember that we said for crag, we want to evaluate if the context is relevant or not. Of course, we're getting an LLM to do that. And this is the prompt that does it. And then we have to transform our query, right, to be compatible with our search, our external search tool. And so that's what this prompt is doing as well. Now we're going to set up steps. This is kind of like, this is the big part. This is the lots of code part. We start by creating a workflow workflow class that some classes workflow we're going to call it corrective rag workflow uh and then we're going to take a pit stop at this context thing context term overloading i know but context is the global context or the state of the workflow this is is accessible by all steps, right? So this is not just accessible through events. This is like, this is a layer that exists for all steps. Very useful as we'll see. So we have our ingest pipeline. You'll notice that that has a start event. And you'll notice that all we do is we create a ingestion pipeline, aka we need to create a vector index. So we create a vector index and we return the result as index. So for the second step, we need to look at going through a preparation step to set up all of our pipelines. Now, the keen-eyed among you are going to notice, hang on a second, Chris, right? We have start event for this step and this step. How do we know, you know, which one? Well, we don't. We're going to do both. However, if we don't provide documents, we just return none. None is not stop. Okay, so this is, I know we're getting into the weeds here, but none isn't stop. So if we get, if we have a start event, we will fire this node off. Absolutely. Right. We are saying this step. However, if we don't provide any documents, we're just going to return none. And then whatever else is happening is going to keep happening until a stop event is raised, right? So, and it's the same thing for our prepare for retrieval. If we don't provide a search term or a search query, we just stop and we let whatever is happening until there's another stop event, right? So the idea is that these nodes will both fire off each time. However, depending on what initial context we provide, right? What our initial inputs to this workflow are, it's going to exit one of those paths early, but in a non-stopping fashion, right? So it doesn't actually cause the application to halt. It just says, oh, we don't need anything from this bit, right? And so you can imagine that you could build quite complex sub-pipelines and everything like this, and all without stopping the rest of the application. That's the flexibility. Okay. the rest of the application. That's the flexibility. OK. All this is doing, it's a lot of code, but all it's doing is setting stuff up, and then it's storing it in our global context. OK. So now we have our retrieve step. It's going to blow your mind, right? But it's going to retrieve. And so it does. We get a bunch of context, and then we retrieve that, and then we send it out as a retrieve event with our result from our retrieved nodes. Easy. Okay, step four, we evaluate, right? Notice, during the evaluation, we're going to set relevancy results as a global context. So we're going to set this in our global state for our workflow. That means we can leverage it later without needing to keep passing it forward in events, right? So it's a good way to provide access to downstream steps without needing to modify all the event over and over again. Next, of course, we're going to look at the actual texts, and we're just going to extract the ones that matter. We're going to join them up in a string, and then we're going to send that as our next event. Next, we're going to check it out, right? Get the relevancy results we set in our evaluation step for our transform query pipeline step, as well as our query string. So even though text extract event, which we defined above, and I'm not going to do the injustice of scrolling all the way to the top, but it didn't have either of these things. We can still access it through that context. And then if there's any no's in our relevancy, we're gonna augment with the external search tool. And then finally, we kick it off. This is just rag. This last step is literally just rag. So this is, all this code's available, but we broke it out into the markdown so you can read the comments about it. And then of course, we have the ability to draw all possible flows which gets us this beautiful looking graph uh you know we have the start event which can go to ingest and stop or prepare for retrieval and then to retrieve eval extract transform query and stop so this is the idea we have the ability to export this as a graph because we can simply walk through all the possible paths, trace them out, and connect them with beautiful arrows. How do we use the workflow? Well, remember when we talked about there being two start event nodes? One's going to do something if we pass in documents. One's going to do something if we don't. This is where we're doing that so you'll notice that when we start our workflow with just documents we're just going to get our index back that's what we designed it to do and when we run it with our query string and our index we're going to go through the whole process and these are all the questions that we we've asked from greg so i'll let you look through those in the notebook. For now, though, that's it for the build. Thanks so much for listening. Please don't forget to subscribe, ring the bell, et cetera, for YouTube. We're live every Wednesday, and I'll pass you guys back to Greg, who will close us out. Awesome whiz. All right. So we got workflows today, everybody. What did we learn about workflows? Well, we saw that they leverage an event-driven architecture that combines steps, Python functions, with events, basically keeping track of state. When we compare it to these sort of graph algorithms, like we'll see in Landgraf, algorithms, like we'll see in LandGraph. It's a more implicit approach, although we can't draw the graph ahead of time sort of programmatically, although we might want to from a product perspective. After a run, we certainly can. And remember, workflows are not agents. They work in tandem with agents. If you want to go deeper on some of the EU versus the US, join the community session today in just one hour live in Discord in our lounge with Akash Shetty. And with that, we'll get into some Q&A. All right. So we've got a couple of questions that have been coming in from the chat today. And let's see what people have upvoted in Slido. It looks like we've got one at the top. The question. Okay. So I want to sort of wrap together a couple of questions I'm seeing from the chat here. Okay. So workflows versus Landgraf. What are the key differences that you're taking away and which one should people start with? Yeah. I mean, at the end of the day, they're not super different. The way that we can think, they're not super different. The way that we think about building them might be different, the way that they're implemented are different, but the things they're doing is not very different, right? It's getting a, it's doing something and then passing it to the next instance of doing something, right? So these frameworks are there's the key differences are in the implementation are in the the way you're thinking about them uh which is to say when should you choose one versus the other choose whatever is most intuitive to you right choose whatever pattern of thought you're used to because you're going to be able to build the same thing at the end of the day. If you're doing more complex sub-graphing, so as in you want this workflow to take advantage of another workflow to take advantage of another workflow, I would maybe suggest using Langraph for that since Langraph graphs are able to be nodes themselves. So the way of thinking about that can be very easy. Otherwise they just do the same thing. Yeah. Okay. So on that point, we got another question in the Slido sort of when would I need to subclass a workflow? You kind of just answered that right in this multi-agent paradigm we're often building multi-agent systems as sort of sub graphs in a land graph context that's sort of my understanding of when we would subclass we would want to subclass a workflow but right now it's not clear how is that right we always subclass workflow because of how Python works. Workflow is the base class and then we extend it through subclassing it. Yeah. Okay. Okay. So in this case, I know the word sub is in both, but it's not referring to a sub workflow, right? It's a sub workflow. It would be an entirely separate entity. Okay. Okay. So yeah. All right. All right. We got to disambiguate some of the language here. So yeah, we'll continue to work on our disambiguation. Thanks for that question in the slide there. I want to get to the ones that were upvoted, even though I know we're short on time. So if you have to run, we totally understand. For telemetry, what is your recommended tooling for instrumenting workflows? I think there's like an out-of-the-box integration with Phoenix Arise, or sorry, Arise Phoenix. So that's Lama Index's recommendation. I don't know why I would go against that. Yep, Arise Phoenix. Okay. And then this one at the top, maybe just a comment. You know, debugging event-driven workflows can be non-trivial, especially in production environments. What tools do you think about when you think about doing debugging? We saw some questions on sort of debug. I don't have the IDE like Langraph just released for this event-driven architecture yet. How can I debug? What tools are you reaching for today? Arise Phoenix. I mean, at the end of the day, what we need is visibility into what's happening inside our workflow to debug it. Lama Index has built a integration with Arise Phoenix. That's the one they broadcast in their documentation. integration with Arise Phoenix. That's the one they broadcast in their documentation. That's going to be a good tool to use to do this. As far as like a Landgraf IDE clone, there isn't one yet, right? But for now, I would say just look through the tracing, look through what's happening at each step, ensure that you're using appropriate metadata when possible. And that's going to be the way that you're doing it. Yeah. Okay. All right. And I think we are just about out of time. If you have questions you really, really wanted answered, please to leave them in the comments section on the YouTube page. We'll go ahead and answer them for the benefit of everybody. Wiz, thank you so much for walking us through workflows today. And we'll see you back next week, Wiz. That's a wrap, everybody. So for today, Lama Index Workflows, the latest and greatest from Lama Index. If you want to learn more about Lama Index, core constructs, how to do agents, how to look at specialized tools they have like Llama Parse. Check out our YouTube playlist on Llama Index. We've done some really great events with them and we really value them as an event partner. If you're looking to have a really accelerated version of a comprehensive AI engineering training, then consider taking a look at our AI engineering bootcamp that we're always running. We're currently in cohort four. Cohort five will be towards the end of the year, this year, early 2025. Check it out. If you're interested, have any questions, let us know. If you're a company that has a team, if you're a leader, if you are ever in need of any sort of talent or support with proof of concept or prototyping, then reach out and let our team know. We've started to work more and more with teams in the enterprise, and we're looking to develop some more long-term relationships with folks trying to build out AI engineering organizations within their enterprise. If that's you, feel free to reach out directly to our team. We've dropped the link in the chat. And finally, if you have any sort of feedback on this event and ways we could improve it, please let us know in the feedback form that we'll drop. And that's it, guys. Until next time, keep building, shipping, and sharing, and you know we'll do the same. Have a great week, everybody. We'll see you next Wednesday. Bye, guys. See y'all soon. | Better Agents with LlamaIndex Workflows | 3,746 | AI Makerspace | 20240911 | Join us for an in-depth exploration of cutting-edge AI development as we dive into the best practices for orchestrating complex agentic applications in 2024! We'll compare pipeline, graph, and event-driven architectures, with a special focus on LlamaIndex Workflows. Discover how this event-driven approach transforms AI development, offering greater customization, debugging capabilities, and powerful visualizations. Whether you're a developer or AI enthusiast, you'll gain valuable insights and learn how to leverage LlamaIndex to build state-of-the-art agents. Don’t miss out—tune in and get your questions answered live!
Event page: https://bit.ly/llamaworkflows
Have a question for a speaker? Drop them here:
https://app.sli.do/event/gpLPoKPKyQZRmyxswr9tMb
Speakers:
Dr. Greg, Co-Founder & CEO AI Makerspace
https://www.linkedin.com/in/gregloughane
The Wiz, Co-Founder & CTO AI Makerspace
https://www.linkedin.com/in/csalexiuk/
Apply for our new AI Engineering Bootcamp on Maven today!
https://bit.ly/aie1
For team leaders, check out!
https://aimakerspace.io/gen-ai-upskilling-for-teams/
Join our community to start building, shipping, and sharing with us today!
https://discord.gg/RzhvYvAwzA
How'd we do? Share your feedback and suggestions for future events.
https://forms.gle/iESG6zx2mfRM7NYd8 | 2024-09-11T13:24:01.625141 |
https://www.youtube.com/watch?v=UuxLY87MqwU | Good morning, good afternoon, good evening, depending on where you are in the world. I'm happy to be able to talk to you a little bit about a knowledge graph that I have put have the luxury to go out and download data from some other source and represent it in a graph. A lot of the data that you'll see is actually mostly manually entered as well as semi-automatically entered into the database. And I'm not really a full-blown developer uh which means i don't write code on a daily basis as my job but uh i do write scripts uh to automate certain things and that has also been done in in my example here so um a little bit about myself i I grew up in Austria and initially started to study microbiology, then switched over to molecular biology, decided to get my PhD in Switzerland, and then had an opportunity to move to California to do some postdoc time and during that time I decided I wanted to kind of combine biology with computer technology and particular databases and that ended up landing a job in information technology and I've spent the last 25 or so years in that field in particular in the biotechnology space where i got to learn a lot about drug discovery and the data that is involved in there i'm very excited about interactive visualizations interactive visualizations, worked a lot with the traditional BI tools and love to see data that can be interacted with that are not just static charts that you see on the screen but you can actually see what's behind the chart. And for fun I like to drive fast cars and particularly rally cars if I have the opportunity to do that. So how did I get into Neo4j? It really started with my interest in cancer research, oncology, and in particular because I had a brush with cancer myself, which fortunately turned out to be not serious. But I also have friends that are affected by it. And I've also seen idols in the business being affected by cancer. It's a very non-discriminating disease and basically can affect anybody at any point in time. And there have been good improvements in treatment of cancer over the years, but there's still a lot of work to be done. work to be done. So I wanted to learn more about the space, the domain that is involved in cancer research, the companies, what are they working on, what kind of types of cancer they're working on, what kind of molecules they're developing. And so I thought, okay, as I learn about this, how do I capture this knowledge? Because I can't keep everything in my head. I need some kind of database to do that. And initially, relational database came into the mix. And I thought, well, as we've seen from other presentations, usually you start off with a schema that is very fixed. So you have to know everything up front and then you populate the database. So it's not very flexible. So I was looking for something more flexible. And so I came across the idea of node relationship node or subject assertion object. And I was really excited about this idea and thought, okay, this is the way I want to implement a database to capture this knowledge about cancer drug discovery. So I found this slide actually on a presentation from Alessandro Negri from GraphAware and I thought this actually captures very nicely how one could capture knowledge in a graph. You start off with the data that's just out there. These are the things that we care about in a particular domain and then we start labeling it to give it some information the next step would be to say well if I have these individual things how are they connected with each other and how can I represent this how can I define the way they're connected with each other and that leads you to kind of knowledge in a knowledge graph and ideally that leads you to additional insight and then last but not least wisdom we can debate about the wisdom but definitely you can get the insights through this connected data landscape. So the initial idea that I had, and I represented this in the graph just a few years back, labeled competitive intelligence and cancer drug discovery, was that in this space we have lots of different companies, small companies, startups, medium-sized, and very large companies, the pharma giants. And all of these, they're working on what's called therapeutic molecules. These are the drugs that they're marketing in the end when they are approved to treat a particular disease. These therapeutic molecules interact typically with a molecular target. This molecular target is a protein inside your body that has a critical role in the disease process. And the case that I'm interested in here is in driving cancer, and you wanna influence that with the therapeutic molecule, with the drug in a way that obviously benefits the patient, either stops the cancer or eliminates it altogether. So I thought, well, there must be some network in this and when you look at the bottom right I cannot show you the basic network that exists there where you have a company that builds a molecule or designs a molecule that acts on a molecular target but then there are other companies that work on a different molecule but they act on the same target so you can create then sort of a network of companies that work on the same molecular target and treat the same disease or sometimes also different diseases. So it's a start of a network but it's by no means the end of it. When I started to look into the whole drug discovery process that's outlined in these different boxes here, there are very many things involved in bringing a drug to the market. And so the initial idea that I had was really at the very beginning here, where I said, okay, target a molecule, then we obviously have to have a disease the molecules go through some steps along the way to optimize and identify a single molecule that will be pushed through this drug discovery process that is then safe for entering clinical trials And then the clinical trials themselves are going through various phases before an approval could happen that brings the drug on the market that you can then as a patient receive to treat the particular disease. So looking at this space, I started with this simple model, but as you can guess, this is going to increase in terms of node types very rapidly, and it certainly did. So just as a reminder, why cancer, the statistics are still pretty abysmal when we look at how many deaths are predicted even in 2019 taken from the American Cancer Society so there's definitely still lots of work to be done and we need to see how we can support the companies that are actually bringing these drugs to the market. And it costs a lot of money as well to move this forward. So looking at the knowledge graph idea, I wanted to have an environment that allows me to answer simple as well as complex questions. So I wanted to quickly see, well, if I have come across a company in the past, can I look it up by name, for instance, or a molecule, can I find it by its name, or any aliases or alternate names that this molecule might be known by. But I also wanted to ask some more complex questions. For instance, well, which of these molecules interact with a specific molecular target or a target family? What is the drug pipeline, for instance, for a specific company? Which of the molecules are they developing themselves? Which of the molecules are they developing in collaboration with others? Then which companies are working on therapeutic molecules that act on a specific molecular target? And which diseases are they targeting? As I mentioned, a molecule may be involved in a disease process, but this disease process isn't necessarily isolated. It may involve multiple parts in a human body, and one molecule could be used in treating multiple diseases. Last example here, which therapeutic molecules are in a particular phase in the clinical trial that is targeting breast cancer irrespective of a subtype. So there's got to be an opportunity to consume taxonomies of a disease, for instance, to get to the right answer so with all these questions in mind i definitely wanted a flexible system well easier said than done when i started there was really nothing and i resorted to something that i was familiar with that had a decent scripting language and a good framework to build a UI and I created some input screens that you see here to enter the initial concepts about companies and the basic information that I wanted to know like such as a name, a description, maybe an image, and then potential link outs to other information that I didn't want to capture directly in my knowledge graph. And I wanted to also have the option to have a minimal entry possible because sometimes all I knew was the name of something and then that would be connected to other things. So then I needed also relationships. I needed to figure out how to connect a molecule as shown in the example on the left-hand side labeled subjects with an object, with a company in this case, through a named relationship. So I defined my relationships such as developed by company or has location in city so I can use that also as a potential filtering mechanism in the future once I have a knowledge graph in place. And in addition there were some link outs to other source information and buttons for a quick look up. Of course, a molecule is then connected not just to a company, but it's also connected to a lot of other information that one would like to know about a particular drug molecule in the pipeline, such as the highest clinical trial phase that this molecule might be in, what it potentially is indicated for, in this case, pancreatic cancer, what type of molecule it is. If you're familiar with this space,, a drug was just a small molecule, a chemical that you consumed in a pill form. But the biotech industry started to bring proteins that typically had to be injected into the mix. And now the design has gone much further where proteins and small molecules are combined in quite sophisticated structures and sometimes we even have cells that are now used as drugs so one couldn't necessarily even say therapeutic molecule and would have to be changed to therapeutic agent which is something that I actually would like to do on my knowledge graph in the future. So, if you're familiar with Neo4j, this will remind you obviously of the property graph model. You have nodes with properties, you have relationships with properties, and I had designed this in FileMaker Pro initially by setting up uh two tables one for subjects one for objects and then another table that connects everything together through relationships um and i didn't know neo4j at the time when this started but it allowed me to collect some data over over time and then i came across Neo4j and I said well this is exactly what I need to continue on this idea and as I started out with a simple set of concepts it obviously grew over time and it is still growing. This slide shows you some examples of these additional concepts where I started to get into the actual product that a company sells. I got into some of the financials that gives me a perspective of how profitable are some of these drugs. And then I also wanted to get into the biology and the chemistry more and started adding bioprocess or biological structure in addition to the target, the molecule and the disease. So the graph is still growing as it is, as more data becomes available and as additional questions may come to the foreground so I said I didn't have Neo4j I also didn't want to enter all the data manually so one piece that I looked into was to get data more automatic automatically and I looked into clinical trials gov which Lists all the companies that are actually developing drugs and enter them into human trials and I'd set up a script that allowed me to semi automate an extract transform load from that database on a weekly basis so I do this actually weekly extract the data from clinical trials and then go through this process by looking up cancer related entries and assign them there's a lot of stuff to it as well. On the lower left in the slide here you see some of the issues that are actually popping up which is the terminology that is used to describe a disease or the drug is not very strict in this database so if you really want to harmonize some of that it requires some manual intervention in this etl process to establish the proper connections and that is a process that i go through it's uh quite labor intensive and you also have to have the knowledge at times to make the right decisions to bring this data to together so just a little bit about what's in the database so this database is not large this is not about big data but it's about highly connected data so at this point I got up to about 86 different types of data, different types of labels. I have 160 relationship types that connect these different node types together, 16,000 nodes and 89,000 relationships. The majority, as you can see, is clinical trial data with lots of dates associated with it. Then we have the therapeutic molecules, which are kind of at the center of this graph, the molecular target, and the company, the organization, where mostly have companies, but also have some data about some research organizations as well so i had the data i had this in the filemaker database but how do i get it out into neo4j i resorted to a pipelining tool called nime an open tool, which is actually quite great to read in files, then process it, manipulate it somehow, and then potentially write files again. And so I did export data out of FileMaker, then did some transformation in there to create Load CSV files that could be used in a script that I would run to get the data into the Neo4j Database I started off with standalone database community edition initially and then When Neo4j offered the opportunity to get into Neo4j desktop I started switching over to use desktop to store the knowledge graph the load process I wrote a script in sublime text to initially just define some simple constraints and the only constraints that i have is that i have some unique identifiers for each node type and i have a uniqueness constraint on on the name to make sure that i avoid duplicates because that is something that you can come across quite often, especially when you're dealing with things like molecules that have many different names throughout the development process. But I wanted to represent it only once in the database, not 15 different ways, just because they have a different name then i loaded the notes first very simple scripts here followed by the relationships which i put together by looking up the notes initially and then assigning a particular relationship based on the value that i had assigning a particular relationship based on the value that I had extracted from the FileMaker database. And last but not least, I also looked into using Cypher to add some content into the data based on the data that was actually there, such as associating, for instance, a company to a therapeutic area but the color disease area that they may might be working on based on the type of molecular molecule and the disease that the molecule is is supposed to address and that all went into one file and one cipher statement, and that is used to basically load the database. So the technology stack that I've been dealing with to build all this is FileMaker Pro Advanced to enable me to write scripts and capture the data initially. The ETL process through KNIME and a load script, getting it into Neo4j. And then I had the ability to write queries through Cypher that lets me actually answer the questions that I set out in the very beginning to this knowledge graph. In addition now, it also offers the opportunity to write, create reports through APOC, through the awesome procedures on Cypher that allows me to put those into a visualization tool such as Tableau to maybe create a dashboard. I could create a GraphQL API. I tried a few examples of those, and I'm very excited to get more into this to maybe put a web application on top of it. And I also can visualize this data in Neo4j Bloom, and I'd like to get to that actually in a few minutes to show you this uh life if the demo gods are are with us here and then last but not least we have the graph algorithms playground that one could use to do graph algorithms okay so here are some uh cypher query examples. One example, for instance, for some real-time queries, if I wanted to know what is the pipeline of a particular company, the thing I want to point out here is that because not all the data is always available, I had to use quite frequently an optional match phrase here to say, well well if it's there show me the data if it's not there then obviously still show me the original molecule that I wanted to see even if it doesn't have anything else other than maybe just a name so that flexibility is really important in this context. The second part here, we have some APOC-generated reports. For instance, if I wanted to get all the data about molecular targets out of the database, I can use APOC to generate this report, and that is data that kind of goes into a visualization tool such as tableau now let's get to visualization graph visualization I was very excited when neo4j developed bloom and made it available for startups and I jumped on it very quickly ended up writing a little blog about this too. And rather than actually showing this slide here, I want to get into Bloom if I can and show you some examples of how you can interactively work with the data. with the data one example that I want to show here and I'm not sure whether I can't do it on online here is that bloom doesn't just allow you to look up the individual entities in the knowledge graph such as a company or target or molecule you can also do these global visualizations that show you for instance like on the left hand side here companies and therapeutic molecules which gives you a perspective of how highly connected some of these areas are in the space of big companies that are collaborating with each other working on the same molecule and then you have such a halo around these companies which represents the the newer companies the startups that are just getting into this business and and trying to make a mark in that similar with therapeutic molecules and molecular targets. The centers here highlight those that are heavily worked on. Those are targets that are so-called validated, that many companies then try to develop drugs for, whereas the novel things are on the bottom side here, which don't have too many connections. So let's see, demo time. Let's see whether I can share my screen. All right. Okay. Okay, so if I go here, I hope people can see my screen. Can you confirm, Karin? Yes, we can see it. Okay. Perfect. Perfect. All right, so here I just wanted to show an example of a molecular target that actually has been in the press quite frequently recently because it is a cancer oncogene that has been very resistant to being targeted by a drug. And so in this case, I have basically a representation of the molecular target. One can double click on it, get basic information. I don't have much here other than there are many alternate names for the same target, which is sort of a common problem in biology that we have many names for the same thing and sometimes the same name for different things so in this case you can also see then the relationships which is really where the interesting part comes into the mix I have these various named relationships that define the relationships to other things and you can also look up the neighbors directly. So in this case, you see that I have connections to 19 therapeutic molecules, one event, one source reference, one company directly, and one molecular class. What I can do is I can highlight all of this and reveal this in the graph which then lets me further explore uh what i would like to be what i might be interested in um i might want to know well why do i have revolutionary medicines uh on there? What is a gtpase? Are there other ones? Or what do we know about a particular molecule here? A typical question, which unfortunately I don't have a good solution for so far, is I would love to just highlight all of these molecules that are connected to this target and then say, okay, give me just the companies that these targets are being developed by. But I cannot yet do that. So that's kind of a hint for the Bloom team to see is that a possibility maybe in the future. What I can do is I can expand this, but then we get into sort of a typical graph problem where you might get a hairball, which is a little bit more difficult to decipher. But Bloom gives you still some tools that you can untangle this graph where I can basically say okay I don't care about this I don't care about that at least at this point in time and I can quickly just deselect these examples and say okay dismiss all of this and then just leave me what might be interesting for me, which are the molecules and the individual companies. And then I have information about the companies laid out here, including also a hyperlink that lets you then get into more information outside of the context of the knowledge graph and I wanted to say thank you for enabling those hyperlinks because I think that's a very important feature in sort of the graph visualization toolset. Let's see how much time I have. I have a little bit of time here. You have 12 minutes total. Okay. Let me show you... You have more than that. You have like 17 minutes. Okay. All right. Something maybe quickly to showcase as well um i prepared this already beforehand so uh one can use also the neo4j browser to look at the structure of the knowledge graph you can do db schema but unfortunately that gives you a hairball so you have to be more specific and use here the APARC meta sub sub graph to show just the relationships between company therapeutic molecule molecular target a similar example here for instance for clinical trials where i have the clinical trial here that's connected to many different dates that come with the clinical trial record, but then also to therapeutic molecule companies, research institutions, disease and development stage. I was actually wrong about the time. There's like 10 minutes left, but you can continue, keep going, and anybody who wants to stay can stay for the Q&A, all that stuff. I might just have to jump off in 10 minutes. Okay, okay. I'll do that quickly. I just want to point out. In Q&A for sure, but go keep doing what you're doing because this is interesting. I think I could speak for everyone when I say that. I just want to point out, so I wanted to show briefly that I can run these different queries that I set out in the beginning that were my goal to answer specific questions and also want to highlight that in some cases you don't necessarily just want the graph, but you want a tabular representation of the data which you can certainly get through running these queries or reports as i have shown shown earlier so i just want to jump back quickly since i know i am out on time here and okay where are my slides here I hope this one was the last slide here let's take this okay I think I have a couple I just wanted to ah Ah, okay. Last one. Interactive visualization dashboards can also be generated by this here. This is, for instance, some financial data that I get out of the database here showing the profits that companies can make with these drugs and bring them in perspective when you look at multiple different drugs together. that shows you then what are the the real big money makers versus what maybe doesn't address such a big market size and then last but not least everybody knows now that the world is highly connected if you're interested in this knowledge domain connect with me uh everything starts out simple and quickly gets complex. So try to retain simplicity wherever it's possible. And when you're presenting about knowledge graphs, also think about non-technical audience that's just interested in the knowledge, not necessarily how you got there. Interactive graph visualizations, I think, are a key to enable data explorations of such a knowledge space and the graphs really opened the door for me to capturing learning understanding and interrogating complex data landscapes to gain additional insights and I just want to say thank you to everybody who makes open source tools available to a broad audience so they can explore these new opportunities and particular neo4j and also nine and if you want to explore that visual interrogation a little bit further I put a blog out there on medium and then last but not least we want to get to the Hunger Games questions as well before we forget that. And if you want to answer those, go to this address here and address them. And I'll take a look at the Q&A section to see what questions are. I have a couple questions in the Q&A. I'm converting some of the messages from the chat to the Q&A. If that way you can just go right through them and click answers. Okay, okay. Just go through them. All right, I'll try to address. It's my first knowledge graph so I don't know whether that's what people think about a knowledge graph or knowledge base for me that that was a knowledge graph the results of clinical trials potentially yes they are of interest, and at the moment I try to capture positive or negative results as part of an event in the graph. It's a very good question, difficult to address. Interfacing NIME with Neo4j, honestly, I don't directly interface it at this point in time. I basically just export files out of KNIME in a format that Neo4j can consume then through a script. So KNIME is a data pipelining tool that allows you to read in data from a file or from a database, manipulate the data, convert it in many different ways, and then output it again into files or another database. I haven't tried connecting to Neo4j directly, something I would certainly be interested in. try connecting to Neo4j directly, something I would certainly be interested in. Who uses Candid at this point in time? It is still, as I look at it, as a prototype. So I'm the primary user. I'm trying to expose it to a few scientists, biologists that might have an interest in it. And now that I have Neo4j Bloom available, I can actually do that because before, no biologist is really going to run Cypher queries until, unless they're really, really interested in this domain. So Bloom opens the door to that possibility. Clinical trials definitely is providing volume of the data, but then I'm also reading a lot of trade publications such as First Biotech or InPoint News that provide information about novel companies that are just coming on the marketplace that Yeah are really new and I wanted to track them as early as as possible And the link I can share in the chat box All right, so let's see can I do that I might have to do that differently I can't get to that link directly here sorry yeah the last one about named entity recognition tools. I would love to get into NLP. There's a lot of content that could be looked at actually in what I store in the database and you could create more tags. I haven't had the time yet. So this has been sort of a side project of mine to get into graphs and to see what you can actually accomplish with it, what answers you can provide. And, yeah, I wish I had more time and hands available to address those. I'd be interested in collaborating. I'd be interested who would be interested in this kind of data and what they would want to do with it. So yes, please reach out to me via LinkedIn or Twitter. That would be great all right okay perfect thank you justin appreciate posting the link to the bloom presentation i think everything is going to be recorded i don't know exactly how people can get to the recording after this is over but i'm pretty sure we'll find out from neo4j where you can find it. And I'm excited, the power didn't go out in Southern California so far. So I guess the wind must not be blowing that hard so far. Make sure you answer the questions for the Hunger Games. All right. Thank you everyone for joining in. I hope that was interesting content and topic for you. Let me know if there are any questions. I'd be happy to get back to you. | How to build a knowledge graph from scratch even if you are not really a full-blown developer | 2,474 | Neo4j | 20191028 | How do you capture knowledge about a complicated domain in the life sciences if many of the data you need is not available for download? Learn about building a cancer drug discovery knowledge graph using tools to capture, connect, store, query and visualize a landscape of biotech/pharma companies. | 2024-09-11T19:44:07.640332 |
https://www.youtube.com/watch?v=9ZXuof23Noo | Hello, good morning, good evening, good afternoon, and welcome to Neo4j Live today with Sebastian Lobentanzer. Hello, Sebastian. How's it going? Hi there. Good. How are you? Good. I'm good. Thank you for joining. Thank you for taking the time and for being here today. I tried something new, this one here. If I don't move too much, it will stay where it is and don't fall off this very improvised stand-in for my little display here. But I'm hoping it'll stay for a while. It's just a fun little gadget that Michael got me and it was always very far away. But now it's closer. So I'm happy to have you join today. So maybe a little background. So we kind of met via the Health Echo team, which is part of the Graphs for Good initiative. So Neo4j's Graphs for Good opportunity is a way of giving back a little bit. So if you have, you know, a nonprofit organization, nonprofit initiative, and would like to use a graph database, there is the graphs for good page. And then specifically for life sciences or healthcare with COVID, I think there was the COVID graph, which was famous, somewhat famous a couple of years ago. And there is this health and life sciences community that formed around this initiative. And there were lots of people trying to make a knowledge graph or trying to make knowledge accessible around the research for COVID back in the day in early 2020. So almost three years ago to the date. And yeah, Sebastian is part of that community. And Sebastian is from the BioCypher initiative. But before we talk about BioCypher, maybe Sebastian, say a few, oops, wrong button. Maybe Sebastian, we say a few words about who you are, what you do and what we are. Yeah, I'm a postdoc at the moment at the Institute for Computational Biomedicine in Heidelberg University Hospital. I joined there about two years ago now. Before that, I did my PhD in pharmacology. So a bit further away from bioinformatics, but I kind of got into the bioinformatics with my PhD because I worked on the interaction of small RNA molecules with larger RNA molecules. And for that, in the beginning of my PhD, I happened upon Neo4j at the time. It was 2015, I think. We're in version two or three. Yeah, early. That's a good while. Early. And it has grown a lot since then. But it always seemed kind of natural to me to encode biological entities in the property graph as opposed to other systems. I find it more intuitive. That may be subjective, but yeah, I've got a bit of a history with Neo4j. I'm not the hackiest person, so I don't know the ins and outs. I've just been using it in several contexts and that became helpful when we developed this framework that now also has property graph outputs. So maybe as a brief summary. So early on, I think that's great to hear. And then I guess you followed the progress and you followed the steps along the way a little bit from a graph user's perspective, sort of. So that's always interesting. So, you know, adding new features, adding new capabilities, adding new, you know, performance boosts and all that stuff. So I think that's super good. Although I must say that academia generally is a bit sluggish when it comes to the iterations that you do with software. So I think this BioCypher project is much, much more agile than my PhD work was because I stayed on the same version for very long because you don't want to deal with adaptations that you need to know. Of course. Yeah. Yeah. Yeah. And nowadays I think it's the, the iteration has, has increased a little bit. We used to, I think with Neo4j 5 was released late last year, so somewhere around October, November timeframe, 5.0. At the moment, it's April, so let's say half a year later, we are at version 5.7. But the updates are smaller, but the iteration goes quicker. Also, the language models help us write the code, actually. So that's that, and that's, I think, we hope, it's obviously always like you say, I mean, you know, maintaining systems and updating the system is always like something you don't want to do as often as you, as little as possible, really, because you don't want to touch as often as you as as little as possible really because you don't want to touch everything it's as long as it's working and the good old saying never touch a running system i guess is true yeah but still if you want to get ahead with you know having more functionality maybe doing from the scientific point of view doing more fancy analyses or doing something that wasn't possible before then maybe at some point you need to move on. And this is actually quite a point in BioSiphon that we want to make this process easy, but also transparent so that everybody knows which versions of which data, of which technology were used at any given time. So then after your PhD, the topic knowledge graph kind of stuck with you, and then you were like, always like, was it back in your mind, or was it really front-facing? More back as a mind, actually. Yes, now that you mentioned it. So I joined Julio's lab with the objective of doing causal reasoning in the molecular landscape. And that requires some semantics, I guess, but it's also a bit mathematical. And I needed to read into all the developments that these great economists and epidemiologists have done, like like Imbens and Julia Pearl, like like imbens and judy appall noble prize stuff and i didn't really plan to just stick with knowledge graphs forever but it kind of naturally happened that i realized that for for causal reasoning in the molecular space we do need to have a knowledge representation and that's also why this project and the paper that accompanies it is called Democratizing Knowledge Representation, because we want to make this available to basically any person that wants to set it up and then i was constantly maintaining it also and we see this in science as well that the you know the well-off labs that can afford a single person only dealing with their knowledge representation needs they can do that they can afford this but smaller labs or individuals usually have to use what's there for example use the health echograph and then just work with it if it's not completely suited to their purpose, they need to buy it or they need to leave it basically. Yeah. Yeah. It was interesting before we dive in, but maybe a little diversion sort of, we had, there was the Neo4j Graph Summit happening in Munich last week and we had Katharina Kari from IKEA InterSystems. She's kind of like this knowledge graph. I guess you know her and I guess people in the scene, I would say, are very familiar with her work and she's like a very, you know, a knowledge graph influencer almost like or very, very, very much in favor of in of knowledge graphs all kinds not not not necessarily based on neo4j but obviously neo4j is one way of of organizing or creating your knowledge graph and she was also talking about this setting up and creating this knowledge graph is is like a lot of work up front you have to organize everything you have to set your um you know your data points you have to do a semantic analysis almost on your terminology, on your data, on how you address things, how you organize things, how things are related in a big picture kind of way. And then you drill down. And this can take a lot of time. And this is almost something that, like you just said, a couple smaller companies or smaller smaller teams that have don't have the bandwidth they are sometimes very you know down down to business with what they're currently dealing with and then this big task is actually so important but then nobody dares to to do it and then it's like this always it is ever ever you know there's always this mountain in front of you. You have to kind of either you go around or you go over it. Yeah. Usually you use something that's already there and then people have used it. Maybe it's five years old or 10. And that's just the standard in the field. And we could move much faster if not everybody had to reinvent the wheel in this way. That was the basic motivation to come out with a framework in the way that it is. I mean, we could have done it differently, but yeah, I think now is the time to kind of get the community going for this abstract process of building knowledge graphs. And I've read this IKEA knowledge graph blog post, which actually has a lot to do with our reasoning so that, you know, you have a small set of concepts that you want to reason about in whichever setting you are. And this you can manage manually. But then when you get to the data points, which are millions or billions, then you need to trust in your representation that it actually makes sense. Yeah, absolutely, absolutely. Cool. I guess we can switch over. You have a couple of slides and then we can talk some more. As always, this is obviously live and interactive. So if you're watching this right now, then you can use the chat on, on, you know, LinkedIn, sorry, not Discord. Well, you could use it on Discord. We'll look at it later, but Facebook, Twitch, or YouTube, and I can have a, you know, an eye out for it. So Q and A is definitely, can be a part of this. So if you have comments, anything, then, then yeah, please. We can also make technical breaks in the presentation. Like there's different topics. So if you feel that there's questions for a specific topic, we can also discuss them in the middle. Yeah, that sounds good. So don't be shy. Just type it, type, type away. Right. Yeah. All right, Sebastian, I think then the stage is yours. Okay, so let's get started. So in a broad sense, we're dealing with lots of different types of biomedical data and the new methods for measuring these biomedical data come out very rapidly. There's lots and lots of entities and lots and lots of interactions that we can measure now. And most of them are then collected in some arbitrary format and they just end up on some server in a web API or something. And to integrate them, usually people have done this manual knowledge graph creation thing, which has led to a huge number of heterogeneous knowledge graphs that are out there for a specific purpose. They differ by scale. They differ by the specific purpose that they have. they differ by the technology that is beneath them, they differ by the research area. And that alone is a good indication that we do have different representational needs. And a lot of time goes into creating one of these, let alone something that then brings two of them or more together. In the ZEISS lab where I am now, we have this thing called Omnipath, which is now almost 10 years old, done by Dinesh Ture. It started out as a database that, similar to most of the others, was for a specific purpose to drive these causal inference methods that they used at the time. It was in the beginning, protein-protein interaction network. You do see my pointer, right? Yes, I see you. Yeah, yeah, we see your pointer. So it started out as this PPI, then it quickly also added gene regulatory interactions, so not only protein-protein, but also protein-mRNA interactions. And then over the years, we added lots and lots of stuff. For example, enzyme-substrate interactions, complexes, diverse annotations of whatever you can think of. And more recently, also this hot topic of intercellular communication. So how do cells communicate to cause phenotypes, cause diseases? And we see that the complexity of this knowledge representation task very generally increases continuously, so much that even now in the lab which has grown a bit we would require different versions of OmniPath to really fuel the specific needs of one particular researcher and so there is this no one size fits all problematic that we wanted to use an abstract framework to generalize this build process to allow more flexibility in creating because this now also is a huge collection of over the years accumulated scripts it's a Python program that just do this for this very specific purpose and to change it to something else would require knowing it's in the ins and outs and then also actually doing a lot of work to get it going. And then we also coincidentally wanted to migrate the back end of Omnipath to include even more regulatory and microbiotic and other influences, which gives you combinatorial explosions if you connect proteins to all the organisms that express them and so on. And the best candidate at the time was Neo4j also. And probably it had something to do with me already knowing a bit about Neo4j. So we had this general task of migrating OmniPath to a database backend, and then we had the idea to just take this migration as an incentive to just develop a framework that would do the migration, and then it can be reused for other migrations as well. And what we ended up with is this thing we call BioCypher. It's also a Python framework, and it has what I call a threefold modularity, which is in the first place the resources, of course. So you get the data from somewhere, you get information about proteins, about tissues, about drugs, whatever you want to reason about, and you get it from a web API as a flat file as something. And so we have what we call adapters that are just ETL piping the data into the Biosurfer framework. The second component is ontologies. So as I said, I think semantics are really important for dealing with inferencing in any space, but also in the molecular one, of course. So we have the second module where we can use different ontologies, which are expertly curated, and we can just reuse this information. We bring them together using a configuration. So we map the data contents to the ontology contents. And the third modular component is the output side where we can have a label property graph, but we could also, for example, in some other projects, we like to use SQL or we're working on RDF to be more in the semantic space. And together, this is basically the pipeline that people would use to go from the raw data to something that's integrated and you can work with. And a fairly large component of this project is actually just the meta level organization and making the data available to people. What is there? What can I reuse? And maybe some examples for people that are new to the field. What do I even need to look out for when I think about my knowledge representation, whatever that means. And so we collect all the components in a project board on GitHub. This is this new project thing that was in beta for a while and now actually is, I think, mature enough to deal with. At least for the last couple of weeks it worked fairly well. And we collect these parts, I'd say input, which is where the data comes in, and the output, which is basically the adapter to the databases Neo4j or Arango or something. And then we have the ontology component, which we also list here what we can use. And then there's the pipelines, which are the meta-level programs that bring together the inputs and ontologies to form the outputs. And there we do the project planning and also contributor management. So we have this organization called BioCypher on GitHub and anybody who is interested and wants to do a pipeline, for example, can join that. And what I also did is build a BioCypher pipeline that takes this information and then builds a Neo4j graph that is available at meta.biocypher.org where people can more intuitively browse what is already out there. So which pipelines are there in genetics, for example, or in proteomics and which adapters do they use and in this way get access to reusable components that could be useful for them. And in this way, get access to reusable components that could be useful for them. So maybe briefly a couple of words about what this adapter thing is. So basically, it's just an ETL pipeline in itself, a small one, extract, transform, load. But they're supposed to be reusable and combinable by the framework. reusable and combinable by the framework. And so we have, for example, the universal protein resource, short Uniprot adapter, which then presents to the pipeline a number of fields. We have the different species that are available in Uniprot. We have the node field. So there is a length and the mass and sequence of the protein, for example. And we also have some edges in this adapter. There is the protein which organism does it belong to and also which gene encodes for this protein. And this is what it presents to the framework. And in this way, and this is a bit work in progress now, but in this way we want to have some more user-friendly integration functionality where we have multiple adapters and then, for example, the pipeline can tell some more user-friendly integration functionality where we have multiple adapters. And then for example, the pipeline can tell the user which are the best identifier systems to use to have the least trouble in mapping the entities of the individual adapters to one another and other fancy things. And if a box is checked here, does it mean you can check it as you like? Or is this what's currently in there? This is supposed to symbolize that the adapter is actually a Python module, I would say, which presents all this information to the framework, so to the user. And the user is able to say give me proteins but i only want human and i only want the length and the sequence remove the rest and then you don't need to deal with anything that you don't need so that's that's what these check boxes are supposed to visualize yeah yeah okay that's cool so you can customize basically depending on your specific requirements. And these adapters are basically community-driven. So, of course, we do the adapters for Omnipath, let's say. And I also have this thing which I call secondary adapter, which is not like a granular atomic resource, which is a unique data source as, for example, Uniprot. unique data source as, for example, Uniprot, but we also in the past have done adapters for entire pre-harmonized databases. So say Omnipath consists of over 140 resources now, but it has a lot of this harmonization stuff already going on inside it. So it's much easier to write one secondary adapter, which also details we have proteins and we have interactions between also details we have proteins and we have interactions between proteins and we have genes and we have this and that and this is presented by the adapter and then the user can just say yes i want the proteins and the interactions and leave out the rest yeah okay right a bit about ontologies, maybe. So it's a very active and very special field, the field of ontology. I call it facetiously the necessary evil here because most people that are not in the ontology field don't like to talk about ontologies or find it boring. But I think particularly in our time now where we want to transfer reasoning to great extent to machine learning processes, we need the semantics. And I like to compare how human researchers talk to one another versus how we talk to machines or even how machines talk to one another. So a human goes through some early period of development where they just have a, they gather a notion of the world. They gather a notion of causality. They do their own experiments in the macroscopic space, so to speak. What happens if I push this cup over the edge of the table? Simple things, right? Babies like to throw stuff on the ground. And then also we go through years of training as, for example, biomedical researchers, and we acquire a huge vocabulary of how do we talk to one another about an experiment, about a disease, what even is a disease? These are all concepts that we communicate, but machines don't have that. And that's what we see in their reasoning abilities. I mean, it gets better the more data points and the more parameters we shove into these large language models, the better they get at mimicking human behavior. But a notion of objectness and particularly the efficiency with which humans do this is so far not achieved by machines. And to kind of facilitate this, we want to fuel the knowledge graph with the semantic concepts that the ontology brings. And since we want to make it easy, we also have some usability functions in BioCypher. So we have the data coming in here and say we want to use not one ontology, but two, because we want to have a very general one. And then we want to have a specific one. I give here the example of a molecular tumor board, where we say we want a very general notion of biomedical things so biolink is a shallow top-level ontology that has things like genes diseases experiments things like that but it ends for example when you want to talk about genetic variants it ends at the sequence variant so the tree stops there but there there is, of course, there are multiple ontologies that are expert curated that tell you lots about what kinds of sequence variants there are. The most popular one is probably the sequence ontology. And with BioCypher, we can simply go and say, I want to use BioLink as my main one. But then at the sequence variant node, we attach also the sequence variant subtree of the sequence ontology. And so you get a hybrid ontology that has knowledge of the general things, but also of the very specific things that are in the sequence ontology, which is nice because then you don't need to maintain one humongous ontology, which contains basically everything, which has also been done. And it's kind of, you know, it has 500 million triples. So it's nothing to scoff at. Yeah. We want to make it as modular as we deal with the data to deal with the ontology. And then we have, as already stated, we have a couple of outputs that we then can put the data on top of the concepts that are encoded by the ontology and then deliver the ready-made knowledge graph, so to speak. And maybe this is a good point to remark that, as opposed to some of the RDF web semantic approaches, web semantic approaches. The general philosophy of BioCypher is more of an instance-based knowledge graph. So the concepts usually are not in the graph, but the concepts are used to encode what is in the graph. And with Neo4j that's actually kind of nice, because we can use multiple labels on each node to encode for the hierarchy of the ontology, say protein is a polypeptide is a biological entity. And if you want to look for all polypeptides, you can just filter your query for this label. But if you want to have a specific one, say proteins, or even more specific, like I only want ligands, then you can go to the more specific labels. So if that's good for your use case, we enable that. Yeah. Question on the ontology linking, is that done automatically or is that lots of manual, probably a combination of both, I guess. Manual. You mean by the user or by ourselves? By you, yeah. How do you, I guess. You mean by the user or by ourselves? By you. Yeah. How do you, I mean, if you say, let's see an example of BioLink and sequence ontology, is there an easy cutoff point where you say, okay, and then it goes on? Or is it? Yeah. So basically we ingest ontologies by using the OWL files. So the web ontology Language files in RDF. What we do to make this choice, basically, is we have two lines in the configuration YAML file at the moment. We want to make it accessible in a GUI down the line, but at the moment we are working with YAML. There you can specify my main ontology, then you give a URL and also the root node where it's supposed to start, which for BioLINK we just recommend using entity, which is the just top level node. And then you have also, you can have an arbitrary number of tail ontologies. So the ontologies that are supposed to be fused to the main ontology. And there you can just specify also where it lives. So the alpha locally or on the web, and then you specify the nodes that should be linked. And in this case, it's quite easy. You just say in biolink, you take sequence variant and sequence ontology, you sequence underscore variant, and then the framework does it for you. Okay, yeah, yeah. We had a session with Jesus a couple of weeks ago, underscore variant, and then the framework does it for you. Okay. Yeah. Yeah. We had a session with Jesus a couple of weeks ago, I think. Maybe it was in the last episode where we talked about, you know, mapping, combining two different ontologies into one single ontology, and then you find the links between these systems. And sometimes there is a match and sometimes there isn't. So you have to juggle around a little bit sometimes. I guess it's easier and sometimes it's a little bit okay. There is no direct representation or no direct equivalent of the two in the two different ontologies, but there is something in between and then it's a bit more tweaking. There could be massive problems in this. I mean, there are things that are pretty obvious. BioLink ends at sequence variant, and you are interested in oncology, you're interested, is this a SNP, is it a lethal variant, whatever the sequence ontology encodes. So you just tuck them onto one another. And with diseases and disease ontology, for example, it's exactly the same. But of course, if you want to merge, completely merge two ontologies that are on similar levels, like the disease ontologies, for example, there's several, maybe a dozen, and harmonizing them is a huge deal. And we actually want to enable this. So this is also still roadmap stuff, but by having this kind of back end here that can deal with fusing and maybe reasoning over ontologies. You could also imagine use cases where you have several disease ontologies which you specify in this configuration YAML and then instead of trying to tack them together at one specific point you can reason about which of the individual nodes may be matches. I mean, many ontologies already do this. They provide narrow matches or exact matches, right? So you can use this. But for example, this instance of having a triple which is not complete. So you have three ontologies and in two of them, no, between one and the other and between one and the third one, you have the, this is the same, but the third link is missing. And so this could be done. It's not implemented yet, but this could easily be done at this stage where you just have these three ontologies and then BioCypher does some computations and just spits out a harmonized ontology. So this is on the roadmap and it's actually an important topic in biomedical knowledge graphs. Absolutely. I think that kind of answers George's question. He says, do the vocabularies vary between the different bioontologies? Are there some constant definitions? I guess, yeah, that's what we just talked about well both yes so for example the obo foundry which we we like to use so the sequence ontology for example is from the obo foundry as well and they they often give cross references and then there are like master overarching ontologies like the EBI has done with their experimental factor ontology where they just for their in-house use cases curate other ontologies and say this is this concept so we're going to map it to ours. It's very difficult and I'd say also very debated field because there are opinions, these are all concepts so they're not real things that you can touch in a sense, it's just that someone says I think this is a disease and then another person says but maybe it's not a disease and you have different opinions. And so to harmonize this, I think, will be an eternal battle. Just out of philosophical reasons, not technically necessarily. But the thing that we want to provide with BioCypher is just some head space to make these technical tasks easy so that the people can sit down in the beginning of a project and really think about what kind of semantics do they need for their task. And that's the second component then is being able to build different layouts of a graph fairly easily. So not taking months, but rather taking a couple of days to just be able to test which representation actually is the right one for me. Or maybe I should use MONDO, maybe I should use disease ontology, or maybe let's go with experimental factor ontology. And the goal is that you just need to switch out this configuration line saying, I want to use this ontology now and then see what happens. Yeah, yeah. And that's great, I want to use this ontology now and then see what happens. Yeah, yeah. And that's great, I think. And that is, it is obviously some configuration a little challenge at some point, but it is also easier than you think it might be in some ways, because like you just said, with the closing of the triangles in graphs, it is, if two things point at the same way, it's very likely that the third one is also talking about the same. So you can fill in the gaps with this to some extent also almost automatically by using the graph technology. Yeah, but there may be questions that you can't really answer by using just the technology, but that needs to be decided by a domain expert. Like, is this a disease or not? Or is this a different disease than that one? And, you know, the medical professionals also don't sit still, right? They put out different versions of the DSM and the ICD. Are we at 11 now? I don't know. So these things also change. So it's important to have a modular framework where you can say, oh, they updated their definitions. Let's see what the new one looks like without having to deal with months of hacking away at some data file. Yeah. You mentioned, Nirmal asks, you mentioned this 500 million triple huge ontology. He wants to know, are they distinct or total graph weight? Do you know? This is the Uber graph. It's actually also a hybrid. It's basically, they did a paper which just took all of the Oboe Foundry ontologies that they were able to fit into this bigger framework. And then they fused them together. And then they also did some, I think, imputations for convenience or to be able to go to certain links quicker. So they added some triples that were inferred from the triples that were in there. And that's how they ended up with this enormous graph. Yeah. Yeah. Cool. Yeah. And then one more question from Ishwar is the different sections of the ontology. So A, B, C, D is just an example, right? So this just. Yeah. Yeah. This is just a graphical thing to show that you can have several ones and you can select the one that you, that you want. Okay. Cool. There was one more question on adapters. I don't know. Do we come back to adapters or should we do it now? We can do it now. We can also do it. I think we will have it very soon after the licensing. No, let's do it now. Let's do it now. Okay. Wenlang asks, do you have adapters developed for data in KEGG database? Yeah, there is one, I think, in the crossbar pipeline. But what I recommend in any of these questions is to go to the BioCypher website, which links to this adapter GitHub page where you can filter. You can also go to the meta.biocypher.org graph and look in the Neo4j instance that we have there, whether something that you're interested in is there already. Sometimes it may not be there yet because people are still working on it and we haven't had time to put it into this graph. There is also on the website this link to the Zulip chat where you can just informally go and say, hey, did anybody do something with Keg? And then you will quickly find out. Yeah. The links to the GitHub and to the Bioserver website are in the video description. So if you are wondering where to click, then you'll find them there. So should we go on? Yeah, let's go on. Let's go on. So one other distinct topic, but also very important, is licensing. So with BioCypher, we want to push a bit on the way that it is done at the moment, which is basically, we have this thing here, maybe it's a hybrid knowledge graph of 10 different sources, and all we do is we have a table somewhere that says, this source has this license, this source has this license. And then the user needs to figure out which entities come from which source, can I actually use them? And that's particularly problematic for corporations. So if you want to work with startups or even bigger companies in pharma, say, you always have these questions. And then you need to go through a whole lot of lawyers to figure this out. you always have these questions and then you need to go through a whole lot of lawyers to figure this out. And so the advantage in BioCypher, at least what we think the advantage is that we can have, and we also advise to have that licensing on each entity. So on each node and each relationship in the graph. And we can even turn on a strict mode, which doesn't allow adding entities to the knowledge graph that do not have information about their source, their version, and their license. And then we can also imagine in some use cases where you could just use some master switch which says, I can allow data from these licenses, but I can't, for example, use KEGG or Drug Bank, which are notorious examples for limiting the reuse of their data downstream. So this is just a convenience feature in this sense, but I think it's important to make this also as easy as possible so that people actually use it. And there's a lot of ignorance going on there, just people using things where they don't even know whether they can use them or people creating a great resource but then not attaching any license to it, which gives huge problems downstream. Because if you don't have a license, you're not really allowed or you're not sure whether you can use the stuff or not. Or what you can do with it. And I can see that if you are building up your, your knowledge graph, you, you, you obviously can rely or will rely on some resources from ontologies or from other, from other databases that have been created, but you will also, but a hundred percent will also use your own data to, to incorporate that in your, in your knowledge graph. And some of this data is is i'm pretty sure is ready for for a public uh release and you are fine with that but majority will probably be this is my own intellectual property my own data my own research i'm not going to share this because i don't want my competitors to have an advantage yeah it depends on your opinion actually in this in this regard. So also whether you're in academia or in industry, say, and also whether it's software or actually really data points. Most problematic, of course, are patient samples. Exactly. Like PDR problems, right? But that's another topic which we also address in some way in this federated learning type setup where you should be able to share the structure of your data but not the actual data itself which is kind of a way around of between a middle ground between sharing the data and then actually not having any interaction you can train in a federated manner without actually having access any interaction. You can train in a federated manner without actually having access to the data. That's cool. Yeah. I think that's an important step and an important way of making, you know, giving some confidence of users to work with that. So like you said, right, if you don't know what the license model is and, well, can I use this data later on or will I have to chuck it out again? And then you build something and then you realize, oh, this is not licensed and I have to, you know, I cannot use it. And it's very frustrating. Yeah, definitely. And, you know, sometimes people just have done things and where there's no jury, there's no crime or something like that saying exactly goes but um yeah this just in academia it happens and often it doesn't have any ramifications but if you look at at biotech companies they can't really risk doing something like that so that stops some projects in their tracks which is a shame because then they need to again go back and have entire teams internally working on something that the world will never hear of because yeah they don't share it yeah yeah i mean that then they are with this i think it's it's also nice because it gives you the option to give to give something back as a as a biotech company you can always say okay here we developed this in-house and we used the open source and the open databases and the open connectors here. And we add our own data and some of it is like, it's a clinical data or individual IP research is ours, but then we enrich the data and we give something back and we release that for public consumption. And then that makes, you know. Yeah, that would be great. And yeah, the EBI, for example, with the Open Targets platform, they regularly do try to bring academia and the companies closer together to have this sort of win-win relationship. But obviously you need to sail around all these IP and privacy considerations. Absolutely. It's not easy. No, no. But at least it's there. Exactly. That's what I was going to say. I mean, this makes it easy and this makes it possible to have the option to do it at least. It will depend on the individual case. But if it's there already, then sharing is possible and that's great. Yeah. So very briefly about the usage, I'd say, since it's a Python program, at the moment we are working already on an R implementation, at least of the downstream side. And also we want to do clients, obviously, for example, OmniPath. It will probably take a bit of time, but I think as it's looking now, at least the R implementation will be fairly quick with using Reticulate, which is this thing that can translate between Python and R. And then we're also looking at Galaxy, but that's just plans at the moment. So the way you would go about it in Python is you get the BioCypher interface, which is this core framework, the way that you interact with it. And then you need to get the adapters, however many you want. And you do this in kind of a script setup, which you can also then load into a Docker pipeline or something. and also then load into a Docker pipeline or something. And in this case, we have again the UniprotAdapter where we just import this adapter class, which presents all this information. Say I've got human and mouse and the zebra fish, and I've got all these fields and a couple of edges that you can use. And you do this by just saying, with the BioCypher interface, we write the nodes that we get from the adapter and we write the edges we get from the adapter. We can also write just anything. And this is how you would create a knowledge graph very superficially. The way you put it together with the ontologies is that you, of course, in the beginning, you decided on which ontologies to use. So you know which classes are in your ontology. And then if the adapter gives you something that is called protein, you want to map this to the protein class in the ontology. And that's basically all it is. It can become a bit more involved if you, for example, have a relationship that you want to represent as a node. more involved. If you, for example, have a relationship that you want to represent as a node, then BioCypher will reify this interaction for you and it will create not just one edge, but it will create an edge to a node, which is the association, and then another edge to the target of the association. And some other stuff, like you can define properties here in this configuration. And this is also something that we ultimately then want to move to something like a web app where you can have a drag-and-drop interface, the adapters tell you what kinds of information they have, and then you say I'll take the protein, I'll take the gene, and I'll take the interactions between proteins and then build me the graph. And then the framework goes to write, And then the framework goes to either write some output files or it connects to a running DBMS, for example, Neo4j through the old drivers, say. And depending on, for example, the extent of the knowledge graph that you want to write, you go with the one or the other. So if you want to create millions of nodes and edges, you would probably prefer the batch import thing. Whereas if you just have a running graph and you update a couple of nodes and edges every now and then, you probably want to more go with a driver-based approach where you don't need to stop the database. And this is, yeah, to some extent, everything of this is possible. Okay, cool. And then you can also go and at the moment, just manually optimize this knowledge representation so you can easily switch these configuration statements saying, now I want to represent this relationship, not as an edge, but as a node, because I want to connect something, see how this works. So you can iterate through this process and see how it works. Or you can maybe create an SQL and a property graph database and compare how fast they are in a particular task and find the one that suits you best. And then there are some couple of other nice small details like often the extracted knowledge graph that you get is fairly small so you can just take the bioscipher files and zip them and put them in a repository or something and then you automatically have reproducibility for example if you use this knowledge graph to to write a paper you can put it next to the paper on github or on zenodo or something. And then people will be able to access this and using just the import statement that is included in these files, they can just create this knowledge graph instance locally if they want to check it out or do something else with it. Yeah. So here we come back to the adapters, maybe a little example. For example, Uniprot, the adapter that I mentioned a couple of times already, they don't provide gene identifiers, this Ensembl gene ID, but they do provide transcript identifiers. But we want to connect to gene-centric resources, for example, the Open Targets platform, and so we use some mapping functionality to just use the ensemble transcript ID to get the gene IDs, and then the adapter also can provide the gene IDs. And this way, the adapters are a very flexible way of also modulating or adding to the data, enriching the data that comes from this resource. So in this case, we use PyPath, which is this backend for OmniPath, our database, which can map transcripts to genes and then we can connect, for example, to the Open Targets, which is a very gene-centric resource that doesn't have proteins. We can just add the proteins from Uniprot to the Open Targets platform data and then we also have proteins and can maybe reason about drug interactions with proteins or protein-protein interactions. We have lots of behind-the-scenes things to make the knowledge graph creation process also more robust in addition to making it faster. So all of the functions are unit tested, so you get some, if Bioscipher notices something is off, you get feedback about this particular thing. So for example, if some nodes in your ontology are disconnected, it tells you, if you have inputs in your data that you didn't map to the ontology, it tells you like, Hey, I got 700 nodes that are called disease, which are not in your configuration. Is this by accident or did you mean to do this? And it can also deal with duplicates, for example, which is kind of tricky, to be honest, because the identity or the uniqueness of the thing also depends on other attributes. So in a labeled property graph, for example, you can have multiple edges between two nodes and they may encode for useful information. So it may be valuable to have them both. But if you only look at the source and target node, then you would think that this is a duplicate. So you need to attach also the label of the edge, for example, if that is different, or even if the label is the same, maybe it has different properties. So the granularity with which you deduplicate also is not really trivial, and we try to make this easier with the adapters. And then there's some just user-friendly stuff, like you can look at the ontology. By default, BioCypher only shows you the classes that you actually use out of the ontology, by default, BioCypher only shows you the classes that you actually use out of the ontology. This is just from the tutorial toy example. This is a database where we actually use the sequence ontology. So we added this subtree of sequence variant to the BioLink ontology, which just encodes for the top level stuff. And then if you look at the entire ontology, you get lots of output, but we enable that. So if you want to quickly look at something, you can just have BioCypher show you the structure and then maybe you can figure out where to map a particular thing that's in the input, for example. Okay, that's cool. We also want to add some also community curated things like the bioregistry for passing identifiers. Say you have some identifiers where you don't know whether they already have a prefix like uniprot, colon, and then the identifier or not. And bioregistry as well as Gilda are two community open source tools developed by Ben Gyori and Charlie Hoyt at Harvard Medical School and Gilder can be used for grounding. So if you have an entity which you don't really know what it is but you have the name and you maybe have a sentence where it appears then it can tell you ah this is likely a gene or this is probably a protein or something like that. So with the adapters, we also want to enable interactivity in using other tools that are made for specific purposes in biomed. Yeah. Well, that's cool. I guess it could be interesting for when you, I mean, the ontologies are somewhat curated, I'd say. But if you bring in your own data, then you can kind of like use these tools to say, okay, where do they match? Where do they connect best? And then how do I, you know, then obviously then it becomes interesting, right? Finding the new relationships between what you have in your data in the combination with the added power of the knowledge graph. Exactly. And then what we want to do in terms of the roadmap or development, we like to try these things out in the adapters, which are very malleable. And then if it turns out that one functionality is particularly useful for basically any adapter, we want to migrate this into the BioCypher core so that just everybody who imports BioCypher has access to these functionalities. That's how it's planned to go about. Very cool. So we are already at the Outlook. We have some Docker solutions so that people can more easily set up their workflows. Maybe I'll show something like this in the last couple of minutes. I don't know if we have a hard deadline here. We don't have it. No, no, we can. Yeah, let's do that. So we want to extend to having some examples of representative workflows in different fields because the knowledge representation needs differ between proteomics or single cell researchers. Maybe you're interested in spatial omics or in genetics. And there is also, as I already mentioned, a huge interest in reasoning on the ontologies that are coming in here at this point before we build the knowledge graph. So to build the knowledge graph on a combination of ontologies or to complete these triples, for example, before you build the knowledge graph on a combination of ontologies or to complete these triples, for example, before you build the knowledge graph and then you have a reproducible but flexible pipeline in the sense that you can, if you have another ontology that you want to add or there's a new version of an ontology, you can run it again and you get again the best possible hybrid ontology for each case. Then, since we are shuffling data around all the time, we also want to work on the data management. So we already have reports from one of the teams at Stanford that deal with this impact of genomic variation on function project with BioCypher. They use PyPI to just simply run the same pipeline, but through PyPy, which is just in time compiled, and it made things apparently a lot faster without any development efforts. So you can just switch the regular Python implementation for PyPy, and apparently it worked in their case. And then there's also more involved possibilities like going to a format that is just pointer-based so that you don't really need to copy things, but you just have pointers to file on the disk. And then you just shuffle that around instead of the actual data, which it is at the moment. So for huge knowledge graphs, this can actually take a couple of hours, which is not the end of the world, but it can always be better. Which is not the end of the world, but it can always be better. And then also we want to, as I mentioned, deal with the interface. So I like to bring this example from audio engineering because I am an audio engineer. I did this before I studied pharmacy. And the thing with these machines that you have in audio engineering, like synthesizers or some compressor or something, is that these knobs that they have are very intuitive. If you work with it for a couple of hours, it becomes second nature to find something up or down. And we don't have the same thing in biomed engineering or in software engineering in many places. So we need to add these knobs and dials, look at the users, what they do and how to make it more accessible to them. So there is this open question about how to do this. Do we do a web app where we have like, we can have some graphical components where you drag and drop something and then it writes this configuration YAML for you. Let's say I drop a gene here and protein and draw an edge between and then you know you still need to get the data so if you have already a pre-made adapter do we have like an app store of adapters that you can then load and say i want to combine drugs and proteins and genes and maybe add some of your own data so this is something that we are very interested in and which we didn't have much time to develop yet. This is a community project. We're also very open for people having ideas in this space. Or if someone is a web developer and says, I can write a GUI for this thing in two hours, then you can also try that. I played around with Streamlit last week a bit for some other thing where we attach the knowledge graph to large language models. And then we have like a chat GPT app like to communicate about biomedical findings. And yeah, I would like to go more in this direction. But since I can't. How did that go in general? Very good, Very good. Very good. So with my basic Python knowledge, I've managed to get it working in like a week, which is much faster than I expected. But yeah, there's always like more powerful frameworks and, you know, people with lots of experience that can do a much better job of making these intuitive interfaces. Exactly. lots of experience that can do a much better job of making these intuitive interfaces. Exactly. How the user interacts with such an interface and then making it actually usable. Yeah. Yeah. Especially when you're talking to the experts of the field, they don't want to talk about anybody in particular, but I can imagine that they are not really developers or don't want to write terminal queries. But the backgrounds are super distinct. So maybe someone comes from computer science, maybe someone comes from biology. These two types of people have so different education in terms of how they interact with the basic things, right? And they don't know what does it mean if there is a knob here or a slider there or what to do with this. That's great. I think that's super good. So if you're watching this and think, hey, I want to write a web app for this, get in touch with Sebastian and the team so it can happen. That would be, I think that would be great to see. That's our main aim. Basically make everything open and try to engage as much of the community as possible, which works fine with the adapters so far. So writing adapters is not hard, but there are lots of adapters to be written potentially, and this just takes a bit of time. So the more people you have, the more you get things going. Did you try stuff like, for example, in Neo4j, you can visualize some, I mean, this is obviously after the fact that you created your knowledge graph, but with Bloom, for example, being a visualization tool or other technologies, you can, at least the representation of the graph can be a little bit more, let's say, easier to understand or... Yeah, I like this. This is a good example of downstream usability. So, for example, this Metagraph is a very basic example of how you can make this table-based GitHub layout more intuitive for people, or you have a graph where you kind of naturally, you get the pipelines which are in one area say genetics and then other pipelines that are more in proteomics or in um clinical stuff you you can more easily browse this and later on with with bloom as well just for figuring out how it works and in what ways it can be useful. But as you say, it's more a downstream thing. Totally. Yeah. Yeah. And also the graph data science thing is probably very interesting. Yeah. Also could be an incentive to go for a Neo4j graph as opposed to, for example, even a Rango, which is also graph based, but which doesn't have the same, I'd say intuitive things where you can just do some clustering, if that's what you're interested in. But for example, the IGBF people, which needed very specific genetic functionalities, like looking at the genomic region and then quickly returning which genes or which exons are in this region, they did some benchmarks and they went with a wrango because there you can have ranges, which wasn't the case in, let's say, SQL or another label property graph engine. Yeah. But that's very subjective. And that's why we think this more general framework is the way to go. Yeah. That makes sense. Yeah. This is basically it from me. You have the link. I need to thank all the people that worked with me on this. I should have updated this lab photo because this is just, we have several lab photos flying around with always half people missing. Actually, we're now like 35 people. Cool. And it's a nice lab. We are constantly looking for people. So if you do want to do a PhD or something like that, you can also get in contact. And with this, we can maybe, if you want to go to this more technical thing. We can. There were two questions I'd like to maybe talk about before we do the short hands-on part. From Wenliang again, where does the protein-to-protein interaction information come from? Can it be from different resources of your own data in your Uniprot example? Yes. So Uniprot doesn't have protein-protein interaction. It just is a very comprehensive resource on the proteins themselves. And for example, Omnipath, which we have a secondary adapter, as we call it, but since it is an enormous resource that has been growing for almost 10 years now, it obviously is tricky to migrate this all at once. So the secondary adapter is a nice option to have this already. And in terms of the protein-protein interactions, I think OmniPath even integrates like at least 30 of these. And yeah, there are individual atomic resources, say like string, which many people use, but if you want to combine it with something else, and often it comes from papers or something, so there's just some supplementary information in the paper which you'd like to use. And then in OmniPath or in PyPath, we create a function where we have this file and we add it to OmniPath in this very specific way. And in the same way, we imagine that you can just combine adapters for the different individual protein-protein interaction resources to build a knowledge graph. Cool. And then the other question was from Diego, and I guess that's a little bit what we talked about before about how to get in touch. But you mentioned the issue of support in academia in smaller in smaller groups what is envisioned in terms of keeping bios have a current and relevant just an active user community or additional institutional support well we do support bios cipher definitely but yeah we are only a handful of people and for this, we also want to build a community. So this project is not going to end up unsupported in a very brief time frame by us, as we can, for example, see with Omnipath, which is now supported by the same lab for almost 10 years. We do have a good track record, so to speak. And I'm the one that organizes all this but we have connections if you read for example the paper there's lots of people on that mainly from academia but we're also working as i said with for example open targets which brings some pharmaceutical companies in the mix and basically our opinion is that we do everything in the open. I also like that about Neo4j, that you have this, if you're open, we're open and graphs for what was it again? Yeah. I like these types of initiatives and this is exactly the way that we want to go. So everything is in the open. And if you want, if you think this could be useful either if you're a potential user or you want to develop something i would just recommend to get in contact and you will very quickly see how it works so this zulip thing just requires some signing up and then you can immediately jump into the interaction okay cool no cool. No, yeah, and everything is on this very website here, the Bioscipher team page. All right, officially we are at the end of the session, but I think we should try to get the demo in here. So obviously. Yeah, it's also very brief. It's just I wasn't planning on writing an adapter or anything. No. Because I can visualize some of the functionalities that we have. So this would be, for example, the schema configuration where we map to the ontology. We did actually go the way and revive-vived, so to speak, an ontology that was created 10 years ago, which is called Biomedical Resource Ontology, or for short, BRO. I really like the name. Basically, they did it 10 years ago to fuel a platform where they talked about biological or biomedical resources. And this seemed very fitting for our purpose. So we now use this to grab the data from the GitHub page that I had in this slide. You just, so maybe. Component page. We use the GitHub API to get this information in a BioCipher pipeline, which is what I'm showing here. So this is the mapping. This is the configuration. It says it's offline, so we don't talk to a running DB. We just write these files for input. As I said, we have this head ontology where we use mostly the BioLink model, which is just a very general overarching thing. And then for the information content entity, we join the resource, which is the top node of this biomedical resource ontology. We join it to this ontology, and then we have the combination of them. We have some Neo4j set up here, and then this adapter takes care of getting the data and giving it to the framework. And then we have this Docker Compose set up, which is three stages, mainly because we want a read-only instance that we can mount to this meta.bioscipher.org page. to this meta.biocypher.org page. And just to demonstrate what it does, we can run this. Don't mind this here. I'm on ARM architecture, and we build. The server is a Linux one, so that's why it gives this error message. So we are at the build stage, where we just pull this BioCypher image, which we have on Docker Hub, and it installs. And then it will run this script, which uses this adapter here, which is basically what I showed before in this usage thing. So we set up the environment. And once this is done, we start BioCypher. In this case, it's 0.5.9. And then it runs this creation script, which should run very quickly. We are calling the API at this point and then loading the ontologies. And now it's already done. So it joined the ontologies and it wrote for all of these different types that I defined in the adapter it wrote the corresponding neo4j input in this file here and it also shows you that we didn't we found duplicates in users because we have multiple connections between these issues and it shows you the structure so So we just have these very simple associations where we have part of and uses and then we have adapters. So we have input formats, data types, resource types and then the various adapters, input adapters, ontology adapters and output adapters. Then the second stage starts which just mounts the Neo4j Enterprise Edition, and it imports these files that have been generated by BioCypher. And then it stops again because we want to mount a third one that is read-only, which can only be done by importing it, stopping, and then starting another instance with the same database. Okay. Yeah. And then you can go to localhost or to meta.bioscipho.org. We don't have a password, but it's read-only. And then if you look at the entire graph, this is what I mentioned previously, this is the browsable overview of what we have in BioCypher. So if you, for example, are interested in say, oh, this shows ID. No, that's actually not what I'm interested in. So say you're interested in genetics. You could see the adapters that use genetic information. Then you could also see the pipelines that use these adapters. So for example, the IGVF project that I mentioned, this OncoDash tumor board thing uses a genetics adapter. And you can go either to the GitHub project page or directly to the URL of the pipeline or the adapter. So here is the OmniPath adapter, for example, and this is the OmniPath pipeline. And for biotech, Yana is a cell communication knowledge graph that we are building and so on. Yeah. So this is basically it, just a demonstration of this meta graph and what the BioCypher pipeline does, including the Docker integration to make things. And then it's there and then you can work with it and go and score. Yeah, and in multiple stages it records, for example, we have version 3.2.1 of the BioLink model and we have version 4.0.0 of the biomedical resource ontology. Yeah. And these files are imported into Neo4j in one second. So you can just zip these up and share them with someone. It's a zip file of maybe 500 kilobytes. And it's immediately reproducible. Of course, this can get much bigger in actual genetics application cases. But yeah, you can do lots of fancy things around these. So for example, the project which is probably most advanced is this one that I mentioned from Stanford, this impact of genomic variation on function, where they just took BioCypher and wrote like 10 adapters and are creating an API and also a GUI, I think, on top of that, and they did it in like three weeks. So that I think is really impressive. And the good thing is that we have these adapters in the GitHub and in this Metagraph and they are linked. They're open source. And if you want to use one of these resources, you can basically immediately reuse this adapter without having to write something yourself. Cool. That's great. Question from a couple of people is, is this sample you just showed here, is this available on the GitHub repository or can people... This is called Metagraph. So if you go to the website, we also link to the... So if you go, for example, to adapters, it explains about this thing, what we have in the adapters. We have a tutorial about the adapters, but we also have this Metagraph, which this is the link to the online version and this is the link to the repository, which on the main branch, there is a version that is not read-only, so you can modify it locally. But the read-only one is the version that we run on the server to provide this meta.bioscipher.org instance so that people can't just... No, of course. Yeah, that makes sense. so that people can't just know of course. Yeah, that makes sense. No, I mean, and then this is this is maybe so just to know that I guess to explain this is this is the meta graph of of of of the bio cipher project. And if you if you combine your own knowledge graph with a couple of these adapters, then you get your individual kind of. And you can I, I would assume that looking at these projects, so this components board is basically the main thing. That's also where the metagraph is built. So if you're interested in these things, you can just look at, say, you're interested in a pipeline, we look at crossbar, we have links to the individual adapters. So here's string, it's a protein-protein interaction database. And you can look at what the stage of the adapter at the moment is and then see whether it can reuse something, see whether you can get some inspiration about your own adapters. That's what I would recommend. And, of course, if you're new to the concept, then I would recommend going through the tutorial, which we have on the page where we go through basics, ontology considerations, and then also about the adapters. Yeah. And there's another question. How... I think you mentioned this briefly, but how would you say... I mean, let's say you initially created your knowledge graph of a couple of of resources and you want to let's say update that you you have uh something changed or so let's say every month or so you run a update is that something you would do via the the the BioCypher connectors or yeah basically this is what we already do with OmniPath it's not on BioCypher yet but this was the incentive for developing the framework in the first place. And we have a daily build. So if something changes in the resources, either it's the same resource that you already represent. So you just download the new data and it runs. Maybe if they added something, BioCypher will tell you, look, there's this new node here, which you have not in your configuration yet, but that's fairly easy to deal with. If you want to add an entirely new adapter, you would go the same way as you would when creating a new one. You just take your existing pipeline. You see whether someone already has done an adapter for this resource. If not, you need to write one, of course. And then you can also share it and everybody else can just use this adapter. And you basically need the adapter and you need to configure the adapter where it should go in the knowledge graph. So say you want to add information on diseases. So you reuse one of the disease adapters that we have, for example, from the OpenTargets platform, you say, give me the diseases and the categorization in ICD-10 and maybe what species they are in. And then you add this to your configuration and then BioCypher will just process the data and add it to your graph. I say just... Yeah, but that's super and makes it easy. And you don't have to worry as much about... You have to worry about keeping your graph up to date, but the process sounds at least very easy. So you don't have to do it all. There's all kinds of tricky pitfalls in this process, right? So if you're dealing with something that is conceptually very complex, like diseases, you always have issues with harmonizing stuff. So we are fans of this adapter layout because there you can also, so you have three resources, three primary adapters for some protein protein interaction stuff and maybe some of it is duplicated maybe some of it comes from the same paper and you don't want to count this information twice so there's always some logic going on but the good thing is that you can add this logic at any point you can add it in the adapter you can also have like a secondary adapter between these three initial adapters and your final script. You can also add some logic in the script. So it remains transparent what you do. I'm not saying we are single-handedly solving all of the harmonization issues of the field because that would be a bit much. We just want to make the, you know, the menial tasks of the data crunch. We want to make that as quick as possible so that you can dedicate time to the more important tasks or the more difficult ones. Thank you for doing this. I think there are lots of people who are struggling with this and who are probably tearing their hair out by having to do this regularly. And, and this makes it, makes it much easier. But technologically, as I said in the beginning, I think not even half of it is the actual technology or very brilliant Python programming that we do. Most of it is just communicating, getting, having a place where other people are that have the same problems and talking to them so this this kind of community management thing and bringing people to one place which so far doesn't exist in in knowledge representation as far as i know i mean there are ontology conferences but usually only ontology researchers go there and not somebody that is a PhD student somewhere and wants to write a knowledge graph for their particular task. So I guess that's where the democratization comes from that we have in our title. Cool. Yeah, I see no more questions. So yeah. Thank you very much, Sebastian, for presenting, for taking the time. Obviously, thank you for you and the BioCypher community for developing BioCypher. Seems to me like a great project. And again, invitation, open invitation, anybody. And again, invitation, open invitation, anybody, if you either want to use it or want to contribute, want to be part of BioCypher, have an adapter to write or have, we heard, a web application is a need. So if that is something you want to be part of, then reach out to the team, either via the BioCypher website or via the GitHub page. And you find the contact information website or via the GitHub page. And you find the contact information there or the what's called Zulu board for chat. So I think that's the easiest way to get in touch. Try it yourself. Obviously with the tutorials, I think it seems to me very easy and very user-friendly to get started. So that's always helpful. And yeah. Any final words, Sebastian? No. Thank you. Thank you to everybody for listening. Thank you for the invitation and to tell you a bit. And yeah, don't be shy. If you think this could be useful for you, and even if you think, think yeah my programming skills are not great or anything just join the community there are similar cases as that and we try to help where we can or we try to facilitate everybody helping one another so to speak yeah that's great so it sounds like a friendly community. So definitely join that. And yeah, with that, thank you for being here. Thank you, Sebastian, in the name of the Bioscipher team for developing this. And yeah, I hope you had a good session, everybody. I hope to see you soon at one of the next live streams. What's coming up is in this week on Thursday. So in two days, morning European time, Michael will do another episode with having fun with chat GPT and using chat GPT as a co-developer, co-coder for a fun project. So that's happening on Thursday. And then, yeah, next week going meta. So keeping the launch graph kind of going, we will do episode 16, Jesus and I, on knowledge graphs on Semantic Web and on ontology. So I'm not sure what the exact topic will be next week, but if you're interested in knowledge graphs, then tune in on a Tuesday. Yeah, when learning the others, videos will be available offline on YouTube. So if you go to youtube.com slash Neo4j, you'll find our channel there and then you see all the videos in the in the playlist. So everything is available afterwards. To watch again. Yeah, with that, thank you for watching. Thank you, Sebastian. And yeah, see you soon at another live stream or somewhere around the globe for Neo4j events. Take care, everybody, and have a good rest of your day. Bye-bye. Thank you. Bye-bye. | Neo4j Live: BioCypher | 4,834 | Neo4j | 20230426 | Building a knowledge graph for biomedical tasks usually takes months or years. What if you could do it in weeks or days?
For this exact reason, BioCypher was created to make the process of creating a biomedical knowledge graph easier than ever, but still flexible and transparent. BioCypher is built around the concept of a “threefold modularity”: modularity of data sources, modularity of structure-giving ontology, and modularity of output formats. This design allows for a high degree of flexibility and reusability, rationalising efforts by leveraging the biomedical community.
Links:
BioCypher https://biocypher.org/
Github https://github.com/biocypher
Paper: https://arxiv.org/abs/2212.13543
Sebastian Lobentanzer https://twitter.com/slobentanzer
Graphs4Good: https://neo4j.com/graphs4good/
0:00 Intro - About Sebastian Lobentanzer
11:10 BioCypher Overview
1:04:15 BioCypher Demo
1:17:20 Wrap-Up
#neo4j #graphdatabase #knowledgegraph #lifescience #biology #biotechnology #healthcare | 2024-09-11T20:08:15.397717 |
https://www.youtube.com/watch?v=CykkC80L_ck | Hi there, I'm Sebastian. I am a postdoc at the University of Heidelberg and I will talk today about our ecosystem for manipulating and then using knowledge. It is not strictly constrained to biomedical knowledge, but that's kind of our environment, so that's what we kind of tune into. If you want to follow along, we have this demo dataset here, which I created recently, which you can also try to start up on your own. If you want to chat with the model, you need an OpenAI API key at the moment. You can start the repository if you don't, but you will not be able to use the model. If you don't, I did the same here just now. So I omitted the OpenAI key step because I already have it in my environment. And then basically what you do is clone and then docker compose up you need docker for that obviously and this is a data set that comes from one of the neo4j data set repositories this one here it's called pole and i will be talking a bit about the general And I will be talking a bit about the general background of the project and then go into a bit of demonstration of what it does. So generally, we'd like to democratize knowledge representation, and this is a bit abstract. So we could also say we want to increase the accessibility of knowledge and the reproducibility of the pipelines that build the knowledge and the reusability and the flexibility of these knowledge pipelines. And bottom line, this comes down to increasing the community, like people telling each other what they have done and then communicating also through some more permanent solutions like repositories and i will be touching on two main points one is the knowledge engineering in biomed specifically but i think in other cases as well we have the problem that we need to engineer our knowledge to make it usable and the second one is then the more recent example of LLM technology and making the same what we do for the knowledge also available for the communication with the LLM. So if we look at biomedical data in general, I think it's fair to say that we do have a lot of data, but we still don't know absolutely everything about the problems we try to solve. I think it's fair to say we are quite far from knowing everything. As we see in this little animation in the background, it's pretty hard to comprehend with one human brain what is going on in individual cells, for example. And this brings me to what I call knowledge engineering, which is also what humans do. We extract knowledge, for example, from the papers we read. We represent the knowledge in our heads. Most of us don't need to think a lot about how we do this. And then we use the knowledge to generate ideas, draw conclusions and so on. If we want to teach machines how to do this we need to deal with the constant updates that we have and the arbitrary formats that our data come in. And so the first framework that we developed is called BioCypher and takes care of the knowledge representation aspect. It is a modular framework to make it flexible and reusable. And there are three types of modularity. The one is on the input side, where we have different adapters that deal with different input data sources. So depending on the format, depending on what exactly they contain, and very technical things like what are the column names or something like this. The second modularity component is ontology. So we want to instill some domain knowledge into our models. So in the biomedical case, we could, for example, use the gene ontology, disease ontology, cell ontology, and so on. We bring them together using a very constrained configuration in YAML format, which is nice to then also communicate to language models. And the third modular component then is the output side, where you could use a label property graph, such as Neo4j, which is kind of one of our standard use cases, but also simple representations like a Pandas data frame. And this then ultimately should help you to maintain and analyze your data but then also to archive it in a very reproducible fashion and share it alongside applications that you may do on the data but the knowledge isn't everything we also need some insights we need to generate some value from the knowledge and this is what brings me to the second framework which kind of started out as just a front-end application of some sort of langchain implementation that we've heard in several other talks today already. This was originally just a demonstration of what we envision, and it's basically a feedback loop between a researcher and a large language model. We engineer the prompts and then we bring some domain specific things into it. For example, we make the model aware and tune the model for the output of particular bioinformatics methods that we would like to interpret. We can also connect databases for injecting prior knowledge, but also as a long-term memory storage of the model. And then we can bring in also more complicated chaining applications in the vein of long chain. And since we noticed that not only we had these ideas, but that many people that we knew and collaborated with also have, for example example web applications that they would like to fuel using the modern capability of lms we decided to do another framework which now we call biochatter which is all of this functionality the base functionality incorporated into a python library and then we can distribute these functionalities among the different front ends that we use. So how do we make these two modalities communicate? Particularly since the knowledge graphs in question more look like this hairball here and the language model in their abstracted form look like this. And this is what brings me to this configuration that I mentioned before. So we are using ontologies as installation of domain expertise into the machine learning space, because human experts learn lots of things from growing up and then learning how to interact with the world and then also learning medicine or biology which makes them much different than specialist language models and what we can do in BioCypher is we can project the resources that we load in our pipelines onto specific ontologies. And we can even merge them. So you could, for example, have a very generic base ontology such as BioLink, but then if you're interested in sequences, you can hybridize the sequence ontology subtree to it there, and then you can profit from both implementations of ontologies. And this then brings us to how we connect these things. We have the knowledge graph and this YAML configuration that I mentioned. And it's just an example where we would say we have a gene and a disease and a gene to disease association. And with this, we can also generate the representation, how it would look in the knowledge graph in terms of writing the cipher query. And this we give to the Bioshatter framework to then create this conversational loop to enable the LLM to ask the knowledge graph if it deems to do so. But we can also think about other applications like an orchestration setup where we have a primary AI that leads the conversation, but then also determines that it may need to do a task, which is, for example, parameterizing an algorithm, doing a linear regression or something more complicated, or calling a vector database for some semantically similar text embeddings that we can also use to do in context learning later. And all of this we try to collect in this biochatter framework, which should be like an aggregation of these functionalities that we would like to use. So with this, I need to thank my collaborators and sponsors. And I need to say that everything that I've mentioned is free and open source. And you can find us on GitHub in the individual repositories. There are also some other repositories of demonstrations or implementations, documentation and a demo of the BioCypher. The biochatter slash chat GSE app is also to be found on chat.biochatter.org. And now I'm going to go through this demo repository and see what it actually does for us. So this is the demo repository that you can run yourself derived from this Neo4j example. Here is a sandbox of the Neo4j example that I created on the Neo4j sandbox server with this kind of guide, which we will use to go along. Here is the database that was started by the Docker container, and this is the app that was started by the docker container and this is the app that was started by the docker container. The same app that you can see on chat.bioscipher.org but the online version is kind of limited and you can only look at a demo. Oops. So this is the same repository, the demo repository, and usually when I create one of these pipelines, I go through a process that I have also recorded here in this Markdown file. So, the first thing is to explore the dataset. Which classes are in there? How large is it? How heterogeneous is it? And then kind of try to find a way around it. What I did in this case is I used the Neo4j dump, and I exported it as a CSV, and you can see that this is not super readable, but it contains all the information. We have persons, locations, phones, emails, and then at the very bottom, we also have some relationships, phones, emails, and then at the very bottom, we also have some relationships. And all the adapter that I mentioned needs to do is then find its way around this CSV and present it to the BioCypher pipeline in somewhat standardized format. So kind of like a data model, in a sense. And we put this together using a Docker Compose. And this is also available in our template repository. So if you want to build a thing like this, I would recommend to start from the template repository, which already contains these steps. This step I added. But in general, we built the BioCypher representation of the data. Then we use these two containers to import and then also deploy in a Neo4j instance. We use two steps because the two-step procedure enables to mount read-only databases, which is nice if you want to share something on the internet but don't want people to manipulate your database. If you only want a local modifiable database, you could get away with one step here. But this is the most generic interface so we do this and then i also added this chat gse container image here that can access the graph that is started on the on the docker network the orchestration of the build process is in this case a small orchestration file. For using BioCypher, you just import it and then you need one or multiple adapters. And the great thing about these adapters is that they're reusable. So we have this component board on GitHub where we can look for input adapters, output adapters, ontology adapters, and entire pipelines that use these adapters. So this is the IGVF project and we also use this board to generate a meta graph with the BioCipher pipeline so that you can have a knowledge graph about the knowledge graphs that we build. You can find this on the GitHub. And once you have the BioCipher instance with the configuration and the adapter, you simply need to get the nodes and edges from the adapter. And then in this case, we write it because we want to use the Neo4j Admin import thing. And we write the import call and the schema info which is an extended version of this configuration that I've shown in the demo in the presentation and we also have some useful information afterwards so we can also run this to kind of see what running BioCypher looks like. And here we see that we start up BioCypher, we do some logging, we load the ontology, where we do the mapping, and then the adapter generates nodes and edges into an import call file. And then we also get the types of entities, depending on the ontology that we loaded, that are in the graph. So in this case we have objects and persons, and some of these persons are officers. We have crimes and phone calls, and we have certain facts like someone is involved, an object is involved in a crime, it is investigated by an officer, some person knows another person the crime occurred at a location and so on so this is fairly straightforward but it requires some configuration this is what what is not shown here because it's kind of the default setup bioscipher looks for this you can also change it but bioscipher looks for this. You can also change it, but BioCypher looks for this. We have basic settings. So this is, for example, where we tell it that we use this ontology. So in this markdown file of the creation process, we are now at this step. We find a suitable ontology that maps our classes effectively and has the correct scope for the question. And I don't know my way around crimes very well. So I just did a Google search and I identified this ontology that is kind of basic but useful for this case. And then what we also can configure is what exactly are the nodes and edges that we have in the knowledge graph. So we start with the person. The person is not in the ontology, so we can also attach it to the ontology by saying it is a child of the actor in the ontology. Then we need to state what we expect as an input label for a person. And here I just put the exact label that we would find in the csv so we don't need to do any processing of that field we can just pass it on and then we also can filter some properties so for the person we want name surname nhs number and a phone and an email what bioscipher then does after you run the pipeline is that it also generates a schema info DML with extended information that is useful for communicating with the LLM. For example, we explicitly encode whether something is a relationship or not, which is useful because sometimes we represent relationships as nodes. So we reify these relationships and then you could get confused about whether something is a relationship or just an object. And also whether it's present, so we don't send anything to the language model which we know isn't present in the knowledge graph because that will just return an empty result. And that's how we deal with kind of the connectivity, which we'll see in a bit. And then lastly, what we need, of course, is that the adapter goes through the CSV file and adapts it to the BioCipher format. And for this, we do a simple configuration, which is kind of duplicated effort at the moment and we're working on making this more more streamlined. But it is very similar to defining a data model in the sense that we say we have different node types, persons, offices, locations and so on. And then for all of these node types, maybe we also have particular fields properties that we want to use like name, surname and so on. And we do the same for the edges. None of the edges that we actually use has a property, so it stops there. And then we also have the adapter with the main functionality of getting the data. So knowing what it looks like and then doing the ETL stuff, but then also doing some maybe abstraction processes. abstraction processes. And the most important function, of course, is getNodes and getEdges, where we pass the knowledge then to BioCipher in the format that it expects, which is for nodes, an ID, a type, and a property dictionary, and for the edges, ID, type, and properties, but also a start and end node. And we can do some more complex stuff. For example, we could abstract away the telephones in the original poll dataset that Neo4j uses. We have a telephone number, which is owned by a person, and the telephone receives calls or makes calls. I didn't want to have that in this database, so I made a simple function to just get the phone number projected onto the person that owns it and remove the phone note so that the call, either someone made a call or received a call, goes from the person directly to the phone call and not through the telephone number. And basically that's it for the knowledge graph creation process. And then we go to the usage in Biochatter. And for this, as an example, we have this ChatGSE front end here, which has some other functions if you're interested in those check them out for example we do prompt engineering to have robust prompt sets and in context learning where you can use a vector database to do semantic search of a couple of interesting documents that you think are relevant to your question. But what I wanted to talk about today is the Knowledge Graph connectivity. So we have here the Knowledge Graph connection interface. Usually it looks for the local host, but as we are running in a Docker container, we need to set this to the container name that we're using, which is deploy, the deploy step in the Docker pipeline. And since the graph was built by BioCypher and we have this line here in the creation of the knowledge graph, it already has the schema info as a node in the graph. as a node in the graph. Otherwise you would be able to give this schema info YAML file to the front end here via upload, but we don't need to do that because it's already loaded from the graph. It's this node here, which has basically all of this YAML file in a property. So then we can go to the question-answer scheme. If you have loaded a schema configuration or info file, you are presented with the content. So this is again the YAML file, but just as a usability, user-friendliness measure in the website. And then you can start asking your questions. For example, if we say here in the kind of model guide that they have, let's look at the types of primes and the number of times each occurred. So instead of writing this cipher query, you could just say what are the top primes maybe that works already no it doesn't All right. So this gives you a cipher query, which says match C crime, return C type as count order descending it's only one if we rerun this query don't run the language model again but we just update the query that is sent to the knowledge graph that is connected so we can see a couple more of them and it seems like violence and sexual offenses are the most and then public order and then criminal damage in ar, which is the same that we should receive with this query. Oh, maybe the sandbox has disconnected. That's unfortunate. But let's go on. So what is interesting about this connectivity, I think, is that the language models appear to be understanding fairly limited questions by, for example, if I enter who primed the most, which barely contains any information about what the relationships and nodes are called in the knowledge graph, it still manages to create something that works. And if we return the entire person node, then we can see a bit more information. So, of course, this is all randomly generated data, but Philip Williamson was the one that was associated to the most crimes. And it also works for fairly complicated problems. So, I saw this one, the vulnerable persons investigation, where you have, where you try to match for a person that has many friends that were associated with crimes, but that wasn't associated with any crimes themselves. So if you manage to ask the language model in the right fashion then this also works so let's see who has most...involved in crimes that was not involved in crimes themselves. And we see that it replicates the general pattern of the query. You don't get a result yet. Ah, because we have isRelatedTo, which is the family relationships. If we use the designation in the graph directly, let's see if we can get the language model to return the right thing. And then we kind of see something that looks like it's accurate. So at least it returns the result. So we have a person that knows another person that was party to a crime, where the first person wasn't party to the crime, and then we just count these friends and order descending by number of friends. Let's see the first five people that would be vulnerable to these, and it's freeman with seven friends bonnie gilbert with six actually robertson with three and so on of course we need to validate whether all of this is actually true but at least we get the query and we can modify it and this is basically all that i wanted to show it's very much work in progress and it's all open source and very collaborative. We are currently writing the paper about the BioChatter repository. So if you want to get involved, you can do so at your own pace, either by joining our Zulip chat, writing me an email or directly on GitHub. And now just have a look whether there are any questions there is something about the ontology yes so as i said i i'm not hugely invested in crime investigations so what I did was googling something like crime ontology this here probably and then I found a couple of papers and decided on one of them because I was able to access the web page where they had the actual ontology. And this is basically about it. So it took me a bit of reading to figure out whether there is actually a usable ontology. And actually, BioCypher helps you in looking at the ontologies. usable ontology and actually bioscipho helps you in in looking at the ontologies if you uncomment this here then we we can see maybe i comment this so that we don't need to run everything again So this particular function here, if you have a configuration already, is a nice overview. But even without having a configuration, if you just load the BioCypher with this ontology as the ontology to use, then you can manipulate this NetworkX object that we have as an internal representation. And then you see what is in the ontology. This includes now some of the things that I did in the schema configuration as well. So for example, making a person a child of an actor. This is why the person is already in this representation here but this is the way that i usually work with these things all right i think we're at the end of our time and i don't see any other questions so that leaves me to just thank you for your attention. Remember to get in touch if you're at all interested in these types of things. And thanks to Neo4j for hosting me. And enjoy the rest of North 23. | NODES 2023 - BioCypher/BioChatter: An Ecosystem for Connecting Knowledge Graphs and LLMs | 1,749 | Neo4j | 20240304 | A synergistic relationship exists between knowledge graphs and Large Language Models (LLMs) in biomedical research. In this session, Sebastian will present open-source frameworks for knowledge graph creation and LLM interaction to increase accessibility for the research community. You'll learn about the BioCypher library, which enables the creation of knowledge graphs for various biomedical applications through its Python framework. The BioChatter library will also be discussed, showcasing its role in facilitating interactions between human researchers, LLMs, knowledge graphs, and vector databases. Attending this session will give you insights into advanced research workflows, fostering accessibility and reproducibility in your work. | 2024-09-11T20:12:58.129348 |
https://www.youtube.com/watch?v=your_new_url_here | you you you you you you you you you you you you you you Thank you. you you you you you you you you you you you you | AI Agents Hackathon with Lablab and MindsDB - Kickoff Stream | 7,409 | lablab ai | 20240914 | Join us today for our AI Agents Hackathon!
We are excited to welcome speakers from Together.ai, Upstage, AgentOps. CrewAI, Composio, Autogen and Meta.
Each will bring unique insights and expertise to help you learn, collaborate and create innovative AI-driven solutions. Don't miss out on this incredible opportunity! | 2024-09-13T20:17:42.250581 |
https://www.youtube.com/watch?v=bgAhZdqHWHg | you you you you you you you you you you you you you you Thank you. you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you Thank you. you you you Also, everybody can take a seat and we'll start in a few minutes. Thank you. you you you you you you you you you you you you you you you you you you you you you you you you Teksting av Nicolai Winther Welcome everybody. I'm very happy to see so many faces here. Welcome to this AI Agents Hackathon today in San Francisco. We also have quite a few people joining us online. Right now they're watching live. So I'm very happy to see you all. My name is Emanoel Yarte. I will be your MC for tonight. I'm an AI architect at New Native and LabLab AI. So, first of all, I would like to welcome Bibu to give some welcoming words. Thank you. to give some welcoming words. Thank you. Hey, I'm Vibhu. I'm here with MindsDB. Welcome to our office. So just to kick things off, I want to thank all the sponsors, everyone putting this together. So we've got AgentOps, TogetherAI, Upstage, the official meta team is here, LabLab. It's been a lot of work to get this done. So it's nice, welcome to everyone online as well. So instead of giving you slides about MindsDB, we've got a quick start rolling around. We'll have a workshop. Our team is here, so we're gonna save time for everyone else. Just quick logistics and everything, restrooms are there. We'll have schedule and everything going on here. But yeah, we're really excited to host events like this. So welcome to our space. We host a lot of things. We host a lot of hackathons. This is part two of our agents hackathon. So with the agent ops crew here in April, we did the first one. We had the meta team. We had a lot of great hackathon energy. So we're excited to see what people build next. The theme of this hackathon is agents and part of that is what we're working on. So we actually have a big launch coming up on September 24th. Everyone here, you're all welcome. So we have a big celebration in the afternoon. So that'll kind of be our little launch party. We'll have a live stream and everything. We'll send out details for that. But yeah, come hang out. MindsDB, we have like a pretty big open source AI platform. So we help you connect different data sources to different AI like engines. And we help you build this real-time automation build AI application. So all around, you'll see our quick starts. Feel free to use our platform. It's fully open source. You need any help, we're around. But before kicking into too much of everyone's time, we just want to, you know, welcome everyone. We'll have a lot of mentors. We want to invite everyone to our space. We host other stuff. Yesterday, we had the OpenAI team here. They did a big Q&A of their O1 model. We might have them come back throughout the hackathon. So if anyone's interested in the new model, they might roll around. Some people from Google will be mentoring. And yeah, it'll be good energy. We're around here as mentors, sponsors, hosts, and everything. But before that, let's kick it off to some of the other speakers and talk. So bring it back. Thank you, Vibhu. Well, I think he described it very well. We have three very exciting days in front of us. So this is going to be the schedule for the next three days. Right now, of course, we're going to have very interesting speakers coming to stage. Then we'll do some networking. And tomorrow we open doors at 10 a.m. We'll be hacking all day until 9 p.m. And lastly, on Sunday, we'll open doors at 9 a.m. and we'll announce winners by 7.30. So I'm going to walk you through the hackathon challenge. Basically, you have to build an innovative......enhancing tasks and workflows. The idea is to demonstrate how your agent can transform standard processes, tackle complex tasks, and address pressing social issues. You can use any technology you like, but remember that our partners will offer special rewards if you use their technology in your applications. So the prices for this hackathon is $5,000 in cash for the first place, $3,000 for second place, and $2,000 for third place place and our special awards for the use of the technology from our partners. Best use of Composio will have two hundred fifty dollars and free Composio credits. Best use of DocumentParse and SolarPro will have three thousand in upstage API credits. Best use of Together AI will have 2500 in Together API credits and the best use of LAMA will have 3000 in cash. Also, well the success criteria for this hackathon is divided in four. The application of technology, how efficiently and effectively the technology that you chose is integrated into your solution. The business value, which is the potential impact and practical value of the solution. Originality, which is the uniqueness and creativity in addressing the challenge. And the presentation, basically the clarity and effectiveness of the project presentation. and effectiveness of the project presentation. So without further ado, I would like to give the stage to Simon Olson and Pavel Chek, founders of LabLab AI and Unate2. I'm going to stand like this. This is awesome. It's great to see so many people here. We came a long way for this, so I'm very excited that so many people showed up. I think I checked the numbers just prior to walking on here. I think we had like 800 people online or something like that. So it's quite a lot actually. But just to say a couple of words of what we do, what LabLab do, we started this business about two years ago. And we started as a full community business. We really tried to focus on helping developers get access to this new platform shift, these technologies that are coming out in the AI space. And by that time, it was quite hard to actually get your hands on the basic APIs, even for well-established models at that time. But we've been doing that now for two years. And really what we're trying to achieve is democratize the access to as many people as possible. We've been doing that now quite successfully all over the world, not only here in the Bay Area, but with a big focus on the European markets as well as the Asian markets. So we've had a lot of fun doing that, and we'd love to keep on doing it. So we're very happy to have such great partners with us. Thank you for the space as well, MindDB. It's great to have you take part of this as well as Meta, as well as Together AI, which we've done several events with in the past as well. So super happy that we're able to do this. And to say a couple of words about the challenge as well as Together AI, which we've done several events with in the past as well. So super happy that we're able to do this. And to say a couple of words about the challenge as well, I mean, this entire sort of agent shift, it happened quite recently. It was like a year ago, a year and a half ago. And I think that this is probably one of the biggest opportunities of a lifetime to fundamentally change the way that you build and operate software. And we have done anything really over the last one and a half year from simple, trivial media applications to quite advanced deep tech genome sequencing algorithms on our platform. And so I'm kind of excited to see what you guys are going to dazzle us with this weekend. I have high expectations. So with that, I'm super happy to be here, super happy to have you guys participate in this hackathon, and good luck. It's very nice to see so many people that are builders. That's okay. No, no, it's fine. It's fine. It's fine. Is it working? Test? Test, test? Yeah? Okay, guys, but the only thing I want to say is that it's so good to see people that are builders and builders at heart. And building really makes people progress. And especially building together makes us a lot stronger, right? And building together also allows us to fail. And that's also good. Not everything always is just one huge stream of success but I do believe that your products will be amazing. But it's very important that we're in a community here in San Francisco but also through Lablab you can find people in the community everywhere in the world. So there's people joining us from the US but also from Mexico and Indonesia and from Warsaw and from Sweden and everywhere else in the world. So you can find your tribe. You can find teams, you can find your tribe, you can find your colleagues and people that are as excited to build things together with you. And that's really the strength of builders and that's really strength of new technologies coming in that we get to start something, start something exciting. And it's not set in stone the future, the next big company might be in this room. I firmly believe that there's a shake in the foundation of business models. You can see it in Silicon Valley and people are building amazing new businesses and even the biggest, nobody's safe. There is nobody safe right now. Whether it's Google or anybody else, it doesn't matter. Not singling them out. So it's an amazing movement, amazing new idea, and it's the act of creation. So a lot of people don't understand that, but I just wanted to say that whether you're writing a poem or you're writing a song or you're writing a novel or you're creating code, it's an act of creation. So it's a very good proud moment And you can feel very good And Sunday is creating together in this space. Thank you for all the partners not repeating everybody. And yeah, thank you guys for joining us. And please make sure to get your accounts, find people here in this room, so build the teams, and also find people online and bring them over so you can do it together. Thank you very much, guys, for showing up. Thank you, Pavel and Simon. Yeah, I have to agree. I think AI agencies are one of the things that have the biggest potential to change how we do things every day. So there's amazing tools out there. There's amazing frameworks that we can use. I'm very excited to see what everybody here and the 800 people online will build in the next three days. So please, everybody, welcome to the stage Lucy Park, Chief Product Officer from Upstage. All right. Oh, I wasn't supposed to touch this. All right. All right. Oh, I wasn't supposed to touch this. Okay. Good to meet you, everyone. So it's really nice to be here. I hope everybody's up for some fun. I just want to ask a question before. Who here has heard of Upstage before? Oh, yeah. Oh, yeah. I see some hands. All right. Great. Okay. I see some hands. All right, great. OK. Do I have my slides here? Yes, yes. I sent the slides. Yes, I'm from Upstage. I'm Lucy Park. I'm the chief product officer. I'm here to introduce two products from Upstage that can power your applications in this hackathon. I'm calling them the Upstage Duo for you to memorize them quickly. I'm going to introduce some hackathon projects and enterprise use cases that we have seen people work on and that we have worked on before. About Upstage, we are an enterprise AI startup. We are based in Seoul and we're currently expanding to the U.S. And we are a Series B company and our flagship products are LLM and Document AI. Companies range from SMBs to enterprises. You can see Samsung, Intel, Hyundai down there. And we work particularly in finance, legal, and bio. And we have participated in a lot of hackathons and it's really an honor to be here at Lab Lab. We are working with a lot of enterprise customers to work on their use cases powered by AI. But there's a difference between hackathon projects and enterprise use cases. A lot of the enterprise use cases are very ordinary workflows. They're not crazy new ideas, whereas in hackathons, we play with very crazy new ideas. Sometimes enterprise use cases are must-have practices and rather not good to have. For instance, insurance claim processing is a very mundane task, but it is powered in enterprises and is very important for our customers. And so we experience very exciting hackathons to very important enterprise use cases, and these are powered by the Upstage Duo. So the first is the Solar LLM family. The second is Document Parse. The Solar LLM family is obviously an LLM. And we have three LLMs prepared for you. Document Parse is a tool to make your documents LLM ready. So let me just introduce them to you one by one. So the Solar LLM family consists of Solar Mini, Solar Pro, Solar DocVision. I will Solar Mini, Solar Pro, Solar DocVision. I will stress more on Solar Pro today, but let me go through the use cases before I introduce Solar Pro to you. We had an internal toy project several days ago using Solar Pro and using it to debate. Solar Pro and using it to debate. So Solar Pro is talking to another Solar Pro, and we're asking questions like, how do you make smart LLMs? And it debates within itself. And as you know, OpenAI released a model called O1, right? We made a Solar O1 today and launched it internally. And you can actually try it out yourself in this link. And it's actually fun. We replicated 01 with Solar Pro today in one day. And there was a hackathon project in healthcare where people used solar to avoid eating prohibited food. So they took in menus, parsed the text within it, and asked solar whether we can eat this food or not. So it was a quite interesting project. This is an enterprise project that was done with a company called Tanawa. It's e-commerce price comparison service with 10 million MAU. They wanted a service that, so they do price comparisons. So they wanted to extract attribute values from products, product descriptions. Say, for this laptop, what is the brand? What is the color? What is the price, and so on. And they worked on this price, this attribute value extraction use case with humans before. And they wanted to automate this process. So they came to us and we used Solr. We built a Solr e-commerce for them, which was a 30 billion private MOE model to automate these AVE tasks and to power more use cases within this company. And so there were very more, there were a lot of interesting projects that powered use cases. And I want to introduce Solar Pro Preview, which is a preview model of Solar Pro that will be released in November. And we think, in our perspective, we think this is the most intelligent LLM that can go on a single GPU. It fits snugly into a 80 gig GPU. And APIs are free until November 24. So feel free to try it out. It's currently ranked number one on the Hugging Face Open LLM leaderboard among models below 70 billion. So if you can't really trust our internal benchmarks, you can probably trust the benchmarks here. It's very easy to use. If you use link chain, Lama parse, probably the OpenAI SDK, they're all compatible with them. So feel free to use them out. And let me also introduce Document Parse. So Document Parse is out there in the world, because we know that many of the documents you want to process with LLMs are very complex. Sometimes there are multi-columns, sometimes there are multi-page, and they're visually rich, meaning that they have a lot of tables and figures inside. And that's the reason we built Document Parse. It converts any document so that it could be consumed by any text-based LLM. It inputs almost any document, PDF, JPEG. If you have DOGX, PPTX, that's fine. You can just input into the API, and it will output structured text like HTML, Markdown, or text. If you see the table on the right, you will see that it has a lot of merged cells. It doesn't have explicit lines between the cells, but still, Document Par parse is very good at extracting this content. We did an internal benchmark with single page document QA. If you ask a document if you ask document if you ask a question to this document say like in table, what's the performance of blah, blah, blah, it takes the answer out perfectly. So we did a internal benchmark within our company and comparing pipelines between HTML extraction with document parse and QA with GPT-4.0. We did multi-model QA with GPT-4.0. We did multi-model QA with GPT-4.0. We also compared HTML extraction with GPT-4.0 with QA with GPT-4.0. And the accuracy for document parse was the highest, which means it is very accurate in extracting all the contents within the documents. So why is there this difference? We kind of realized that with GPT-4.0, there were a lot of hallucinations. So when document has complex tables, when document has many letters, it's better to use document parse. And there are also other cases where you want to use document parse. Speed, accuracy. If you're interested in multi-page, There are also other cases where you want to use document parse. Speed, accuracy. If you're interested in multi-page, document parse is good at taking care of documents up to 700, 800 pages. We have a sync mode, so you would definitely be interested if you have very big documents. Interpretability, if you want to cache the documents, you want to be interested in document parse. There were quite a lot of use cases. Some student groups built a triage system for alcohol and other drugs by parsing forms using SolarLM to ask questions toward these questionnaires, and made a checklist, and finally built a triage system. Another team worked on legal use cases, where they had a contract, a really long contract, and then summarized the contents. We also worked on an enterprise use case, where we did something very similar, but this time, we didn't just summarize one document. We put in multi-documents and made a summarization for lawyers because it takes a long time for the lawyers to go through a lot of documents. So this one's also very easy to use. And we have good news for people that are on site. If you work on your project during this hackathon and you give feedback to us, we will give you a free T-shirt over there. Galena is RPM is ready with the T-shirts for you. And you can just take this QR code and send us feedback and we'd be happy to give you a t-shirt. Finally, if you scan this QR code, you can go to the Upstage console and have fun. We will have 200 credits for everyone and awards will be out there as well. Thank you for listening. Thank you. Thank you very much. I apologize for the small technical difficulty. So, yes, indeed, Upstage will be offering $200 in API credits for participants of the hackathon. You just need to use the promo code AIagentshack2024 to access the credits in the Upstage console. You can find more information about Upstage as well in LabLab. So, I'll leave that there for a second. Well, thank you very much. Now I would like to welcome our next speaker, which is Lorenz J. from Crew.ai. So please, everybody, a round of applause. How's it going? Can everyone hear me? People in the back, yeah? Sweet. All right. Demo time. Oops. Sweet. Demo time. Alright, everyone, my name is Lorenz. I'm a software engineer at Kru AI and today I'm going to show it off. Kru AI is the leading open source multi-agent framework. We're a universal framework as well. What does universal mean? Universal means you can connect any LLM provider from Ollama to Grok to OpenAI, Cohere, Anthromic, and even your own LLM. We have abilities to connect to any AI agent tool from like browser base that lets you autonomously scroll the web to things like Compazio, which is a sponsor here today, to the access search tool. to things like Composio, which is a sponsor here today, to the Exit Search tool. We're at a AI agent hackathon, so I'm sure everybody knows what AI agents are. But to those who don't know, AI agents are entities that take a high-level goal, like analyze market trends of my competitors, find the best talent for my job postings, to research the latest news about AI, or even create a marketing campaign for my product. It takes that high- level goal and generates steps, tasks, plans, actions, and needs to execute to achieve its goals. At Kru AI, we have a lot of powers for our individual agents, they have the ability to execute code in a Docker container, they have memory to understand what it should do, and has the abilities to use certain types of tools, to using any LLM to even function calling. At Krew AI, we've made creating agents super easy. All you need is a role, a goal, and a backstory. And then optionally, you can pass it some tools. Then you attach an agent to a task. Like hey, find the top three trending AI news headlines, and then attach that agent to that task. But what happens when your task becomes really complex, right? You want to research top candidates on LinkedIn. You want to analyze all their profiles. You want to create outreach templates for each and one of them. And you want to generate a report for your stakeholders. Well, you may not want to just use one agent. Instead you want to use multiple agents. And that's what we do really well at Koo.ai. We handle cooperation, role playing, delegation of tasks from one agent to another to help each other achieve its goals. Like in a team. You have the ability for caching and memory so it knows what tools to use. You can train your agents. As well as have guardrails to ensure that your agents are performing the way that they're supposed to. And we do this through cooperation and role-playing. And you could put all of them together with a CRUDE class, attach all your agents, attach all your tasks, and just define a process. So in just a couple lines of code, you have a CRUD crew. And we have different processes. A common one is a sequential process that has one agent to one task. And we have a hierarchical agent that has a manager agent delegating tasks to appropriate agents. And then you could chain all these together to run crews in parallel and asynchronously run tasks. I think this shows the power of agentic automations. And what can you use agentic automations for? We see them happening and impacting things in customer service to marketing, to sales, and even communication-based tasks. So, all right, we're at a hackathon. Let me show you how to build some stuff. So we're gonna automate the boring stuff with some agents. So I'm a software engineer, but sometimes I have to write some content. I have to write an example of how do you use Kru AI to build a LinkedIn scraper, right? So today I'm gonna show off how we can do a technical content writer with Kru. We're gonna have four agents. The first one is a researcher. The second one is a blog chapter writer. Then we're gonna have a blog writer itself. And then we're gonna have four agents. The first one is a researcher. The second one is a blog chapter writer. Then we're gonna have a blog writer itself. And then we're gonna have a tweet writer. At the end of all of this, we're gonna expect a technical blog post that uses Krew AI to build something with Composio. So we have a couple agents, and you could define it with a YAML format. So if you're not technical, or if coding is something that's still new to you, you could easily define your agents a YAML format. So if you're not technical or if coding is something that's still new to you, you could easily define your agents with YAML. So we have a researcher, the blog chapter writer, the blog writer, and the tweet writer. An example for a researcher, they're a senior dev rel researcher, they're going to research our GitHub repo name and you can see those brackets, that interpolates a string that you pass as inputs. And then they're going to research, write the chapters, and then ultimately write a blog for us. And here are our tasks. We have a research task, a chapter writer task, a blog post task, and tweet writer task. All of these map to the individual agents that I defined before. And then in your crew, you could pass in tools, like the Surfer Dev tool that will go through Google and find relevant links for them. And then the GitHub search tool that will kind of do RAG over your code base. And that's something that you see over here with the GitHub repo name. And then we could define all of our other agents. All you have to do is just annotate with the at agent and then the at task, which is the same things that we've defined in our YAML format. So it's super easy to use. And again, we just attach them to our crew class, from agents, our tasks, and the process that we wanna use. And then here's our topic. I wanna create a blog that uses crew AI agents with Composio tools to create a SQL agent that can query a database. Here's our tech stack. It's crew AI, Python, and Composio. And I want it to output to these paths. And then here's my GitHub repo name, which is the Krew AI open source project. And we're going to kick it off. And this is something that we get. It's pretty good. It's like 80% of the way there. But I want to improve it. But I don't want to continue writing. So let's train our Krew to improve. So here are some of the examples. So how our train works is it uses reinforcement learning with human feedback. So after every task gets executed we have this please provide feedback option where we could give it like suggestions as to what it should do. So the first slide the the top half is pretty much the the chapters that were written. So I'm like, hey, why don't we include a section of how to train this crew too? Here's references to our docs. And then the second one is the blog post. After it's done, let's integrate Composio tools and I sent it the repo to take a look at. And this is kind of like the results we've seen. So on the left, these are runs before we've trained. We use LLM as a judge and these are kind of like the results we've seen. So on the left, these are runs before we've trained. We use LLM as a judge, and these are kind of like the scores. So you could see crew is like just 8.5 as an average score before training. Then after training, we got a score of 9.3 out of 10. And we can even see improvements on execution times. We're saying like around 21 seconds. And then we get something a lot cleaner and a lot better. And it has more details as to how you can actually import Composio into your crew compared to the original one. And here's the tweet. I'll unleash the power of AI, learn how to use crew AI agents with Composio tools to create an SQL agent that simplifies your database queries, transform complex tasks into effortless actions. Then read our latest blog post. So yeah, so some tips for the hackathon. Find things that you do repetitively every day. They're mundane things, you don't wanna do them. So why don't you try to automate them with AI agents. Tips for orchestrating your agents. Split tasks that every agent has a specialty for, and then we have a chatGPT doc, a chatGPT agent that you can use during this hackathon for troubleshooting or bouncing ideas, and then ultimately have fun. It's gonna be a cool weekend, and here are some of our examples, our docs, and our community discourse. And I'll be here throughout the hackathon so if you guys have any questions, feel free to reach out to me. But yeah, that's it. Thanks. Thank you. So yeah, I'm pretty sure a lot of people here are familiar with Cray-Eye. I myself use Cray-Eye relatively often to automate a lot of my repetitive tasks, so I can't wait to see all the projects that everybody will come up with. So now please, everybody, give a round of applause to our next speaker, Ching-Yung Wu, founder of AutoGen. All right, everyone. Hello, everyone. This is Qingwu Wu. I'm an assistant professor at Penn State University, and I'm one of the original creators of Auditor. And it's great to be here together with you, sharing with you how to build the best agents team with AutoGen. So as many of the speakers and many people have already acknowledged that the future is about AI agents, and this is probably the largest revolution in the history of computing. So that's, I guess, why we are here in the TEDx and why it's so exciting to be here. And Alderden helps you answer this question, how to build the best agents and teams of agents. So to answer this question, we appeal to the wisdom of Aristotle. So Aristotle said, the whole is greater than the sum of parts. This is a simple yet profound idea that is at the heart of many successful AI systems. And this leads us to the idea of multi-agent system. By combining different AI components, we can create solutions that are more powerful, more flexible, and more capable than any single AI model could be on its own. Imagine having a team of AI experts, each specialized in different areas, working together seamlessly to create, to solve a complex problem. That's the potential of multi-agent systems. So with this wisdom, let's try to answer those following questions. The first is, what are the parts? Because we want, we are trying to create a better whole with the parts. And obviously we have all these very capable models, and also we have all these amazing tools and APIs we people have created over the past decades. And of course we have human intelligence that we may want to leverage in our future AI systems. So the next question is how to make a good whole. So the next question is how to make a good hole. So, Autogen can provide the framework for you to kind of build a hole. So it is a programming framework for agentic AI in general. And let's see how we answer those questions in the framework of Autogen. So in the framework of Autogen, the first step, what are the parts? So in autogen, the parts are all about agents, and more specifically, they are conversable agents, which means that each of the agents in this framework is able to talk to any other agents in this framework. So that's why we call it conversable agent. And the second unique feature of O2Gen agent is custom availability. So you can equip an agent with a large long model or a tool or even human input depending on your needs. And you can also kind of specify any combination of them. So that's the first step. So the second question is how to make a good call with autogen. So in autogen, it is intuitively very simple, which is let them to talk, because all the agents are conversable able to converse with each other so that's what we call conversation programs so basically specifying how the agents should talk to each other and what are the conversation patterns so the commonly used conversation patterns including sequential chat, message chat, group chat, hierarchical chat, and even more. You can specify whatever multi-agent conversation pattern you need. Of course, you can always use a single agent if that's good enough. So let me quickly go through some concrete examples. So the first example is a very simple application in which we want to write a blog post. For example, we want to write a blog post about keeplearning.ai, and we want to kind of have an iterative process where there is a critic agent that can help provide criticism or feedback on the blog post and so that we can keep improving it. So in this simple example we can just simply create these two agents by importing this assistant agent abstraction from AutoGen and then initiate chat between these two agents. And then these two agents will talk to each other autonomously until the problem is solved or until we reach a satisfying blog post. And of course, you can always specify how much runs you want to have at most. So that's a place where you may want to include max turns. Yeah, the next application I want to quickly go through is this custom onboarding task, which is a representative task that may need sequential chat. So sequential chat is a type of conversation pattern that is good for tasks that can be decomposed into subtasks. For example, in applications where you have clear SOP, for example. In that case, you can specify a group of chat or a pair of agent chat for each of these subtask or substep and then chain them together. So this visualization shows how you do that high-leveling with Sautogen. For example, in this custom onboarding task, you may want to have a step where you collect customer's information, and the second step is to survey customer's topic interest, and the third step is to engage with this customer. So there is a clear SOP here. Then let's see how we can achieve that with Autogen. So as I mentioned, it's just two simple steps. The first step is to define what are the agents needed. So here I'm omitting the definition of agents here, but it's pretty intuitive. You just define the name of the agent and the corresponding prompt. And then you just change them together as a list here. As you can see, each element in this list is a pair of agent. In this case, it's just a two agent chat. So it's just a pair of agents chatting with each other. And we will carry the summary from the previous chat to the next chat so that we can carry the necessary information needed for the next subtask without carrying like the lengthy context from the previous task. So that is sequential chat. Next is an advanced conversation pattern called message chat. So message chat is good for the use cases where you want to have the agent to think a little bit more before it talk to the other. So we all know the new release of OpenAI, O1 model basically is a similar philosophy of similar paradigm. So the O1 model is trained on chain of thought data so that it can think before it responds to the user. And here we are doing similar thing on the agent level. Here we are using this nested chat to achieve that. So basically we are using the nested chat, register nested chat function, you can add a nested group of agent chat within on top of another agent. For example, on top of this critic agent, we want to have this nested chat as the kind of inner monologue of this critic because we know when we try to review a particular blog post we have several different aspects, several specific aspects we may want to investigate on. For example, we want the blog post to have good, to be good in terms of SEO because we want it to be ranked higher in search engine. And we may want to ensure the blog post is good with respect to legal and ethics. So with this message chat, you can include ICO agent, legal agent, and think agent as in a monologue and have a more sophisticated reflection on how to criticize, criticize a blog post and then only return a summary or aggregated, aggregated reflection and then respond back to the other agent. So this is another example and this example is again using message chat but here I want to mention that on top of any agent you can rapidly register any tool on top of it to enable more sophisticated execution. For example in this example we are adding this chess tool to make sure that we can ensure each move of AI agent is the legal chess move so that we make a legal game eventually. So to learn more about this how to design sophisticated multi-agent conversation pattern. You can quickly go through this course, short course on deeplearning.ai. We met together with Andrew N. And we also have a growing autogen ecosystem. Now autogen supports over 75 models, including a lot of non-open-air models. And also we have a growing ecosystem of all kinds of support, for example, observability, and we have the support of AgentOps, and thanks to the Maybing partners from the AgentOps team. And we also have ecosystem with other type of tools, like different databases and the different platforms, and tools for handling long context memories, and also tools for improving prompts. And finally, we want to let you know that we have this dedicated autogen-AI organization to host the repo and also the whole ecosystem. And within this organization, you can find this field with autogen repo. And within this repository, you can find a lot of sample applications and cookbooks and the tools you can readily leverage. So I believe this will be helpful for the hack zone. Yeah, that's about all I want to share and happy hacking. And if you want to use Autogen, you can just pip install Autogen. And we used to use the name PyAutoGen, but now we have this even better name. So yeah, thank you all for having me. Thank you. Thank you very much. So thank you very much. Yeah, with all these frameworks and technologies we're looking at, it might be hard to keep track so I wanted to let everybody know that you can access links to all of these repositories and documentations through the LabLab platform in the technology section. But yeah, now please let's give a warm welcome to Nicky Birkner, Product Manager at Together AI. Thank you. I have your presentation. Perfect. Thank you. Okay. Hello. I don't know if the microphone works. Oh, great. Hi, everyone. My name is Nikki. I'm a product manager at Together AI, and I am going to tell you a little bit more about our company and also give you a quick demo of how to use our platform, and we hope that everyone uses our platform for the demo. So Together is the cloud platform for open source and independently developed AI systems. We were founded in June 2022. And we're headquartered right here in San Francisco. We have a very strong team that has had over a decade of research experience. So folks that have led the AI lab at Stanford and invented Flash Attention 2. And so we're really built on this foundational research and everything we have done. A bit about us and what we do and our innovations. So we, our research team has been able to implement a lot of really cool features, both on the platform. Like one of our greatest characteristics of the company is being able to bring things from research or the lab to production. So that's how we've been able to build our inference engine and inventions like Flash Attention 2, sub-quadratic model architectures, as well as open source data sets like red pajama and cocktail SGD. Why use Together? We are a platform where you can create fine tuned models that are private and they're models that you own once you fine tune them. So you can just download them and take them wherever you want. One of the reasons why people come to us is also because of our speed. We spend a lot of time optimizing our inference stack so that we can bring you next-level efficiencies in inference and be able to scale with your needs. We are also very well known for a really good developer experience and just ease of use, and I will show you real quick in a demo later how that looks like. And also because we're hosting a lot of open source models, we have really good costs. So it's like 6x lower cost to alternatives. And a few details about our products. We have four main product lines, together inference, together fine tuning, together GPU clusters and together custom models. Inference and fine tuning are probably the ones you're going to be using here because you can just go to our API, create an account and get started. But we also offer GPU clusters. So basically like notes of GPU clusters of A100s, H100s to people who want to do pre-training as well as help people build their own custom models. In terms of inference, you can experience a bunch of models in chat language, image code and embeddings on our fast inference stack. So you can see some of the models that we offer here. And you can run inference on any of these at a very low cost. We have a playground where you can go in and try the different models to see which one best suits your needs. And we also have really powerful features like JSON mode, function calling, and messages support, async support. And we're also fully OpenAI compatible. So if you have any queries that you are sending to OpenAI, it should be really plug and play to go into our stack. In terms of our embeddings endpoints, we basically offer you the ability to build a RAG system and we have tutorials with Langchain, Lama Index, and MongoDB. It's important to know that we have cheaper pricing, better quality, and longer context than most alternatives out there, so I really recommend you to look into our embeddings API and endpoints if you want to build a rag system. And fine-tuning. You can fine-tune any of our 35 plus leading open source models with your private data. The cool thing about this is that you own and fully control the end results of the model. And it's very fast and easy. I'll show you in just a second. It's as easy as three commands. And then you can run inference on your models. You can deploy a dedicated endpoint of that model directly from our playground. So here are a few QR codes for our docs and also for getting more credits. Everyone gets $5 of credits when you sign up, but we are also giving everyone more credits. So feel free to fill out the form. One thing I will mention is that there is a limit to the file size to get the additional credits. And just make sure you're using models that are less than 40 billion parameters or use LoRa to make sure that everyone can fine tune efficiently. So we're going to be keeping an eye out for very large fine tune jobs to make sure that everyone has fair access to the API. So that's it on the slides. I'm gonna try to change this to the quick demo. Okay, so this is our platform, well, our playground. We basically built this playground around our API to give you a very easy way to be able to try different models and learn how to use our platform. So first part of this demo, and I'm forgetting my sentences. I'm going to show you kind of like this is using MetaLlama 3.1.70b, so one of the most recent models that LAMA launched. It's a pretty incredible model, so I highly recommend everyone to look at it. With the playground, you can tweak many parameters, so things like system prompts, stop sequence, add in a safety model like meta's LAMA guard, as well as these different parameters of output length, temperature, top P, top K, repetition penalty. All these are also parameters you can tweak in our API. But if it's easier, you can just come here, tweak, tune your recipe, and then you can really easily switch to what the API code would look like and just copy and paste in whatever language you want to take this into our API. The next part of this demo is, I want to show everyone how to start a fine-tuned job. So let me pull my terminal. All right. So the first thing you want to do is you want to make sure that you prepare your data well. We have a lot of documentation around which file format you should use for your data, how to do loss masking, training and validation splits. And so at the end, you should go ahead and choose. Okay, so I'm gonna do a files check just to make sure that my data looks good. I'm gonna use this legal data that I already have preloaded in my computer just for the use of this demo. And if it is checked, passed as true, it means like your data looks good. You should be able to fine-tune. So make sure to run that check before you upload your data. I'm now going to upload my data. So it should take a hot sec. You can see the data uploading. This is basically putting the data into our platform so we can then pull this for the fine tuning job. You will get a once that is uploaded, obviously in demos, you know, get slow. Once that is uploaded, you should get a file ID, and it's important for you to save this or store it somewhere, because you're going to need to use it for actually fine-tuning. So I'm now going to do the fine-tuning job. So I'm going to pull my training file. And I'm going to choose a model. In this case, it's going to be MetaLama 3.8b. And I'm also going to do LoRa. Great. And I'm also going to do Laura. Great. So now I'm going to go back to the playground. And you can see that the job has now started to run. And it's queued. That means that you should be good to go. You'll see different statuses for when the job is actually starting to process. When it's completed, you can go cancel it if you want to. And you'll be able to see also the different checkpoints. So I can see an event log of this one I just fine tuned right before this demo. You can see the different checkpoints that you run through. You can do multiple epochs if you want as well. And then you'll get all the different checkpoints. Once the model is ready, you can deploy it for inference. And so the way you would do it, you can also download it, by the way. The way you would do that is you go to the job page, then try to go to deploy it. You can select your hardware. So say I want to deploy this on an RTX 6000. I click play. It will take a little bit of time. And then you should also get some templates for how to run inference on this model. And then once this is good to go, you can actually go ahead and try out your model. So that's the demo. And I'm going to be around if anyone has any questions. But yeah, we're really excited to be sponsoring this hackathon and, yeah, use our platform. So thank you very much. Also, I wanted to let you know if you had maybe a hard time following up or didn't have enough time to take notes, this video will be available in our Twitch and YouTube, lablab.ai. So yes. And yes, also to help you boost your development, everybody participating will have access to get $50 in API credits from Together AI. You just have to go to the form together.ai slash forms slash hackathon. So everybody take some pictures. And now it's time to introduce our next speaker. So everybody, please, a round of applause for Alex Rittman, co-founder and CEO of Agency. Thank you. Cool, cool, everyone. My name is Alex. I am the co-founder of a company called Agency. We build a developer tool called AgentOps. Has anyone heard of AgentOps before? All right. Awesome. Most of the room. So that's really exciting. So I'll give a quick rundown about the world of AI agents, how to get better observability, how to understand what your agents are doing, and some ideas on some great hackathon projects that you could be working on. So let's kick it off. All right, so building AI agents with AgentOps. So what exactly is AgentOps in the first place? So we are a two line of code integration pattern that we work with most of the major AI agent frameworks. Crew AI, Lorenz came up earlier. AutoGen, we saw the AutoGen presentation earlier. LAM index, Cohere, Multion, Play LLM. If you're using any of these frameworks or projects for your hack, we really recommend just adding agent ops. Two things, number one, you get cost tracking baked in, which is extremely valuable, and two, you get all these session analytics replays, gives you a nice waterfall diagram, and more importantly, instead of scrolling through your terminal, which spits out a lot of text text you get a nice little chat viewer right over here it makes it substantially easier to debug and just develop faster when you're running the hackathon the number one thing that you need to manage is your time and agent ops helps you save time when you're debugging but besides that uh we are more or less on the AI software development lifecycle track. So what AgentOps does is we help developers build applications from prototype to production, specifically AI agents. So everything from understanding what the use cases are to you have an agent that's solving a problem in the lab and you want to bring it to production, we help make sure that iteration loop where you're trying to evaluate and understand how the agent's performing, we want to make that loop as fast and concise as possible. So what are some AI agents? We've seen a lot of AI agents in our days. Happy if anyone wants to chat with me later, brainstorm some ideas. We've seen a lot of really cool AI agents. So there's four primary use cases that we've seen come along on the space. So number one is the co-pilots. So co-pilots, you can kind of think of it as chat with your data, right? So all the enterprises are, they want to get chat GPT, they want to upload their PDFs, they want to upload their emails. These are the co-pilots and they deliver enormous amounts of value to your work, especially when you have unique data sets. So we saw the folks at Upstage talk about how they're having document readers, right? The chat with your data is a really, really valuable use case. Definitely, if you have a unique data set, copilots are the way to go. Now, chatbots. Now, this is actually, we're seeing a lot of value being delivered in the market. Specifically, this company called Klarna, they replaced 90% of their human support agents with AI agents, and they're saving $40 million a year, which is an enormous amount of money, right? We've seen people building AI agents with Crew AI. One user that we have, they are a SaaS company. They normally only have paid subscription services, but they've now expanded to the free tier. Problem with the free tier was all the free users were asking dumb questions to the human support agents. So they whipped up a crew and now they have AI agents answering questions better, more effectively, and the customers are super happy with it. Developer tools. We see a lot of things come out these days like we've seen Bitlify having kind of AI search tools. We see things like Greptile. We see things like Cursor, right? I use Cursor almost every single day. There's a lot of code co-pilots, we see things like cursor. I use cursor almost every single day. There's a lot of code copilots that we see that make it way easier to develop software just from a prompt. So these are the kinds of agents that are actually writing other agents in some ways. And then lastly, the autonomous agents. And these are the ones that are most creative, the ones that will probably earn you the most brownie points during the hackathon, honestly. But it's when you take domain knowledge about a very specific problem, you outfit your agents with tools and you give them the memory and you give them the ability to solve problems on their own in unique ways that aren't exactly orchestrated. Alright, so anyway, enough yapping about agents. What exactly are the agents? So we saw them kind of pop off about a year ago, about a year and a half ago, with a project called AutoGPT. The idea behind AutoGPT was it was a chat GPT prompting itself to get better and teaching how to make money. That's actually the original prompt. AutoGPT please make me money and then it just built upon itself to the point where it became autonomous with the help of some developers. And we saw some very other cool projects like AgentGPT, BabyAGI and Jarvis emerge. And the way that we describe agents in general is we call them like self-driving programs. So we have the self-driving cars outside in San Francisco. If anyone has seen the Waymos or the Cruzes or the Zookses, you have the self-driving cars. You give it a destination, and the car figures out how to get there. But the agent is you give it a prompt. You give it an explanation. I want this done, and it figures out how to get there on its own. We see a lot of cool agents coming out of the research labs, so there's a very, very cool agent called Voyager that really kind of visualizes what's possible with these things. So Voyager was by a team at NVIDIA, Caltech, Stanford, UT Austin, where they essentially taught an agent to teach itself how to create tools and solve problems in a simulation world of Minecraft. So the idea here is you could actually have a skill library and then the agent would write its own code to save skills to the skill library and then with that it could navigate the world more effectively. So here's kind of a demo video. You could just give it a high level prompt like gather a cactus. And this again, this is an AI agent playing the game. There's no humans behind this. And it would actually teach itself how to solve the problem. So again, think of these as a self-driving program. What kind of self-driving programs could you be creating that deliver value in this short period of time? So this all kind of merges together with this idea of the LLMOS. So Andrej Karpathy published this really interesting diagram where kind of the future of agentic compute is gonna look something like this, where you have like this LLM processor that connects with all sorts of other things. You have multi-modality with devices like video and audio. You give it access to the internet. You can talk to other agents. And it all coalesces in this stack, which is what we call the AI agent anatomy stack. So underlying everything. Every agent has an LLM underneath it. But when you connect an agent to tools and function calling, that's actually when it becomes an agente program. And it's almost similar to how humans are, right? So what separates humans from other species? Well, we're intelligent, so we have the LLMs, but we also have mastery of tools. We know how to create tools. We know how to teach ourselves these things. And that's what makes us really actually intelligent creatures. So think of it similar to AI agents, right? You connect LLMs to tools. Now, what we do with agent ops is we help track all of these aspects, right? So everything from the metadata, including the model context, the pricing, to external memory, right? If you're connecting to a vector database, what documents are you loading into context? It helps you solve problems. And also multi-agent tendencies. So we see a lot of frameworks these days like autogen and crew. You actually have multiple agents working together to solve problems and they do so more effectively. They do it faster and they don't get confused, they don't get stuck in loops. So anyway, that's kind of some food for thought about what agents are and what they're doing. So we, prizes. We are not doing any official prizes, but we have a community spotlight program. We find really, really cool projects that are built using AutoGen, CREW, and also AgentOps, and we do little mail-in packs. So if you're interested in this, again, it's two lines of code, just add agentops.init. If you're using any LLM provider in Python, if you're using any of the major frameworks, again, we'd really just recommend giving it a shot, see the dashboard. It will help you out a lot. And if it doesn't, just let us know and we'll very quickly iterate. And you will still also get some swag out of it. So to get started, please, we have a QR code that goes straight to agentops.ai. For the duration of the hackathon, you will have unlimited credits. You will not get any crazy unwanted bills. And again, like the installation pattern, it's very no-brainer. Two lines of code, you will not get any crazy unwanted bills. And again, like the installation pattern, it's very no-brainer. Two lines of code, you pip install it, and then you just agent ops that in it with an API key. So that is it from me. I'm looking forward to the next speaker. If you have any questions, you have any ideas, what are agents, what can they do, what is possible, just come speak with me. We're very open. We have an enormous community on Discord. If you have any questions about that, leave a question on Discord. And yeah, great hacking, and good luck to everyone. So, yes, I think I liked a lot about all the partners that we have for this hackathon, is that you don't really have to choose one or the other to build something. You can build an agent using agent ops with Cray. You can power Cray using maybe Solar Pro or any of the Lama 3.1 models, which I think is pretty cool. And talking about Lama models, I would like to focus our attention to our next speaker and give a big round of applause for Katie Watson, Privacy Policy Manager at Meta. Hello, hello. Thank you all so much for having me. I know that we are getting late in the evening, so thanks in advance too for your attention. I appreciate it. All right, I'm going to load this up. Okay. Perfect. Well, we might all just have to bear with my slideshow not operating because I didn't log into the Wi-Fi. And that's okay. We'll do it like this. Okay. Lovely to meet you all. My name is Katie Watson Jordan and I am a Product Strategy and Governance Policy Manager at Meta. What that means is a variety of things, but what it means today for this audience is that I am the co-lead of our Lama Impact Grant Program. And I am super excited to talk to you guys about that grant program and specifically about the ways that hopefully some of you all will be interested in applying to the program. So by way of introduction, I'm sure this comes as no surprise, but AI has been a critical technology for Meta for over a decade. We've been super invested in AI in a variety of formats since 2006 when AI first started to power our feed services. And since then, a couple moments of real sort of dedicated resourcing and inspiration for us came in 2013 with the foundation of FAIR and then in 2016 with the evolution and resourcing into PyTorch. The thing that we are most excited about today and that I'm chatting with you all about though is the evolution of Llama. So back in 2022 we started with Opt 170, and that was just for the research community. But we had such a great response to it that we built on it, and we introduced LLAMA1 in 2023, in early 2023. That was also just for the research community. Since then, we have released LLAMA2, CodeLLAMA, LLAMA3, LLAMA3.1, and those have all been open to all bearing certain eligibility. So this is something that we are super excited about and we have been really, really pleased in the uptick of sort of interest in this model, the utility of this model. And as of earlier this quarter, it has been, LAMA3 has been downloaded, or excuse me, the various iterations of LAMA have been downloaded over 300 million times. And as of earlier this year, we found that there's already 16,000 plus derivative models across LLAMA. And that number actually came before we released LLAMA 3 and LLAMA 3.1. So we're optimistic that it is quite a bit higher and that my slide is out of date. So with that sort of interest and with that kind of background, you know, LLAMA is, I hope, doing a lot for all of you all in lots of different ways but we have a real interest in figuring out how can this tool be something that actually is is truly beneficial to society how can we use this in fields that really need attention and investment and so we launched with Lama to last year the inaugural Lama impact grants so this was a process by which we had an open call and we asked folks to apply under three tracks they were education environment and open innovation and of about 800 applications from people in over 90 countries we selected 20 finalists for this process across those three tracks and one of the things that we actually found was super interesting was that within open innovation the vast majority of applications were actually focused on healthcare. And that was super, super exciting to us because there's so much that can be done in that space. And it seems like a real indicator of the kinds of places where people are willing to sort of invest resources and time and research. So we took our 20 finalists around the world and we brought them together for a refinement workshop. Come back to these in a second. To a refinement workshop at our offices in New York. These folks got together, they spent time with our technologists, they spent time with our policy team, our comms team, our various groups that could help them really work on the holistic aspects of their application, and they spent time with each other, learning from each other and the way that they were approaching these similar problems in really different ways. So as a result of that refinement workshop, we got really excited about the kinds of things that can happen when we get people in a room together exactly like this. And I'll highlight our sort of finalists for a second because one of the things that was really exciting here is that several of these finalists ultimately ended up combining their applications and doing things together because they found that doing things as a team, doing things especially in an open source environment as a team is really beneficial and can help you to do way more really good work than if you were doing it by yourself. So that continued to get us really excited about that kind of opportunity. And so we went back through the applications and we found 70 what we were calling runners up for this program. And we invited those folks out back to Menlo Park and we had our runners up for this program. We invited those folks out back to Menlo Park and we had our first global Lama community summit. This was a space where we had our engineers come and talk to people. We had developers with Lama doing demos and we spent a lot of time learning from each other about the kinds of things that were being done with Lama. That led us to launch another much faster tracked program. This is the Llama Impact Innovation Awards. And these were just open a couple of months ago. Application was really short from June to July. And it was specifically focused on the global south to figure out, you know, a lot of investment is happening in this space in the U.S. and Canada and in Europe. But what is going on already in the global south to solve some of the problems that we think are really addressable with open source AI? This kind of prize was only available to organizations that were already building and had something that we could help them build off of. And we're very excited that both the winners of that grant and our large-scale initial inaugural grants are going to be announced in two weeks. And we are very excited to tell everybody about the organizations that we've been working with. So that brings me to why I am here today and what we are working on next. A few weeks ago we launched the LLAMA 3.1 impact grants and essentially what we did is we've taken all the things that we've learned about those last several grant program structures and tried to create a more holistic and more on-the-ground kind of package of programming. So anyone is still welcome to apply to our global call for proposals. There are some criteria in terms of what kinds of impactful programs we're looking for, but it's still up to $500,000 in grants, and the eligibility criteria are pretty broad, and they're on our website. There are some things that are going to receive special consideration, and that is things like you using LLAMA 3.1? Are you showcasing new capabilities of it? Are you lowering barriers for other developers to help use these tools? And I've heard from several speakers that you are already doing that in really cool ways. So I don't know, maybe take a look at the grant website. But and then also are you addressing some of the topics that we think are most ripe for sort of social impact economic development and things like that around the world with your programs. So that's very traditional to our first program. We're really excited. The thing that I think is important for this group is that we're also doing a series of global events. And so these are things exactly like this hackathon, which we're super, super excited to be a part of and just really appreciate you all inviting us to be here. It's really, really exciting for us. And if you, by nature of participating in this event, if you use LLAMA in your program, we're inviting you to apply to a specialized track that is just for folks who have participated in something like this and will have a $100,000 prize per region. So in this case, that'll be just the U.S. and Canada and it will only be participants of this kind of event. Right now we're planning events such as this one. We'll have another in San Francisco likely later this fall. We're doing something in Montreal and then potentially in Austin, Texas. So if you happen to be in any of those places, please also check the website and see if you can sign up there. Similarly to the larger kind of grant call, we are still focused on social impact or on economic development. So if you can use Lama in some way that you can show is beneficial for other people, it's beneficial for society, it's beneficial for the ways that other people can also build using Lama, we really want to hear from you. And so for that reason, we're very excited to sponsor a specific prize for a Lama winner tonight and hope that you all will use this as an opportunity to make it full screen yes I also I'm gonna come up to uh I know my slideshow buttons not working because I didn't log on to the Wi-Fi and I'm very sorry about that but I hope that this is better and I'll pull up a URL in a second too so yeah we want to use that as sort of an entryway into this grant program. So please do consider applying. Really quickly, here's the evaluation criteria that's going to be applied to all the applications to the grant program. This is also on the website, so no need to spend too much time on it now. But it's all things that are, I hope, fairly straightforward and that you'll be really well prepared to speak to by nature of participating in this event. If you want to learn more, especially about the events, if you want to participate in more than one, you can send an email to this email address. We also are going to have a calendar up with the links to register for all these various events within the next week, so feel free to just check the website. feel free to just check the website and then we have two webinars going on later in the month or in October where you can come and just ask questions especially if you already are developing your application and want a little bit of guidance so please apply by November 22nd in order to be eligible for this hundred thousand dollar prize and when you get on to the application it'll just say like did you participate in a regional event just put the name of this event in there and that will make you eligible for entry into that prize and so with that I'm happy to take questions afterward I know there's more speakers but just want to say a huge thanks to everybody for all the work that you're doing with these models and in this ecosystem and we're very very excited to see all the things that you all build. Thanks. Thank you so much. Appreciate it. Okay. Thank you very much. And yes, so to reiterate what was already said, participating in this hackathon makes you eligible to unlock up to $100,000 through LAMA Impact grants. I really encourage everybody to read more about LAMA Impact and all their different events. But yeah, now to our next speaker, coming directly from Composio, Soham Ganatra, co-founder and CEO. Please, everybody, a big round of applause. Soham Ganatra, Co-founder and CEO. Please, everybody, a big round of applause. Thank you. Check. Hi, folks. I'm coming directly from India to present Composio. Composio is a very simple way to integrate tools with agents. Like so many folks before me spoke about what agents are and how they are going to change the world. The idea is that agents still need tools to change the world, right? They need tools to interact with the world. And what Composio does is a simple repository or a library to manage tools for all your agents. We do integration, we do authentication, we do runtime execution of every single tool and allow you to, in a very simple sort of one line of code, integrate more than 100 different sort of tools like Salesforce, HubSpot, System Manager, File Manager, directly with your agents. We are optimized for function calling. Let me quickly start. So what are we building? Right? So the idea is like Zapier existed before to connect different softwares and pull all of them together. Then needs a sort of similar kind of framework or a layer to do the same for agents or LLMs. And currently creating those tools is a very time consuming job and that requires a bunch of optimization to be done again and again. What we do is we provide a very simple SDK that allows you to sort of authenticate your agents. So, you know, when you are sending an email, you know you are sending the right email from the right account. We do function calling optimization. So we are increasing the reliability of the agents. Because again, agents are not really in production. Everyone here knows that. And so improving reliability helps a lot. And then we do secure execution. Because again, you can't really execute the code anywhere. And that could lead to a bunch of sort of security issues happening. So we provide a runtime environment for all the tools. We are compatible across the chain. So think about any single LLM. Think about any major agentic framework. We support all of them. We support Langchain, Lama Index, Krui, Autogen. We support Gemini, Cloud, OpenAI, Lama, right? We support every single LLM with function calling. We support close to 100 different tools. So we support HubSpot, Salesforce, any CRMs, any file manager. We also support browsers, so you can literally build your own browser controlling agent using us. You can build a Mac controlling agent. We support a bunch of system tools. And we are also extendable, so you can create your own tool and add it and basically publish it for anyone to use in Composio. We are open on GitHub. So you can literally just contribute your tools there. We have a lot of different cool examples folks have built using Composio. So we have an example of scheduling assistant out there. Newsletter summarizer out there. Feedback collection mechanism, wherein you can collect your feedback and like paste it on Slack or create an issue on Linear. All of these are open, by the way, the code is out there. We have a PR agent, wherein like every time you create a PR, you wanna review it, like you can do that using Autogen, right? We have a software engineering agent, which is literally like you just create an issue and the agent will automatically spawn and essentially end up committing some amount of sort of code and like creating a PR. We have all these examples out there. You can literally just go to the links, compose your hyphen py examples, git.new, compose your hyphen gs examples. Yeah, another thing, we also support Python and JS. So it doesn't matter which language makes you more comfortable. You can use all the tools in both the languages. Here's a very simple sort of code snippet showcasing how Composer could be used. So in a single line of code, you can literally give any action that you want. So here is a line of code giving OpenAI Assistant a tool saying, hey, I want to sort of allow this agent to start a GitHub repo. And that's it. Like, once you do that, you can literally pass that on to your sort of tools in any agentic framework that you are using. Give your sort of natural language execution task. And that's it. The tool sort of starts executing. We also do triggers, by the way. So triggers are one of the most important things that you want in production, which is like, when do I want my agent to execute? So let's say you want to build an agent that executes every time you get a new email. Easy. You literally just create a listener. You create a callback around that listener. You mention the trigger, saying, hey, I want to execute this agent every time a new email comes. It is like Gmail new email, Gmail messages. And that's it. The callback function is called. Your agent gets executed. And voila, like agent works with listener in the sort of play. We also do runtime environments. So we do local. We support Docker. We support E2B. We support Fly.io. So we support a bunch of different sort of runtime environments currently. In summary, we do a bunch of these things. So we do authentication management. We have a lot of tools. We do optimizations on tools. So you can clearly see 30% increase in reliability when you create your own tool versus directly using Compose.io. We have RPA tools. We have API tools. We have a bunch of different sort of tools. And we have SDK along Python and TypeScript. This is basically the summary of it. AI Agents built with Composio and AI Agents built without Composio. It's free to use. Check out git.news slash Composio. Check out docs.composio.dev, at.composio.dev, and there's a $250 price for the best thing that is built using Composio. Thank you. Thank you. So, thank you, everyone. Just want to add to that that another benefit of being part of the hackathon is that everybody will get access, full access to Composio. This access will be granted using the emails that you use to register for the event. And one last benefit for being part of this hackathon is that AIML API is giving away a thousand keys with $20 worth of API credits so you guys can use different models. The promo code is sfagentslab. So yeah, I mean, that's all in terms of speakers. Now I would like to do some final remarks about the platform itself and how does it work. And also I would like to bring your attention to the fact that we at Lablab have prepared some tutorials and some materials for you to use all the technologies with our partners. So I can show you that real quick. We have tutorials from LAMA, Composio, AIML, Together AI, how to use solar, building agents with CREAI, and there's many, many other tutorials that you can find in the platform. Some of them include notebooks or boilerplates so everybody can jump start their projects right away. You can find that in lablab.ai.t. Now in here there's a hackathon guide. If you scan the QR code, you're going to see the different steps that you will need to follow to participate in the hackathon. Tomorrow morning we will be hosting a small overview of the platform, how to create a team in the platform, how to create a team in the platform, how to register, and everything you need to know. Right now, if you want to take a look beforehand, please scan that QR code. There's also some content in additional tips on how to be successful in a hackathon. So if anybody wants to scan that. And yes, in here, there's a quick start on MindsDB. You can also find these in the screens outside. And also, follow our social media, lablabai everywhere. So thank you, everybody, for your attention, for being here. Thank you to all of our speakers. Everybody, a round of applause to all of our speakers, please. So right now we are hosting a workshop, Crew AI and Composio. It's upstairs, you can access it via Zoom link. You should have the information on Discord. And we will be doing team matchmaking and networking until 9 p.m. So thank you, everybody. Thank you. Thank you. Gracias. | AI Agents Hackathon with Lablab and MindsDB - Kickoff Stream | 7,414 | lablab ai | 20240914 | Join us today for our AI Agents Hackathon!
We are excited to welcome speakers from Together.ai, Upstage, AgentOps. CrewAI, Composio, Autogen and Meta.
Each will bring unique insights and expertise to help you learn, collaborate and create innovative AI-driven solutions. Don't miss out on this incredible opportunity! | 2024-09-13T21:16:56.271119 |
https://www.youtube.com/watch?v=NaT5Eo97_I0 | . Hello everyone. In this video, we'll be building a multi-modal AI agent with RAC capabilities, leveraging NVIDIA's GPU accelerated models and open-source technologies. Let's start with a quick demo of our chatbot in action. As you can see, our chatbot can quickly process and respond to the question about the data provided. So, in this tutorial, we'll cover integrating a pipeline that accepts multimodal kind of data that includes text and vision. As part of the process, we'll cover how to set up vector database, generate embeddings, and how to use GPU-accelerated models. Now let's break down how we built this. Before we dive into the specifics of our chatbot, it's important to understand the ecosystem we are working with. NVIDIA provides a full- full stack ecosystem for large language models that integrate seamlessly with popular open source tools. This ecosystem offers flexibility and performance allowing developers to switch between different components as needed. NVIDIA not only provides proprietary tools but also contributes significantly to the open source community. Tools like Nemo for building and training large language models, Triton for inference serving, are prime examples of NVIDIA's contribution to open source. These tools integrate well with the broader AI ecosystem, providing developers with powerful GPU accelerated options for building LLM applications. NVIDIA Nemo goes beyond just a training framework. It includes open source components that enhance the capabilities for developers. Two notable open source components are Nemo Data Curator that simplifies the complex process of data curation. It helps extract, deduplicate, and filter information from large amounts of unstructured data, ensuring high quality, relevant data sets for training. The second component is Nemo Guard Rails. This component implements safety measures and controls for model output. It allows for developers to add guardrails that prevents inappropriate responses, enhancing the reliability and safety of AI application. developers to add guardrails that prevents inappropriate responses, enhancing the reliability and safety of AI application. You can learn more about this at github.com slash nvidia slash nemo. Let's quickly understand what RAG or retrieval augmented generation is. So RAG announces large language models by giving them access to external up-to-date knowledge. This process allows our chatbot to provide responses that are both general and specific to our patent data combining retrieval and generation for more accurate and contextual answers. Let's explore how we are extending RAG to handle various types of data or what we call a multimodal RAG pipeline. Now there are various ways to handle multimodal data, but in this case, we are grounding all the modality into a single one, which is text for this specific application. So we have introduced a specialized vision language model to process visual data. So in this case, one VLM is handling general images while the other vision language model is specifically tuned for charts and plots. Once the conversion of information is done, rest of the pipeline remains the same as a traditional rack pipeline. For our implementation of this application, we are using cutting edge tools from NVIDIA and the open source community. So let's break down the components of this specific pipeline. So to build our knowledge base, we first have to convert all the documents into text form. And to do that, we'll use vision language models as shared before. So for general image understanding, we'll be using Niva 22B model, which is NVIDIA's fine-tuned variant of Lava. variant of Lava. For charts and plots, we are employing dplot by Google, which is specialized in understanding graphical data, including charts and plots. For this specific example, we'll have some mix of documents, which is PDF and images that contain some of the blogs and slides from NVIDIA. So this data fuels our chatbot's knowledge base. For vector storage and similarity search, we'll use Milvus, a GPU-accelerated vector database that can handle large-scale similarity searches with impressive speed. So using GPU indexing and querying requires numerous query vectors or exceptionally high request pressure for maximum throughput and reduced latency compared to CPU indexing. For embedding, we'll use NV embed, which leverages NVIDIA's GPU acceleration to quickly transform text into high dimensional vectors. The GPU acceleration in both NV embed and Milvus significantly enhances the performance of our chatbot, allowing for faster indexing and querying of our knowledge base. For the actual question answering, we'll use a large language model, specifically LAMA3 from Meta, the 7TB instruct variant, which will be accessed through NVIDIA's NimAPI. So this gives us the power of state-of-the-art language models, which is GP optimized access through an API. Finally, we'll use Lama Index to orchestrate the entire process from query processing to retrieving relevant information and generating responses. We will also use Streamlit to create user-friendly interface for our chatbot where the user can directly upload or just provide the directory path of their documents and then simply query or ask question based on those documents. Now we have learned what the pipeline is about, now let's dive into the code. Before we dive into the individual script, let's take a quick look at our project structure. Our project consists of three main Python scripts. First one is app.py which is the main application file that sets up the streamlit interface and manages the overall flow of our multimodal rack system. streamlit interface and manages the overall flow of our multimodal rack system. Second is documentprocessors.py which is responsible for processing various documents types like PDFs, PowerPoint presentation, and images. Then comes the utils.py which contains utility functions for image processing, API interaction, and text handling. In addition to this, Python files are projects also include several directories, such as VectorStore, which stores all the vector database and related files. In VectorStore, we have image references and PPT references, which stores image extracted or converted PowerPoint slides and its related files. This structure allows us to efficiently manage different types of data and maintain a clean separation of concerns in our code. Before we dive deep into our React QnA chat application code, let's generate an API key for NIMS by going on to build.Nvidia.com. And we'll select Lama3 as our LLM model for the application. Here you get various options to try out the model or just boilerplate code to get this model or NIM, optimize NIM onto your application. But here we are trying to generate an API key. So just click generate and there you have it. Just copy this API key and you can use wherever you are setting the environment variable. But in my case, I'll set this on my terminal. Assign that API key to a variable named nvidia underscore API underscore key as an environment variable and you can do that with this command and press Enter. Let's start with utils.py which contains various utility function for our multimodal project. Here we are importing necessary libraries and setting up our NVIDIA API key. This is an alternative way from what we did in the terminal, but you can choose either one of those, whatever is better for you. And then now we load image processing function, which can handle image conversion to base 64 format, which is necessary for sending images to NVIDIA NIM APIs. Then we use image analysis function, which help us identify and process the graphs or charts in our images. We use NVIDIA's NIM APIs where the model are hosted to generate a description of these visual elements. And now we are using these functions to interact with NEM API to describe images and process graph which are crucial for our multimodal understanding capabilities. Next we'll create some text processing functions to process text from PDFs, allowing us to extract context around images and tables and group text blocks efficiently. And now we'll do some file handling, which handles saving uploaded files, which is important for processing user provided documents. So that covers the main utilities in our utils.py. These functions form the backbone of our image and text processing capabilities. Now let's move on to the document processor.py. Now let's go through the document processor.py file, which contains functions for processing various types of documents. So first we start by importing necessary utilities and notice that we also importing functions that we just defined in the utils.py. Now we'll create a function that is crucial for extracting content from the PDF files, including text, images, and tables. Now we write function for handling the extraction of table and images from the PDF files including processing them and storing them. Now let's work on the PowerPoint processing and here we are writing four different functions that work together to process PowerPoint files, converting them into PDFs and then to images while also extracting text and notes. And now write the main function, which handle the loading and processing of various file types, whether they are individual files or entire directories. So that covers the main component of our document processor dot py file. Now let's move on to the main script, which is app dot py, which includes all the GUI and function calling for all these scripts that we have just written. Now let's move on to app dot py, which is the main application file for our project. We start by importing necessary libraries and set up our streamlet page. Notice how we are using Lama index components and our custom modules. Then we set up our embedding models and language models using NVIDIA's name integration with Lama index. And then, uh, we create the index, which uses the vectors database using Milvus and stores all our document embeddings. Now this is the main application where we set up a streamlit interface. It provides option to upload or directory processing and sets up the chat interface. And now this section handles the chat interface, processes user queries and displays responses. So that covers the main component of our app.py files and all the other Python scripts. Now let's try running this code on terminal. Before we run the code, let's create a conda environment and install all the dependencies that we need for this project. First, we'll create a conda environment using conda create command. I'll name this environment as gpurag and I'll have Python 3.0 so it will create the environment. Now let's activate the environment that we just created by using conda activate and the environment name. Now we are into the environment. Let's install all the required dependencies by using pip install method and passing in the requirements.txt file which includes all the dependencies. Now dependencies are installed. Let's try running this app using Streamlit. To do that, you mainly use streamlit run and app.py. It should generate a port number where you can use to look at the web app. Here it has given us the local URL, network URL, and external URL as well. It's 8501.0.1. So let's try to go into the browser and see what's there. So this is how the UI looks like, where you have an option to upload files or enter a directory path where it will process all the documents present in that directory. So in this case, I'll upload mix of documents, which is a PDF of a blog from NVIDIA developer, which includes images as well as graph doing various performance comparison or the speed up. And then I'll also add an image or a slide about Tensorati. So let's try importing those files so once the documents are uploaded to the server i can process these files where it is generating embeddings and creating a vector database where you can query or ask questions about these documents that you just uploaded. So it may take some time or while based on the size of the documents. Now that the index is created, let's try asking questions based on the document we just uploaded. So first question I'll ask is like, what is the variant about Lama that is the variant about llama that is the fastest. So based on the chart in this blog, it answered correctly which is 7bq4. Now let's ask how does CUDA graph lead to substantial performance? performance. So again it used the context and then let's wrap it up by asking who's the author which is Alan Gray based on the doc we just uploaded. So there you have it. Users can now upload documents of various input types including text and vision, ask question and receive answer based on the content of those documents in real time. So in this tutorial, we have covered how to build a multimodal AI-based RAG application for enhanced understanding using GPU accelerated components from NVIDIA and open source tools. Feel free to experiment with the provided scripts and explore further. For more information and resources, check the links in the description below. | Building Multimodal AI RAG with LlamaIndex, NVIDIA NIM, and Milvus | LLM App Development | 1,001 | NVIDIA Developer | 20240903 | This video explains how to create a multimodal AI retrieval-augmented generation (RAG) application, including the following steps:
1. Document processing: Convert documents into text form using vision language models, specifically NeVA 22B for general image understanding and DePlot for charts and plots.
2. Vector database: Explore the power of GPU-accelerated Milvus for efficient storage and retrieval of your embeddings.
3. Inference with Llama 3: Leverage the NVIDIA NIM API Llama 3 model to handle user queries and generate accurate responses.
4. Orchestration with LlamaIndex: Integrate and manage all components seamlessly with LlamaIndex for a smooth Q&A experience.
Learn more with this notebook: https://github.com/NVIDIA/GenerativeAIExamples/tree/main/community/multimodal-rag
Join the NVIDIA Developer Program: https://nvda.ws/3OhiXfl
Read and subscribe to the NVIDIA Technical Blog: https://nvda.ws/3XHae9F
#llm #llms #llama3 #llamaindex #nvidiaai #nvidianim #langchain #milvus
NVIDIA NIM, code review, LangChain, llamaIndex, Llama 3, Milvus, NIM APIs, Mixtral, NeVA/DePlot | 2024-09-18T09:13:09.871065 |
https://www.youtube.com/watch?v=jF43_wj_DCQ | Welcome back to the PyTorch training video series. This video is about the fundamentals of model training in PyTorch. In past videos, we've discussed building models with the neural network layers and functions of the Torch.nn module, the mechanics of automated gradient computation, which is central to gradient-based model training, and using TensorBoard to visualize training progress and other activities. In this video, we'll be adding some new tools to your inventory. We'll get familiar with the dataset and data loader abstractions and how they ease the process of feeding data to your model during a training loop. We'll discuss specific loss functions and when to use them. We'll look at PyTorch optimizers, which implement algorithms to adjust model weights based on the outcome of a loss function. And finally, we'll pull all of these together and see a full PyTorch training loop in action. Efficient data handling in PyTorch is achieved via two main classes, the data set and data loader. The data set is responsible for accessing and processing single instances of your data from your dataset. There are a number of datasets available in the PyTorch domain APIs, and you can make your own datasets using provided subclasses or by subclassing the dataset parent class yourself. The data loader pulls instances of data from the dataset, either automatically or with a sampler that you define, collects them in batches and returns them for consumption by your training loop. The data loader works with all kinds of data sets, regardless of the type of data they contain. The PyTorch domain APIs, TorchVision, TorchText, and TorchAudio, give access to a collection of open, labeled data sets that you may find useful for your own training purposes. TorchVision contains a broad array of datasets labeled for classification, object detection, and object segmentation. It also contains the convenience classes Image Folder and Dataset Folder, which allow you to easily create a dataset from images or other data accessible on your file system. See the documentation for more details on these classes. Torch Text offers datasets labeled for a variety of classification, translation, and analysis tasks. Torch Audio gives access to datasets labeled for transcription and music genre detection. Most of the time, you'll know the size of your dataset and be able to access arbitrary single instances of it. In this case, it's easy to create a dataset. Just subclass torch utils data dataset and override two methods. Len to return the number of items in your dataset and get item to access data instances by key. If the key is a sequential integer index, your dataset subclass will work with the default data loader configuration. If you have some other sort of key, such as a string or file path, you'll need to set up your In the Manager Index, your dataset subclass will work with the default data loader configuration. If you have some other sort of key, such as a string or file path, you'll need to set up your data loader with a custom sampler class to access instances of your dataset. See the documentation for more details in this advanced technique. If you don't know the size of your dataset at runtime, for example, if you're using real-time streaming data as an input, you'll want to subclass TorchUtilsDataIterableDataSet. To do this, you need to override the iter method of the IterableDataSet parent class. Be aware that you'll have to do a little extra work to cover the case where multiple workers are asking for data instances from your IterableDataSet. The documentation has example code that demonstrates this. The documentation has example code that demonstrates this. When you make your own dataset, you'll often want to split it into subsets for training, validation, and final testing of your model. The TorchUtilsDataRandomSplit function allows you to do that. When creating a data loader, the only required constructor argument is a dataset. The most common optional arguments you'll set on a data loader are batch size, shuffle, and num workers. Batch size sets the number of instances in a training batch. Determining your optimal batch size is a topic beyond the scope of this video. You'll commonly see this be a multiple of 4 or 16, but the optimal size for your training task will depend on your processor architecture, available memory, and its effect on training convergence. Shuffling will randomize the order of instances via index permutation. Set this to true for training so that your model's training will not be dependent on the order of your data or the configuration of specific batches. This flag can be left to its default of false for validation, model testing, and inference. NumWorkers sets the number of parallel threads pulling data instances. The ideal number of workers is something you may determine empirically and will depend on details of your local machine and access time for individual data instances. Other data loader configuration arguments that you'll see for more advanced cases include a custom sampler class for those cases when your data set is indexed by something other than sequential integers, and timeout, which will be especially important for iterable data sets backed by real-time data streams. And as always, see the documentation for more details. If you will need to transfer your data batches to GPU during training, it is recommended to use pinned memory buffers to do so. This means that the memory buffers underlying your tensors are in page-locked memory, which makes for faster host-to-GPU data transfer. Notes on this important best practice are linked from the interactive notebook accompanying this video. The data loader class makes it easy to do this automatically by setting its pin memory to true when you create the data loader. For this video, we'll be using the Fashion MNIST data set, which contains image tiles of garments, each labeled with one of ten classes, shirts, jackets, shoes, etc. The code in this cell will create dataset objects for separate training and validation data splits, and download the images and labels if necessary. Next, it will create appropriately configured data loaders, so that we don't bother shuffling our validation split. We'll also define the class labels we'll be training against and report the data set sizes. Note that it may take a few minutes to download the data set depending on your network connection, but you only have to do that once. We'll follow the practice of visualizing the output of our data loader to make sure it's what we expect. And sure enough, here are our pictures and labels. So let's move on. Our model for this example is a variant of the Linnet 5 image classifier, which should look familiar if you've watched previous videos in this series. It contains convolutional layers to extract and compose features from the images and a set of fully connected layers to perform the classification. PyTorch includes a broad array of commonly used loss functions suitable for a variety of tasks. These include functions like mean squared error loss for regression tasks, color-backed Leibler divergence for comparisons of continuous probability distributions, binary cross entropy for binary classification and cross entropy loss for multi-class classification tasks. All loss functions compare the output of your model to some label or expected set of values. For our classification task in this video, we'll use cross entropy loss. We'll call its constructor with no arguments, but this particular loss function can be configured to rescale individual class weights, ignore certain classes when computing loss, and more. See the docs for details. In the cell shown here, we'll create our loss function, create some ersatz values for outputs and expected values, and run the loss function against them. Note that the loss function will return a single value for the whole batch. PyTorch optimizers perform the task of updating learning weights based on the backward gradients of the loss function. For more information on backward gradient computation, see the relevant video earlier in this series. PyTorch provides a variety of optimization algorithms, including stochastic gradient descent, AdaGrad, Atom, LBFGS, and others, as well as tools for further refinements, such as learning weight scheduling. The full breadth of optimization algorithms is beyond the scope of this video, but we'll discuss a few features that are common to most PyTorch optimizers. The first commonality is that all optimizers must be initialized with the model parameters. This is best done by calling the parameters method on the model object, as shown here. These are required for every optimizer, because these are the weights that get updated during the training process. This brings up an important point when using PyTorch optimizers. Make sure that your model parameters are stored on the training process. This brings up an important point when using PyTorch optimizers. Make sure that your model parameters are stored on the right device. If you're doing your training on the GPU, you must move your model parameters to GPU memory before initializing your optimizer. If you don't do this, you won't see your loss decreasing over time because your optimizer will be updating the wrong copy of the model's parameters. Most gradient-based optimizers will have some combination of the following parameters. A learning rate that determines the size of the steps your optimizer takes. A momentum value, which causes the optimizer to take slightly larger steps in the direction of strongest improvement over the last few time steps. A weight decay value can be provided to encourage weight regularization and avoid overfitting. Other parameters are usually coefficients or weights specific to an algorithm. For our example, we're going to use simple stochastic gradient descent with learning weight and momentum values specified. Note that the optimal values for these arguments, called hyperparameters, are difficult to know a priori and are often found through grid search or similar methods. Hyperparameter optimization is a topic we'll cover in a later video. If you're working through the interactive notebook accompanying this video, take the time to try different values of the specified parameters to see their effect on the training process. You can also try different optimizers to see which gives you the best accuracy or fastest convergence. Now we have all the pieces we need. A model, a data set wrapped in a data loader, a loss function, and an optimizer. We're ready to train. Along the way, we're going to visualize our training progress with TensorBoard. Here is a function to perform training one epoch, that is, one complete pass over the training data. In this function, enumerate over batches of data provided by the training data loader. Batches are of the size we specified when initializing the data loader, in our case, four. For each batch, we break out the input tensors and the labels. Next, we zero the learning gradients. We tell the model to provide a set of predictions for the input batch. We compute the loss, that is, the difference between the predictions and expected values, and compute the backward gradients of the loss function over the learning weights with the backward call. We tell the optimizer to take a step, adjusting the learning weights based on the gradients we just computed. Finally, we tally the running loss. Every thousand batches, we log the average loss per batch. We also report this value to TensorBoard for graphing. The average loss for the last thousand batches is returned from this function for validation purposes. Next, we'll loop over a number of epochs. For each epoch, we will set the model to training mode, that is, with computation tracking turned on so we can compute backward gradients. We'll train one epoch and record the average loss per batch that it reports. We set the model to inference mode, that is, with computation tracking turned off, since we don't need it for the validation steps below. We do inferences and compute losses for the validation dataset and compute the average loss per batch. We report the average losses for both training and validation, both printing it directly and logging it to TensorBoard. Finally, if this validation loss is the best we've seen for the model, we save the model's state to a file. So let's run this and watch a single epoch. We'll start TensorBoard and see what it reports. And as we would want it to, the loss is decreasing monotonically. Let's watch a few more epochs. It looks like the training and validation losses are diverging. And we see that reflected in the graph. Let's continue making a nice round 10 epochs. It looks from the printed stats that the training loss is settled just above 0.2, but the training and validation losses are still divergent. And that's borne out visually as well. So it looks like our model has converged on its best possible accuracy, at least with these hyperparameters, but we appear to be overfitting to the training data. This may be a sign that our model is overspecified with respect to the the complexity of the dataset or that the dataset is not large enough to infer the general function our model is trying to simulate. In any case, tracking stats, performing consistent validation, and tracking the output visually allowed us to identify an issue to investigate. We've also saved our best-performing model's parameters to a file for further examination. It's worth taking some time to experiment with changes in the model and the optimizer parameters to see how the training results change for a relatively simple case like this. Watch for changes in convergence time, model accuracy, and performance versus the validation set. Model training and the optimization of the training process are deep topics, and the documentation at pytorch.org contains a wealth of helpful information for model training with PyTorch. The Tutorials section of pytorch.org has information on a breadth of training topics, including training techniques such as transfer learning and fine-tuning for leveraging existing trained networks, training generative adversarial networks, reinforcement learning, and Torch.Distributed, PyTorch's framework for distributed training for when the scale of your dataset or your model necessitates training on a cluster of computers. The PyTorch documentation includes full details of the tools we covered in this video and more, full details of the training we covered in this video and more. Full details of the training optimizers and associated tools such as the learning scheduler, full details of the available loss functions, information on the data set and data loader classes, including guidance on making custom data set classes, documentation of torch.distributed and the distributed RPC framework, and complete information on the datasets available in TorchVision, TorchText, and TorchAudio. | Training with PyTorch | 897 | PyTorch | 20210412 | This video covers the tools you'll use to train your PyTorch deep learning model, including The Dataset and DataLoader classes, which ease moving your data from storage into active memory for learning; the suite of loss functions available in PyTorch; PyTorch optimizers, which encapsulate algorithms for adjusting learning weights; and the structure of a basic PyTorch training loop.
Download the Model Training PyTorch notebook here: https://pytorch-tutorial-assets.s3.amazonaws.com/youtube-series/Video+6+-+Model+Training+with+PyTorch.ipynb | 2024-09-19T12:07:02.048158 |
https://www.youtube.com/watch?v=Dk88zv1KYMI | Welcome to the next video in the PyTorch training series. This video will talk about deploying your PyTorch model for inference in production. In particular, this video will talk about putting your PyTorch model in evaluation mode, converting your model to TorchScript and performing inference, using TorchScript with C++, and deploying your model with TorchServe, which is PyTorch's model-serving solution. No matter which deployment method you use, the first thing you should always do is put your model into evaluation mode. Evaluation mode is the opposite of training mode. It turns off training-related behaviors that you don't want during inference time. In particular, it turns off autograd. You may recall from the earlier video on autograd that PyTorch tensors, including your model's learning weights, track their computation history to aid the rapid computation of backward gradients for learning. This can be expensive in terms of both memory and compute and is not something you want to inference time. Eval mode also changes the behavior of certain modules that contain training-specific functionality. In particular, dropout layers are only active during training time. Setting your model in eval mode makes dropout a no-op. Batch norm layers track running stats on the computed mean and variance during training, but this behavior is turned off for eval mode. Next, let's look at the procedure for putting your model in evaluation mode. First, you'll want to load your model. For a Python-based model, that will involve loading the model's state dictionary from disk and initializing your object with it. Then, you call the eval method on your model, and you're done! Your model has now turned off training-related behaviors for inference. It's worth noting that the eval method is actually just an alias for calling the train method with an argument of false. You may find this useful if your code already contains a flag that indicates whether you're doing training or inference. Once you're in eval mode, you can start sending your model batches of data for inference. In the rest of this video, we're going to talk about different methods for deploying your model for inference, but for all of them, making sure your model is in evaluation mode is your first step. So what is TorchScript? It's a statically typed subset of Python for representing PyTorch models. And it's meant to be consumed by the JIT, the PyTorch Just-In-Time Compiler, which performs runtime optimizations to improve the performance of your model. It also allows you to save your model and weights in a single file and load them as a script module object that you can call just as you would your original model. So how do you use TorchScript? Build, test, and train your model in Python as you normally would. When you want to export your model for production inference, you can use the torch.jit.trace or torch.jjet.script calls to convert your model to TorchScript. After that, you can call the.save method on your TorchScript model to save it to a single file that contains both the computation graph and the learning weights for your model. The just-in-time compiler executes your TorchScript model, performing runtime optimizations such as operator fusion and batching matrix multiplications. You can also write your own custom extensions to TorchScript in C++. The code on the right shows what TorchScript looks like, but in the general case you won't have to edit it yourself. It's generated from your Python code. Let's walk through the process of using TorchScript in more detail. The process starts with the model you've built in Python and trained to the point of readiness for deployment. The next step is to convert your model to TorchScript. There are two ways to do this, torch.jit.script and torch.jit.trace. It's important to note the differences between the two techniques for converting your model to TorchScript. torch.jit.script converts your model by directly inspecting your code and running it through the TorchScript. Torch.Jet.Script converts your model by directly inspecting your code and running it through the TorchScript compiler. It preserves control flow, which you'll need if your forward function has conditionals or loops, and it accommodates common Python data structures. However, due to limitations of Python operator support in the TorchScript compiler, some models won't be convertible using torch.jet.script. Torch.jet.trace takes a sample input and traces it through your computation graph to generate the TorchScript version of your model. This doesn't suffer the operator coverage limitations of torch.jet.script, but because it only traces a single path through your code, it won't respect conditionals or other control flow structures that might cause variable or non-deterministic runtime behavior. It's also possible to mix tracing and scripting when converting a model. See the documentation for the Torch.Jet module for notes on mixing the two techniques. It's worth looking at the docs to see the optional arguments for script and trace. There are extra options for checking the consistency and tolerances of your TorchScript model. Now we'll save our TorchScript model. This saves both your computation graph and your learning weights in a single file, which means you don't have to ship the Python file with your model's class definition when you want to deploy to production. When it's time to do inference, you call torch.jet.load on your model, and feed it batches of input in the same way you would the Python version of your model. Everything I've shown you up to now has involved manipulating your model in Python code, even after you've converted it to TorchScript. There are situations and environments though, where you may need high throughput or real-time inference and would like to do without the overhead of the Python interpreter. It may also be the case that your production environment is already centered around C++ code, and you'd like to continue using C++ as much as possible. You may recall from an earlier video in this series that the important tensor computations in PyTorch happen in LibTorch, a compiled and optimized C++ library. PyTorch also has a C++ front end to this library. This means that you can load your TorchScript model in C++ and run it with no Python runtime dependencies. The first thing you'll need to do is to go to pytorch.org and download the latest version of LibTorch, unzip the package, and place it where your make system can find it. This slide shows a minimal CMake file for a project using LibTorch. Note that you'll need to be using C++14 or higher to make use of LibTorch. In Python, you'd import Torch, use torch.jit.load to bring your TorchScript model into memory, and then call your model with an input batch. The process is not so different in C++. First, include torch-script.h. This is your one-stop include for working with TorchScript in C++. Next, declare a TorchJetScript module variable, then use TorchJetLoad to load it into memory. To get predictions from your model, call its forward method with an appropriate input. Here we've created a dummy input with Torch1s. You'd be bringing in your own inputs of whatever size your model requires. Once you have your output predictions as a tensor, you can manipulate them with the C++ equivalents of the tensor methods you're used to in PyTorch's Python front-end. The PyTorch.org Tutorials section includes content walking you through setting up a C++ project as well as multiple tutorials demonstrating aspects of the C++ front-end. Setting up a production model serving environment can be complex, especially if you're serving multiple models, working with multiple versions of models, require scalability, or want detailed logging or metrics. TorchServe is the PyTorch model serving solution that covers all these needs and more. TorchServe loads instances of your model or models in individual process spaces and distributes incoming requests to them. It has a number of features to make it useful for creating ML-based web services. It has data handlers covering common use cases, including image classification and segmentation, object detection, and text classification. It allows you to set version identifiers for models, and you can manage and simultaneously serve multiple versions of a model. It can optionally batch input requests from multiple sources, which can sometimes improve throughput. It can optionally batch input requests from multiple sources, which can sometimes improve throughput. It features robust logging and the ability to log your own metrics. And it has separate RESTful APIs for inference and model management, which may be secured with HTTPS. I'll wrap up this video by walking through setting up and running TorchServe with one of the examples available at github.com slash pytorch slash serve in the examples folder. We'll set up a pre-trained image classification model for inference. First, let's install TorchServe. I'll demonstrate the process for setting it up on a Linux or Mac system, but TorchServe also works on Windows if that's your preferred server environment. First, I'm going to create a new conda environment for TorchServe also works on Windows if that's your preferred server environment. First, I'm going to create a new conda environment for TorchServe. I'm going to clone the source repository because it has convenient scripts for correctly installing TorchServe dependencies. TorchServe dependencies. When you run the dependency install script on a machine with NVIDIA GPUs, you may need to specify what version of the CUDA drivers you have installed. Details are in the install procedure described in TorchServe's README on GitHub. Since I'm installing on a Mac, I can skip that. Now with the dependencies installed, I can either install from source or use pip or conda. I'm actually installing two programs, TorchServe and the TorchModelArchiver, which we'll get to in a minute. If you're installing with conda, don't forget to specify the PyTorch channel with "-cpytorch". Next thing your TorchServe environment needs is a model store directory. All your models served by TorchServe are stored in this folder. You can name it anything you like, but I'm going to keep that simple. Next, we'll need a model to serve. anything you like, but I'm going to keep that simple. Next, we'll need a model to serve. TorchServe expects models to be packaged in a model archive, which contains your model's code and weights along with any other files needed to support your model. For example, in a natural language application, you might have embeddings or vocabularies that you need to package with your model. A model archive is created by the model archiver, which was the package I installed alongside TorchServe above. First, we'll need to download some trained model weights. Next, let's create a model archive from these weights. Taking these arguments one at a time, every model has a name here, DenseNet 161. A model needs a version number. Here we just went with 1.0. We're going to be using a Python-based model. So we use the model file flag to bring in the Python file containing the model class. The serialized file argument specifies the file containing the model weights. If we were loading a TorchScript model, we'd skip the model file argument and just specify the serialized TorchScript file here. We're going to bring in an extra support file, a JSON file containing mappings of the model's trained category IDs to human readable labels. Finally, every model archive needs a handler to transform and prepare incoming data for inference. I'm going to use the built-in image classifier handler, but it's also possible to write your own handler and specify that file here. Now you can see we have a.mar file. This is our model archive. It belongs in the model store, so let's put it there. Now let's start TorchServe. We'll do so with four arguments. The start flag should be self-explanatory. By default, TorchServe stores its current configuration and loads its last config on startup, and the NCS flag suppresses this behavior. The model store flag lets us specify our model store folder, and optionally, we can tell TorchServe to start with the model loaded. We'll specify our new model archive for DenseNet 161. TorchServe puts out a lot of helpful information, all of which is also saved in log files. Let's have a look at the Logs folder now. Note that a log directory has been created alongside our model store. And here you can see we have logs for all TorchServe's behavior and metrics. Now that TorchServe is running, let's do some inference. We'll grab a sample image from the source repo for our input, and then we'll call curl. On the TorchServe side, the default image classifier model takes care of unpacking the image and converting it to a tensor, feeding it to the model, and processing the output. This shows a simple case of using the TorchServe inference API over HTTP, but you can also access it via gRPC or use the KF Serving API used by Kubeflow on Kubernetes. And here we have the top five classes identified by the model. If we want to learn about the status of the server or manage which models we're serving or how many worker processes are devoted to each worker, we can use the management API. Above, use the prediction API on its default port of 8080. The default for the management API is port 8081. Let's use this curl command to see how the server reports what models it's serving. The models endpoint enumerates models being served, which right now is just our DenseNet model. Let's get a little more detail on it. And here you can see it specifies things about this particular model, including how many workers are spun up, etc. We can be more specific if we have more than one version of the model by adding the version number to the URL. This shows the default configuration for a served model with 12 workers running. You can also use the Management API to alter that configuration. So let's change the number of workers. So I set both the Min and Max workers to 4. And now if I ask for the status of our model again, we should see the number of workers has changed. The Management API lets you register new models from a local model archive or from a URL. It lets you unregister models or set the default version of a model to serve or get the status of a model or models that you're serving. Finally, we can stop TorchServe with the stop flag. The TorchServe GitHub repo also has walkthroughs and examples for many common tasks, including specific server management tasks, setting up HTTPS, writing a custom handler, and more. And as always, everything I've described here and more is documented fully in the documentation and tutorials at pytorch.org. | Production Inference Deployment with PyTorch | 940 | PyTorch | 20210412 | After you've built and trained a PyTorch machine learning model, the next step is to deploy it someplace where it can be used to do inferences on new input. This video shows the fundamentals of PyTorch production deployment, including Setting your model to evaluation mode; TorchScript, PyTorch's optimized model representation format; using PyTorch's C++ front end to deploy without interpreted language overhead; and TorchServe, PyTorch's solution for scaled deployment of ML inference services. | 2024-09-19T12:09:53.692425 |
https://www.youtube.com/watch?v=IC0_FRiX-sw | Hello, my name is Brad Heinz. I'm a partner engineer working with the PyTorch team at Facebook. In this video, I'll be giving you an introduction to PyTorch, its features, key concepts, and associated tools and libraries. This overview assumes that you are new to doing machine learning with PyTorch. In this video, we're going to cover an overview of PyTorch and related projects, tensors, which are the core data abstraction of PyTorch, AutoGrad, which drives the eager mode computation that makes rapid iteration of your model possible. We'll talk about building a model with PyTorch modules. We'll talk about how to load your data efficiently to train your model. We'll demonstrate a basic training loop. And finally, we'll talk about deployment with TorchScript. Before we get started, you'll want to install PyTorch and TorchVision so that you can follow along with the demos and exercises. If you haven't installed the latest version of PyTorch yet, visit pytorch.org. The front page has an install wizard shown here. There are two important things to note here. First, CUDA drivers are not available for the Mac. Therefore, GPU acceleration is not going to be available by a PyTorch on the Mac. Second, if you're working on a Linux or Windows machine with one or more NVIDIA CUDA compatible GPUs attached, make sure the version of CUDA toolkit you install matches the CUDA drivers on your machine. So what is PyTorch? PyTorch.org tells us that PyTorch is an open source machine learning framework that accelerates the path from research prototyping to production deployment. Let's unpack that. First, PyTorch is software for machine learning. It contains a full toolkit for building and deploying ML applications, including deep learning primitives such as neural network layer types, activation functions, and gradient based optimizers. It has hardware acceleration on NVIDIA GPUs, and it has associated libraries for computer vision, text and natural language, and audio applications. TorchVision, the PyTorch library for computer vision, text and natural language, and audio applications. TorchVision, the PyTorch library for computer vision applications, also includes pre-trained models and package data sets that you can use to train your own models. PyTorch is built to enable fast iteration on your ML models and applications. You can work in regular idiomatic Python. There's no new domain-specific language to learn to build your computation graph. With AutoGrad, Pytorch's automatic differentiation engine, the backward pass over your model is done with a single function call and done correctly no matter which path through the code a computation took, offering you unparalleled flexibility in model design. Pytorch has the tooling to work at enterprise scale with tools like Torch Script, which is a way to create serializable and optimizable models from your PyTorch code, TorchServe, PyTorch's model serving solution, and multiple options for quantizing your model for performance. And finally, PyTorch is free and open source software, free to use and open to contributions from the community. Its open source nature fosters a rich ecosystem of community projects as well, supporting use cases from stochastic processes to graph-based neural networks. The PyTorch community is large and growing, with over 1,200 contributors to the project from around the world and over 50% year-on-year growth in research paper citations. PyTorch is in use at top-tier companies like these and provides the foundations for projects like Allen NLP, the open-source research library for deep learning with natural language, FastAI, which simplifies training fast and accurate neural nets using best modern practices, ClassyVision, an end-to-end framework for image and video classification, and Captum, an open-source extensible library that helps you understand and interpret your model's behavior. Now that you've been introduced to PyTorch, let's look under the hood. Tensors will be at the center of everything you do in PyTorch. Your model's inputs, outputs, and learning weights are all in the form of tensors. Now if tensor is not a part of your normal mathematical vocabulary, just know that in this context we're talking about a multi-dimensional arraydimensional array with a lot of extra bells and whistles. PyTorch tensors come bundled with over 300 mathematical and logical operations that can be performed on them. Though you access tensors through a Python API, the computation actually happens in compiled C++ code optimized for CPU and GPU. Let's look at some typical tensor manipulations in PyTorch. The first thing we'll need to do is import PyTorch with the importTorch call. Then we'll go ahead and create our first tensor. Here I'm going to create a two-dimensional tensor with five rows and three columns and fill it with zeros. I'm going to query it two-dimensional tensor with five rows and three columns and fill it with zeros. I'm going to query it for the data type of those zeros. And here you can see I got my requested matrix of 15 zeros, and the data type is 32-bit floating point. By default, PyTorch creates all tensors as 32-bit floating point. What if you wanted integers instead? You can always override the default. Here in the next cell, I create a tensor full of ones. I request that they be 16-bit integers. And note that when I print it without being asked, PyTorch tells me that these are 16-bit integers, because it's not the default and it might not be what I expect. It's common to initialize learning weights randomly, often with a specific seed for the random number generator so that you can reproduce your results on subsequent runs. Here we demonstrate seeding the PyTorch random number generator with a specific number, generating a random tensor, generating a second random tensor, which we expect to be different from the first, reseeding the random number generator with the same input, and then finally creating another random tensor, which we expect to match the first, since it was the first thing created after seeding the RNG. And sure enough, those are the results we get. First tensor and the third tensor do match, and the second one does not. Arithmetic with PyTorch tensors is intuitive. Tensors of similar shapes may be added, multiplied, etc. And operations between a scalar and a tensor will distribute over all the cells of the tensor. So let's look at a couple of examples. First, I'm just going to create a tensor full of 1s. Then I'm going to create another tensor full of 1s, but I'm going to multiply it by a scalar 2. And what's going to happen is all of those ones are going to become twos. The multiplication is distributed over every element of the tensor. Then I'll add the two tensors. I can do this because they're of the same shape. The operation happens element-wise between the two of them, and we get out now a tensor full of threes. When I query that tensor for its shape, it's the same shape as the two input tensors from the addition operation. Finally, I create two random tensors of different shapes and attempt to add them. I get a runtime error because there's no clean way to do element-wise arithmetic operations between two tensors of different shapes. Here's a small sample of the mathematical operations available on PyTorch tensors. I'm going to create a random tensor and adjust it so that its values are between minus 1 and 1. I can take the absolute value of it and see all the values turn positive. I can take the inverse sign of it, because the values are between minus 1 and 1, and get an angle back. I can do linear algebra operations like taking the determinant or doing singular value decomposition. And there are statistical and aggregate operations as well, means and standard deviations and minimums and maximums, etc. There's a good deal more to know about the power of PyTorch tensors, including how to set them up for parallel computation on GPU. We'll be going into more depth in another video. As an introduction to AutoGrad, PyTorch's automated differentiation engine, let's consider the basic mechanics of a single training pass. For this example, we'll use a simple recurrent neural network, or RNN. We start with four tensors, x, the input, h, the hidden state of the RNN that gives it its memory, and two sets of learning weights, one each for the input and the hidden state. Next, we'll multiply the weights by their respective tensors. mm here stands for matrix multiplication. After that, we add the outputs of the two matrix multiplications and pass the result through an activation function, here hyperbolic tangent. And finally, we compute the loss for this output. The loss is the difference between the correct output and the actual prediction of our model. So we've taken a training input, run it through a model, gotten an output, and determined the loss. This is the point in the training loop where we have to compute the derivatives of that loss with respect to every parameter of the model and use the gradients over the learning weights to decide how to adjust those weights in a way that reduces the loss. Even for a small model like this, that's a bunch of parameters and a lot of derivatives to compute. But here's the good news. You can do it in one line of code. Even for a small model like this, that's a bunch of parameters and a lot of derivatives to compute. But here's the good news. You can do it in one line of code. Each tensor generated by this computation knows how it came to be. For example, I2h carries metadata indicating that it came from the matrix multiplication of wx and x, and so it continues down the rest of the graph. This history tracking enables the backward method to rapidly calculate the gradients your model needs for learning. This history tracking is one of the things that enables flexibility and rapid iteration in your models. Even in a complex model with decision branches and loops, the computation history will track the particular path through the model that a particular input took and compute the backward derivatives correctly. In a later video, we'll show you how to do more tricks with AutoGrad, like using the AutoGrad profiler and taking second derivatives, and how to turn off AutoGrad when you don't need it. We've talked so far about tensors and automatic differentiation and some of the ways they interact with your PyTorch model. But what does that model look like in code? Let's build and run a simple one to get a feel for it. First, we're going to import PyTorch. We're also going to import torch.nn, which contains the neural network layers that we're going to compose into our model, as well as the parent class of the model itself. And we're going to import torch.nn.functional to give us activation functions and max pooling functions that we'll use to connect the layers. So, here we have a diagram of LENET5. It's one of the earliest convolutional neural networks and one of the drivers of the explosion in deep learning. It was built to read small images of handwritten numbers, the NNIST dataset, and correctly classify which digit was represented in the image. Here's the abridged version of how it works. Layer C1 is a convolutional layer, meaning that it scans the input image for features it learned during training. It outputs a map of where it saw each of its learned features in this image. This activation map is downsampled in layer S2. Layer C3 is another convolutional layer, this time scanning C1's activation map for combinations of features. It also puts out an activation map describing the spatial locations of these feature combinations, which is downsampled in layer S4. Finally, the fully connected layers at the end, F5, F6, and Output are a classifier that takes the final activation map and classifies it into one of 10 bins representing the 10 digits. So how do we express this simple neural network in code? Looking over this code, you should be able to spot some structural similarities with the diagram above. This demonstrates the structure of a typical PyTorch model. It inherits from torch.nn.module. And modules may be nested. In fact, even the COM2D and linear layers here are subclasses of torch.nn.module. Every model will have an init, where it constructs the layers that it will compose into its computation graph, and loads any data artifacts it might need. For example, an NLP model might load a vocabulary. A model will have a forward function. This is where the actual computation happens. An input is passed through the network layers and various functions to generate an output, a prediction. Other than that, you can build out your model class like any other Python class, adding whatever properties and methods you need to support your model's computation. So let's instantiate this and run an input through it. So there are a few important things happening here. We're creating an instance of limit. We are printing the net object. Now, subclass of torch.nn module will report the layers it has created and their shapes and parameters. This can provide a handy overview of a model if you want to get the gist of its processing. Below that, we create a dummy input representing a 32x32 image with one color channel. Normally you would load an image tile and convert it to a tensor of this shape. You may have noticed an extra dimension to our tensor. This is the batch dimension. PyTorch models assume they are working on batches of data. For example, a batch of 16 of our image tiles would have the shape 16 by 1 by 32 by 32. Since we're only using one image, we create a batch of one with shape 1 by 1 by 32 by 32. We ask the model for an inference by calling it like a function, net input. The output of this call represents the model's confidence that the input represents a particular digit. Since this instance of the model hasn't been trained, we shouldn't expect to see any signal in the output. Looking at the shape of the output, we can see that it also has a batch dimension, the size of which should always match the input batch dimension. Had we passed in an input batch of 16 instances, output would have a shape of 16 by 10. You've seen how a model is built and how to give it a batch of input and examine the output. The model didn't do much though, because it hasn't been trained yet. For that, we'll need to feed it a bunch of data. In order to train our model, we're going to need a way to feed it data in bulk. This is where the PyTorch dataset and data loader classes come into play. Let's see them in action. So here I'm declaring matplotlib inline, because we'll be rendering some images in the notebook. I'm importing PyTorch, and I'm also importing TorchVision and TorchVision transforms. These are going to give us our data sets and some transforms that we need to apply to the images to make them digestible by our PyTorch model. The first thing we need to do is transform our incoming images into a PyTorch tensor. Here, we specify two transformations for our input. TransformsToTensor takes images loaded by the pillow library and converts them into PyTorch tensors. Transforms.Normalize adjusts the values of the tensor so that their average is 0 and their standard deviation is 0.5. Most activation functions had their strongest gradients around the zero point, so centering our data there can speed learning. There are many more transforms available, including cropping, centering, rotation, reflection, and most of the other things you might do to an image. Next, we're going to create an instance of the CIFAR-10 data set. This is a set of 32 by 32 color image tiles representing 10 classes of objects, six of animals and four of vehicles. When you're in the cell above, it may take a minute or two for the data set to finish downloading for you. So be aware of that. So this is an example of creating a data set in PyTorch. Downloadable data sets, like CIFR 10 above, are subclasses of Torch, Utils, Data, DataSet. Data set classes in PyTorch include the downloadable data sets in TorchVision, TorchText, and TorchAudio, as well as utility dataset classes such as TorchVision.datasets.image folder, which will read a folder of labeled images. You can also create your own subclasses of dataset. When we instantiate our dataset, we need to tell it a few things. The file system path where we want the data to go, whether or not we're using this set for training, because data sets will be split between training and test subsets, whether we would like to download the data set if we haven't already, and the transformations that we want to apply to the images. Once you have your data set ready, you can give it to the data loader. Now, a data set subclass wraps access to the data and is specialized the type of the data is serving. The data loader knows nothing about the data, but organizes the input tensors served by the data set into batches with the parameters you specify. In the example above, we've asked a data loader to give us batches of four images from train set, randomizing their order with shuffle equals true, and we told it to spin up two workers to load data from disk. It's good practice to visualize the batches your data loader serves. Running the shell should show you a strip of four images, and you should see a correct label for each one. And so here are our four images, which do, in fact, look like a cat, a deer, and two trucks. We've looked under the hood at tensors and autograd, and we've seen how PyTorch models are constructed and how to efficiently feed them data. It's time to put all the pieces together and see how a model gets trained. So here we are back in our notebook. You'll see the imports here. All of these should look familiar from earlier in the video, except for torch.optim, which I'll be talking about soon. The first thing we'll need is training and test datasets. So if you haven't already, run the cell below and make sure the dataset is downloaded. It may take a minute if you haven't done so already. We'll run our check on the output from the data loader. And again, we should see a strip of four images, plain, plain, plain ship. That looks correct. So our data loader is good. This is the model we'll train. Now, if this model looks familiar, it's because it's a variant of Lynette, which we discussed earlier in this video. But it's adapted to take three-color images. The final ingredients we need are a loss function and an optimizer. The loss function, as discussed earlier in this video, is a measure of how far from our ideal output the model's prediction was. Cross entropy loss is a typical loss function for classification models like ours. The optimizer is what drives the learning. Here we've created an optimizer that implements stochastic gradient descent, one of the more straightforward optimization algorithms. Besides parameters of the algorithm, like the learning rate and momentum, we also pass in net dot parameters, which is a collection of all the learning weights in the model, which is what the optimizer adjusts. Finally, all of this is assembled into the training loop. Go ahead and run this cell, as it'll take a couple of minutes to execute. So here we're only doing two training epochs, as you can see from line one. That is two complete passes over the training data set. Each pass has an inner loop that iterates over the training data, serving batches of transformed images and their correct labels. Zeroing the gradients in line 9 is a very important step. When you run a batch, gradients are accumulated over that batch. And if we don't reset the gradients for every batch, they will keep accumulating and provide incorrect values and learning will stop. In line 12, we ask the model for its actual prediction on the batch. In the following line, line 13, we compute the loss, the difference between the outputs and the labels. In line 14, we do our backward pass and calculate the gradients that will direct the learning. In line 15, the optimizer performs one learning step. It uses the gradients from the backward call to nudge the learning weights in the direction it thinks will reduce the loss. So the remainder of the loop just does some light reporting on the epoch number and how many training instances have been completed and what the collective loss is over the training epoch. So note that the loss is monotonically descending, indicating that our model is continuing to improve its performance in the training dataset. As a final step, we should check that the model is actually doing general learning and not simply memorizing a dataset. This is called overfitting, and will often indicate that either your dataset is too small and doesn't have enough examples or that your model is too large. It's over specified for modeling the data you're feeding it. So our training is done. So anyways, the way we check for overfitting and guard against it is to test the model on data it hasn't trained on. That's why we have a test data set. So here I'm just going to run the test data through. We'll get an accuracy measure out. 55%. Okay. So that's not exactly state of the art, but it's much better than the 10% we'd expect to see from a random output. This demonstrates that some general learning did happen in the model. Now when you go to the trouble of building and training a non-trivial model, it's usually because you want to use it for something. You need to connect it to a system that feeds it inputs and processes the model's predictions. If you're keen on optimizing performance, you may want to do this without a dependency on the Python interpreter. The good news is that PyTorch accommodates you with TorchScript. TorchScript is a static, high-performance subset of Python. When you convert a model to TorchScript, the dynamic and Pythonic nature of your model is fully preserved. Control flow is preserved when converting to TorchScript, and you can still use convenient Python data structures like lists and dictionaries. Looking at the code on the right, you'll see a PyTorch model defined in Python. Below that, an instance of the model is created, and then we'll call torch.jit.script my module. That one line of code is all it takes to convert your Python model to Torch script. The serialized version of this gets saved in the final line, and it contains all the information about your model's computation graph and its learning weights. The TorchScript rendering of the model is shown at the right. TorchScript is meant to be consumed by the PyTorch Just-In-Time compiler, or JIT. The JIT seeks runtime optimizations, such as operation reordering and layer fusion, to maximize your model's performance on CPU or GPU hardware. So how do you load and execute a TorchScript model? You start by loading the serialized package with torch.jit.load, and then you can call it, just like any other model. What's more, you can do this in Python, or you can load it into the PyTorch C++ runtime to remove the interpreted language dependency. In subsequent videos, we'll go into more detail about TorchScript, best practices for deployment, and we'll cover TorchServe, PyTorch's model-serving solution. So that's our lightning-fast overview of PyTorch. The models and datasets we used here were quite simple, but PyTorch is used in production at large enterprises for powerful, real-world use cases, like translating between human languages, describing the content of video scenes, or generating realistic human voices. In the videos to follow, we'll give you access to that power. We'll go deeper on all the topics covered here with more complex use cases like the ones you'll see in the real world. Thank you for your time and attention, and I hope to see you around the PyTorch forums. | Introduction to PyTorch | 1,412 | PyTorch | 20210416 | In the first video of this series, we give a broad overview of the parts of the PyTorch toolchain, including: Tensors, automatic gradient computation, model building basics, data loading abstractions, model training, and deployment for inference. This video is meant as a survey, with each topic being covered in more depth in subsequent videos.
Download all notebooks here: https://pytorch-tutorial-assets.s3.amazonaws.com/youtube-series/video1.zip
Download individual notebooks here:
1. Tensors - 04:45 to 07:50
https://pytorch-tutorial-assets.s3.amazonaws.com/youtube-series/video1/1+-+PyTorch+Tensors.ipynb
2. Autograd - 08:00 to 9:50
3. A simple model - 10:00 to 14:00
https://pytorch-tutorial-assets.s3.amazonaws.com/youtube-series/video1/2+-+A+Simple+PyTorch+model.ipynb
4. Datasets - 14:00 to 17:10
https://pytorch-tutorial-assets.s3.amazonaws.com/youtube-series/video1/3+-+Dataset+and+DataLoader.ipynb
5. Training loop - 17:10 to 21:00
https://pytorch-tutorial-assets.s3.amazonaws.com/youtube-series/video1/4+-+A+Simple+PyTorch+Training+Loop.ipynb | 2024-09-19T12:12:40.130993 |
https://www.youtube.com/watch?v=r7QDUPb2dCM | Welcome. In this video, we're going to do a deep dive on PyTorch tensors. In a PyTorch deep learning model, all of your data, inputs, outputs, learning weights, these are going to be expressed as tensors, multidimensional arrays that can contain floating point, integer, or boolean data. In particular, in this video, we're going to go over some of the ways to create PyTorch tensors, how to use tensors in mathematical and logical operations alone or with each other, methods for copying tensors, how to move to GPU for hardware acceleration, manipulating tensor shapes, and the PyTorch NumPy bridge. If you haven't already, I recommend going to the PyTorch examples repo and downloading the interactive notebook that goes with this video. Okay, so in the first cell here, we'll import PyTorch. We're also going to import Python's math module to use some constants it has. First thing we're going to go over is creating tensors. So here we have the very simplest way to create a tensor, the torch.empty call. The torch module has multiple factory methods that will let you create tensors with and without initial values and whatever data type you need. This is the most basic way to allocate a tensor, torch.empty. Here it's going to create a 3x4 tensor. And we can see that the object itself is of type torch.tensor. Now, when you run this cell, you may see random-looking values in the output. That's because torch.empty just allocates memory and does not write any values to it. So whatever happened to be memory at the time you allocated this tensor is what you're going to see here. One quick note about tensors and their dimensions and terminology. Sometimes when we have a one-dimensional tensor, we'll call it a vector because it's just an ordered tuple of dimensions or of coordinates. Likewise, a two-dimensional tensor is often referred to as a matrix, and anything larger we'll always call a tensor. Now, more often than not, you'll want to initialize your tensor with some value. Common cases are all zeros or all ones or random values. And the Torch module provides factory methods for all of these. So here, if we run the cell, you get the things that you might expect from the method name. You get a 2x3 tensor full of zeros, a two by three tensor full of ones, and then tensor full of random values between zero and one. Now, speaking of the random tensor, you might've spotted the call to torch.manual seed right before instantiating that tensor. So what's that about? Now, initializing tensors, such as your model learning weights with random values is very common. But often, you will want your results to be reproducible, especially if you're working in a research setting. So PyTorch gives you a tool for doing that, the manual seed call. Any time you call manual seed with a particular integer seed, you will reinitialize your pseudo random number generators and get the same results again when you call them. So here in the following cell as an example, we call manual seed, we call torch.rand, we get some values out, we call torch.rand again and get some values out. Then when we call manual seed again and do those two torch.rand calls, we'll see both times they yield the same values. So this is how you make sure that identical computations that depend on random numbers will provide identical results if you need that reproducibility. So often when you're performing operations on two or more tensors, they will need to be of the same shape. That is, having the same number of dimensions and the same number of cells in each dimension, or the same extent in each dimension. All the factory methods I've shown you on the Torch module so far have corresponding methods appended with underscore like. And when you pass in a tensor as an argument to empty like or zeros like or one of these other methods you will get out a tensor initialized as you specify but of the same shape as the tensor you passed in as an argument so here we run the cell and we can see that our initial tensor was two by two by three and even though we specified no shape for the other tensors they all will also come out two by two by three and initialized in the way you'd expect when we want to find out the shape of a tensor we can always query its shape property and this will give us back a list of the dimensions and their extents now the last way to create a tensor that we're going to cover is to specify its data directly from a PyTorch collection. So here, if you look at these examples, we have a nested array, and we have a tuple, and then we have a tuple that contains a tuple in a list. And when we call torch.tensor with any of these collections as an argument, we get back a new tensor that is initialized with the data we specified. So here you can see in all three cases, we've gotten back a tensor that is of the shape and containing the data that we expect. So torch.tensor creates a copy of the data. This is important to know. The underlying memory representation of a Python list is not the same as the underlying memory representation of a tensor, so we always copy that data when we're creating a new tensor and initializing it with data in this way. Now, I mentioned earlier that tensors can have a floating point or integer or boolean underlying data types. The simplest way to specify your data type is to do it at creation time. So here in this cell I'm creating an int 16 and a float 64 and you'll see A when I print it out is a set of ones represented as 16-bit integers and you can see none of the ones have that little decimal point after them which is Python's subtle signal that we're dealing with an int rather than a float. We also could see that because we overwrote the default data type, the default is a 32-bit floating point, when we print the tensor, PyTorch helpfully reports to us that this is the underlying data type of that tensor. Likewise, when we do a 64-bit float. The other way to change the data type of a tensor or to move it to a new tensor with your required data type is with the to method. So here I'm calling b.2 and saying I would rather have this data as 32-bit integers. And if you look closely at the values of b and c when they're printed out, the values of c are just the values of b truncated to make them integers. And if you look closely at the values of b and c when they're printed out, the values of c are just the values of b truncated to make them integers. So it's a float to int conversion there. The other thing you may have noticed here is that here I specified the dimensions of the tensor as a tuple. Canonically, PyTorch expects a tuple for the tensor's dimensions. But when the dimensions are the first argument of a method, it lets us cheat a little and just put in a series of integers. But here, to make the code a little more readable, I've separated out the tensor's shape as a tuple. The data types you can use are Boolean, five types of ints, and four types of float. Let's look at basic arithmetic first and how we can make tensors interact with scalars. Now, if we run this cell, see, let's look at the first line. Here we're going to create a tensor full of zeros, and we're going to add the integer 1 to it. So what does that mean to add an integer to a tensor? Well, here we're going to be doing this operation element-wise over every element of the tensor. So every zero in that tensor should have a one added to it. And if we look at our output, that is, in fact, what we see. Likewise, with multiplication, division, subtraction, exponentiation with integer or floating point powers. exponentiation with integer or floating point powers. I'll also note that because the binary operation between a tensor and a scalar puts out a tensor of the same shape you had originally, you can chain together these arithmetic operations intuitively. And you can see that in the line where we created the threes. Now doing these same arithmetic operations with two tensors behaves sort of intuitively like you'd expect. So we take our our twos, our little two by two tensor full of floating-point twos. We're going to use the exponentiation operator. Now we're going to specify the powers one, two, three, and four. And so here the mathematical operation is going to be done element-wise between corresponding elements of each tensor because they're of the same shape. And so if we run this cell you can see that in fact powers of 2 are in the first tensor. We've added two tensors ones and fours to get fives and if we multiply threes and fours we get twelves. tensors 1s and 4s to get 5s, and if we multiply 3s and 4s, we get 12s. A key thing here is that all the tensors that we've shown you in these examples of these tensor binary operations are of identical shape. So we can see when we run this cell that when we try to do an operation with two tensors of different shape, we get a runtime error, even though these two tensors have the exact same number of cells, there's no natural way to map between the two. So in the general case, your tensors will have to be of the same shape. There is one important and useful exception to that, and that is what we call broadcasting. Here is an example. So I've created a random tensor, two rows by four columns, and I'm multiplying it, here you can see, by a tensor with one row and four columns. We actually get out something like what we expect. So we see our random output in the first print statement and our second print statement shows all of that doubled. Now, how did we do this? How did we multiply two tensors of different shapes and get an intuitive result? So broadcasting is a way to perform an operation between tensors that have specific similarities in their shapes. So here, in the cell previous, the one row four column tensor was multiplied element wise by each of the two four column rows of the random tensor. So this is an important operation in deep learning. One common example is using batches of inputs. So your PyTorch machine learning model will, in the general case, not expect a single input for either training or inference, but will expect a batch of inputs. So here, applying an operation to each instance in the batch separately, but returning a tensor of the same shape is what you'd expect. So here we have a random tensor. We had two rows of random values. We multiplied by one row of twos, doing each row individually. And that's akin to the batch operation, that we're performing some operation on each segment of a tensor separately. There are rules for broadcasting. The first one is that no empty tensors. So every tensor must have at least one dimension. And then there are some rules for the relationship between the dimensions and extents of the two tensors that you want to perform an operation on. So when we compare the dimension sizes of the two tensors going from the last to the first, we have to have either each dimension must be equal or one of the dimensions must be of size 1 or the dimension doesn't exist in one of the tensors. Here are some examples that show the rules that I just described. It's probably easier to look at these than to try to reason them out. So we start with a tensor full of 1s. It's a three-dimensional tensor with four layers, three rows, and two columns. And we will multiply that by a random 3x2 tensor. If we look at the output of that, we can see that we've multiplied our random tensor by each of the four layers of our original tensor full of ones. And so what we say, that operation is broadcast over those layers, over that first dimension. Likewise, in the following line, here we multiply a times another random tensor to get c. This time we're doing a 3 by 1 tensor. And so what does that give us? This follows the rules, because in the last dimension, one of the dimensions is one, in the second dimension they match, and then the first dimension is absent in one of the tensors. The output there looks like this. So if we think of our random tensor that went into making C as a three element column vector, tensor that went into making C as a three element column vector. What you can see in the output here, when we multiply it by a bunch of ones, is that every three element column in our output tensor is the same. So we broadcast this operation over every three element column in our tensor. Likewise, in the final example, multiplying a random 1 by 2 tensor times our tensorful of 1s, that does something akin to the previous time, except now instead of every three element column having the operation performed on it, now every two element row has the operation performed on it. Now there's a PyTorch documentation note on this topic of broadcasting, and I urge you to read it if you are interested in more details. Now to give you an idea of some operations that will break the rules and not work, all these lines should give you a runtime error. So in the first case, trying to create B, we always compare the dimensions last to first, and B's last dimension is 3, or has an extent of 3. A's has an extent of 2. Those don't match. We can't broadcast the multiplication here. Likewise with C, the last two dimensions are 2 and 3 instead of 3 and 2. They're different. That won't work. And in the final example, when we try to create an empty tensor and broadcast an operation over one with dimensions, that doesn't work. We can't do it with an empty tensor. Now, PyTorch tensors have over 300 mathematical operations that you can perform on them. Here are a few examples from the major categories. In the first section, we just have some common functions that you might use for manipulating numbers, absolute value, ceiling, floor, and a clamp, which sets min and max values for your tensor. And all those will act on every element of the tensor. Likewise for trigonometric functions. So here I've created a tensor full of angles and I want to get the sine of those angles, and then get the inverse of that sine. And you can see from running the cell that we get back what we expect. We can do bitwise logical operations on either Boolean or integer tensors. Here I've got two integer tensors, and I'm performing a bitwise XOR on them. And we can see that it does exactly what you'd expect if you were doing like a bitwise XOR in C, for example. We can do comparisons of tensors as well. So we'll get a tensor where you specify some data, we'll get a tensor full of ones, we'll test their equality. We can see because the tensor D, its first value was one, but all the rest were different. We can see we got a true and three falses there, which was exactly what we'd expect. There are also a number of reduction operations that you can perform on a single tensor. So, for example, here we can take the maximum of a tensor. No matter how large the tensor, this is going to give us back a single value, or a tensor with a single value. If you want to extract that value from that one element output tensor, we use the.item call. And if you look at the output from these reduction ops, first we get back a tensor with our value in it, and then after the dot item call, we've actually extracted the value. You can also do means, standard deviations. There are convenience methods for performing arithmetic operations, including all of the elements of the tensor. So here with the dot prod call, we're taking the product of all numbers in the tensor. And we can also, you know, it's another example, get all the unique elements of a tensor. And all these behave more or less as you'd expect. Of course, linear algebra is at the heart of a lot of what we do in deep learning. So there are a lot of vector and matrix and linear algebra operations. So, for example, I'll create two vectors that correspond to x and y unit vectors. I'll create two matrices, one of which is just random and one of which is going to be three times the identity matrix. We can do some things with them. So torch.cross gets a cross product between the two vectors. So if we cross the y unit vector with the x unit vector, in that order, we should expect back the negative z unit vector, which is, in fact, what we got. We can do a matrix multiplication of our two matrices. So we have our random matrix. And then when we multiply it by three times the identity matrix, we should expect that we get back a matrix that is about three times the value of our input. And in fact, we see that. And you can do more advanced complex operations like singular value decomposition as well. And this is just a very small sampling of the 300-odd mathematical and logical operations associated with PyTorch tensors. I urge you to look at the documentation to understand the full inventory. Now, sometimes if you're doing a computation with two tensors, you'll see they're intermediate values of some kind. You may not need those intermediate values when you're done. It can be a nice optimization to be able to recycle that memory if you need a tensor of the same size and data type as the intermediate one you're going to throw away. So as an example of that, here again, I'm going to create a tensor full of angles. I'm going to get signs from them. And you can see when we run this cell and check the output that we have A, that's our angles, we have our signs of those angles. And then if we look back here, we can see A has not changed. So here we see torch.sign gave us back a new tensor and left the old one in place. But because we're acting on a single tensor here, and we don't need the input values, we could just put the outputs in that tensor itself. The underscore on a method like sign means that you're altering the tensor in place, that tensor you're putting in as an argument. So now if we do the exact same thing, B is a tensor containing the same angles that A did, and we do the same operation, we do a sign on it, we can see there's our initial angles, there's the output of that sign operation. But this time B has changed. We told it that we wanted to use B's memory for this, and it was of a compatible data type and size. And so B was altered in place. And if you wanna do this with binary arithmetic operations, there are functions for you that behave similarly to the binary PyTorch operators. So here we'll create two by two matrices, A and B. We can look at their values before. Now we'll call the in place addition method. And you can see here that now A has changed. Methods that cover a binary operation, the calling tensor will be the one that is changed in place. And so likewise here, when we do the same for B, if we square the random contents of B, but do it with the mull underscore, we'll get back exactly what we expect. Note that these in-place arithmetic functions are methods of the torch.tensor objects not attached to torch module like a lot of other functions. The calling tensor, as I said, is the one that gets changed in place. There's another option for placing the result of a computation in an existing already allocated tensor. Many of the methods and functions we've seen so far, including the creation methods for tensors, have an out argument that lets you specify a tensor to receive the output. If the out tensor is the same shape as the output and the correct data type that matches the output data type, this can happen without a new memory allocation. So if we run this cell, we'll create tensors A and B, which are two by two random matrices, and then C, which is a two by two matrix full of zeros. And we'll use the Python ID call to get the ID of that object. We'll print it out. We will do a matrix multiplication between A and B, and we'll specify C as that optional out argument. And then if we look at our next print of C, we'll see that it's changed. It's no longer zeros. So C was the same size as both A and B. It was the same data type, 32-bit floating point, PyTorch's default. And so when we do that multiplication, specify C to receive the output, we see that it does. We also assign the output as a return value to another label, D. And if we look, we'll see that C and D are actually the same object. This assertion didn't fire. So C and D are the same object. And we can also see via an assertion that C's ID did not change. We're dealing with the same object in memory. So I just wanted to give you a tangible example of how all that works. And it works for creation calls too. So when we call torch.rand with an optional out argument, again, as long as the shape and data type are what we want, we will get back a tensor in that same object. So we've seen how to create and manipulate tensors, but what about copying them? Now, tensors are like any object in Python, meaning that if you assign it to a variable, that variable is just a label for the object. You're not making a copy of the object. Create a tensor full of ones in this cell. We'll assign it to A. We'll say B equals A. And if we look, when we change a value of A and print B, the value within B has changed as well. So these are just two labels for the exact same object. What if you need a separate copy of the data? It may happen if you're building a complex model with multiple computation paths, and so you want to have separate copies of the input to pass to different portions of the model. So in this case, you would use the clone method. So we're going to do something very similar here. We're going to create a 2x2 matrix full of ones. We're going to say b is the same as a now, but now we're going to clone a instead of just doing the assignment. We can verify that these are in fact different objects in memory with the assertion, and we can verify that the contents are the same with the torch.eq call. And when we change A, again, we can verify that B has not changed when we print it out. So there is one important thing to be aware of using clone, which is that if your source tensor has autograd enabled, then so will the clone of that tensor. We're going to cover this more deeply in the video on autograd, but if you want a light version of the details, here it is. So as an example, let's say you have a complex model with multiple computation paths in its forward method, and the original tensor and its clone, or its copy, are going to contribute to the model's output. Then in order to enable model learning, you want autograd turned on for both tensors. If your source tensor has autograd enabled, which it generally will if it's a set of learning weights or it's derived from a computation involving the weights, then everything has autograd enabled already and you'll get the result you want. On the other hand, perhaps you're doing a computation where neither the original tensor nor its clone need to track gradients. In that case, as long as the source tensor has autograd turned off, you're good to go. There is, of course, a third case. So imagine you're performing some computation in your model's forward function where gradients are turned on for everything by default, but you want to pull out some values midstream to generate metrics. And you want those to be separate from the data that's being acted on. So in this case, you wouldn't want the cloned copy of your source tensor to track gradients. It degrades performance and doesn't actually do anything for you in this example use case. So for this, you can use the detach method on the source tensor. So if we run this, we'll see what we'll create our 2x2 tensor of random values. We will set requires grad equals true. So now every computation subsequent to A will have its history tracked, so we know where it came from and can compute backward derivatives. And that happens with clone. So here we clone b from a, and when we print b, we see that grad function equals clone backwards. That's telling us b is tracking its history. Now instead, if you look at the line where we create c, we say a.detach.clone, and then when we print c, we get the same data, except that we don't get the history attached. If we print a, we'll see that that detach call did not actually alter A at all. It basically just says do everything as if Autograd were turned off. One of the core advantages of PyTorch is hardware acceleration. If you have a CUDA compatible NVIDIA GPU and the driver is installed, you can radically accelerate your performance for both training and inference. Everything we've done so far has been on CPU. By default, when you create a tensor, it's instantiated in CPU memory. So how do we move to the faster hardware? First things first, we should check whether that faster hardware is available. And we do that with torch.cuda.isavailable. So if I run the cell, you should see it will tell us whether or not we have a GPU available on this device. Once you determine that one or more GPUs are available, you need to put the data someplace the GPU can see it. Your CPU works on data that lives in your machine's RAM. Your GPU also has dedicated memory attached to it. Whichever device you want to perform your computation on, you must move all the data needed for that operation to memory accessible by your target device. Now that we know that we have one or more GPUs available, we run. Now there are multiple ways to get your data onto your target device, but the easiest way is at creation time. You can see here, when we have a CUDA GPU available, we will create a tensor with an optional argument. It says device equals CUDA. When we create a tensor that we want to move to a GPU, we'll use the optional device argument on the factory method. All the factory methods I showed you for creating tensors will take this device argument. And here we're putting in the string CUDA to say, we'd like to move this tensor into memory accessible by the GPU. When you print the tensor, you'll see that it reports that the tensor is living on the GPU device. You can also query the number of GPUs. And if there is more than one, you can specify them by index with a colon after the CUDA string. So CUDA colon 0, CUDA colon 1, et cetera, would be the strings you put in as your device argument. As a general engineering practice, it's generally not considered good to specify things with magic constant strings all over your code. So a better practice is to, at the beginning of computation, choose which device you want on whether or not we have a GPU available or not, is either torch.device CUDA or torch.device CPU. Once we have the handle to that device, we can pass that in as the optional argument to creating a tensor, as shown in the last couple of lines there. That's creating a tensor. What if you have an existing tensor? It's living in memory for one device and you want to move it to the other device. How do you go about that? Well, in this cell, we'll demonstrate you can create a tensor in CPU, for example, and you want to move it to GPU. You can either put in the string CUDA or CUDA colon zero, CUDA colon one, or you could pass in a handle to a device that you have retrieved earlier. And you just pass it into the two method like so. This is the same two method that lets you change the data type of a tensor. If you wanted to change both the data type and the device, you would have to specify the names of the arguments. So dtype equals torch.float16, device equals myGPU. But that's how you move all your tensors, learning weights, everything from CPU memory to GPU. Sometimes you'll have a tensor and you'll need it to be in a different shape. So we're going to look at a couple of common cases and the tools PyTorch offers to help you handle that. So one case where you might need to change the number of dimensions is when you're passing a single instance of input to your model. PyTorch models expect batches of input rather than single instances. So for example, if we had an image classification model that took in a three-color image, 226 by 226 pixels, each input image would be represented as a 3 by 226 by 226 tensor. Your model is expecting a shape of n times 3 times 226 times 226, where n is the number of images in the batch, which might be, for example, 8 or 16 while you're doing training. But let's say you're doing training. But let's say you're doing inference one at a time. How do you make a batch of one? We can do that with the unsqueeze method. So we start with a random tensor meant to represent our input, the 3 by 226 by 226 image representation. And then we're going to call unsqueeze 0 and get that tensor and check its shape. And we'll see it's changed to 1 by 3 by 226 by 226. So we added a dimension at the beginning. That's what the 0 on unsqueeze says is we want this new dimension to be the first dimension, the one at index 0. That's unsqueezing. What do we mean then by squeezing? Here we're taking advantage of the fact that any dimension of extent 1 does not change the number of elements in the tensor. So for example here if we create C which is a 1 by 1 by 1 by 1 by 1 tensor, when we print it we see it only has one element and a lot of square brackets. So continuing the example above with our image classifier, let's say the model's output is a 20 element vector for each input. You then expect the output to have a shape of n by 20, where n is the number of instances that were in the input batch. So as many input instances as you put in, you want to have that many predictions coming out. That means for our single input batch, we'll get an output of shape 1 by 20. So what if you want to do non-batch computation with that output? So I think it's just expecting a 20 element vector. For that, we have squeeze. So what's happening here? Here we created a random 1 by 20 vector, again meant to stand in as our output tensor. We can check its shape and verify that it is 1 by 20, and then we can call squeeze zero on it. And so what that's saying is we want to take that dimension of extent one and squeeze it away. We call that, we look at the shape of b, following we can see it's just a 20 element tensor. Now this can only be done with dimensions of extent 1. So in the following stanza with the variables C and D, we create a random PyTorch tensor, and then we try to squeeze the first dimension of it. If you check the shape of the output of squeeze in that case, you'll find it's the same shape you started with. We didn't lose a dimension as we intended because there is no way to do that without destroying data in this case. So squeezing and unsqueezing will only work with dimensions of extent 1. Another place you might use unsqueeze is to help with broadcasting. If you recall earlier, we had some code where we were demonstrating broadcasting, where we took a 4x3x2 tensor, multiplied it by a 3x1 tensor, and the result, once we had the dimensions aligned, was that every three-element column in our original tensor had the same operation applied to it, the same multiplication. Now, what if, instead of 3 by 1, we just had a three-element column vector that we wanted to broadcast some operation over A. If we look at the next cell, we can see that if we just look at A and B as they're created right away, broadcasting can't happen there. The trailing dimensions don't line up. So what do we do? We use unsqueeze in this cell to create an extra dimension of extent one. And then when we multiply the random three-element vector against the larger tensor, we can see every three-element column in the tensor has the multiplication operation broadcast over it. So this can be a way to manipulate dimensions to get broadcasting to work for you without having to transpose dimensions on either of your tensors. Squeeze and unsqueeze methods also have in-place versions, like we saw earlier with the math methods. If I have one input instance and I want to make a batch of one, instead of calling unsqueeze, I can call unsqueeze with the underscore and do the operation in place. Now sometimes you'll want to change the shape of a tensor more radically while still preserving the number of elements in the tensor and their contents. In one case where this happens is again, taking the example of an image classifier. It's common in such models for the beginning of the computation to involve convolutional layers, and the end, the classification piece, to involve fully connected or linear layers. Now the convolutional layers, when they're working with images, will usually put out a three-dimensional tensor. It will have some horizontal and vertical extent meant to map the detection of features onto the image spatially. And then it will have a depth as well. And that will be the number of features that that convolution kernel has learned to recognize. The fully connected layers that follow, though, are expecting just a one-dimensional vector. So how do we translate between these two cases where we have an output vector that becomes an input vector, but it needs to change shape, but keep the same number of cells? Well, one way we can do that is with the reshape method. So here we'll create a 6 by 20 by 20 tensor that's a stand-in for our convolutional layer output, and we will reshape that into a one-dimensional tensor with 6 times 20 times 20 elements that stand in for the input into our fully connected layer. Now, when it can, Reshape will actually put out a view on the original tensor. So instead of creating a new tensor object with new memory allocation, it'll create a new tensor object addressing the same memory underlying the first tensor. So this is important by the way, if you use reshape and it feeds you back a view of the original tensor, changes in the source tensor will be reflected in the new tensor unless you clone it. And there are conditions beyond the scope of this introduction where reshape has to return tensor with the data copied. For more information, there's a documentation on the topic, which I urge you to read. The last topic I want to cover on this introduction to tensors is the data portability between NumPy tensors and PyTorch tensors. Now, in the section above, we mentioned briefly that PyTorch's broadcasting semantics are just like NumPy's. But the connection between the two libraries goes even deeper than that. If you have existing machine learning or scientific code with data stored in NumPy ND arrays, you may wish to express that same data as PyTorch tensors, whether to take advantage of PyTorch's GPU acceleration or its efficient abstractions for building deep learning models. It's easy to switch between NumPy ND arrays and PyTorch tensors. So in the first cell here, I'll be importing NumPy, and we'll create a NumPy array, a two by three matrix full of ones. Now, to express that as a PyTorch tensor, we call torch.fromNumpy with the NumPy array as an argument. We get back a tensor and we print it out. We will see that it's the same shape. It contains the same data and even goes so far as preserving the 64-bit floating point data type, NumPy's default. The conversion is just as easy the other way. So here we'll create a random PyTorch tensor and we'll call it.numpy on it and we'll get back a NumPy in D array. Now it's important to know these converted objects are using the same underlying memory as their source objects, meaning that changes to one are reflected in the other. So when I run the final cell, what you'll see is change the value of one element of a NumPy array, and we see that reflected in the PyTorch tensor that we made from it. Likewise, when I change a value in the PyTorch tensor we made, it's reflected in the NumPy tensor we created from that. So again, if you have code already that's manipulating your data in NumPy, moving over to PyTorch is a breeze. That is our first deep dive on tensors. The topics we covered today and the documentation at pytorch.org should be all you need to get going on the videos later in this series, but as well on your own work within PyTorch. Thank you for listening. | Introduction to PyTorch Tensors | 2,353 | PyTorch | 20210416 | Tensors are the fundamental data abstraction within PyTorch. This video covers everything you'll need to get started using PyTorch tensors, including: How to create and copy tensors, performing math & logic operations on tensors, doing tensor computations on a GPU, reshaping tensors, and NumPy compatibility of PyTorch tensors.
Download the Tensors notebook here: https://pytorch-tutorial-assets.s3.amazonaws.com/youtube-series/Video+2+-+Tensors.ipynb | 2024-09-19T12:17:22.008610 |
https://www.youtube.com/watch?v=OSqIP-mOWOI | Hi, and welcome to the next video in the PyTorch training series on building models with PyTorch. Specifically in this video, we're going to discuss the module and parameter classes in PyTorch, which encapsulate your machine learning models and learning weights respectively, and how they work together. Common neural network layer types, including linear, convolutional, recurrent, and transformer networks, other layers and functions, such as batch norming, dropout, and activation functions and loss functions. Model building in PyTorch is centered around two classes in the torch.nn module, the module class and the parameter class. The module class encapsulates models and model components such as neural network layers. The parameter class is a subclass of Torch.tensor that represents learning weights. Modules and parameters work together. When a parameter is assigned as an attribute of a module, the parameter object gets registered with that module. If you register an instance of a module subclass as an attribute of a module, the parameter object gets registered with that module. If you register an instance of a module subclass as an attribute of a module, the contained module's parameters are also registered as parameters of the owning class. This might be simpler with an example. Let's have a look at this small model. It has two fully connected neural network layers, an activation function in between them, and a softmax at the end. This model shows the common structure of a PyTorch model. First, note that it's a subclass of torch.nn.module. There's an init method that defines the structure of the model, the layers and functions that make it up. There's also a forward method, which composes those layers and functions into the actual computation. When we create an instance of this model and print it, we see that not only does it know its own layers and the attributes they're assigned to, but also the order in which we registered them. When we print out just one of the layers, we get a description of just that layer. Our tiny model and our linear layers are all subclasses of torch.nn.module, so we can access their parameters through the parameters method. Here, we've printed out the learning parameters for the whole model, and port only the linear2 layer. Note that the tensors, making up linear2's parameters, are the same as the last parameters of the whole model. The model registers parameters of submodules it owns recursively. This is important because the model has to pass all of these parameters to the optimizer during training. Next, let's take a look at some of the layer types available in PyTorch. PyTorch has classes encapsulating the common layer types used in modern machine learning models. The most basic type is the fully connected or linear layer, which we saw in the example above. This is a layer where every input influences every output, hence calling it fully connected, and that influences to a degree determined by the layer's weights. If a layer has m inputs and n outputs, its weights will be an m by n matrix. As a simple example, here's a tiny linear layer that takes a three element input and yields a two element output. And there's a random three element vector we'll use as input. Passing this input gives us a two element output vector. If you go ahead and do the matrix multiplication of x times the weights and you add the two biases, you should get the output y. Also, note that when we print the parameters, it lets us know that they require gradients. That is, they're tracking computation history so we can compute gradients for learning. Parameter is a subclass of Torch.Tensor, but this default behavior of setting autograd to true is different from what the tensor class does. Linear layers are widely used in deep learning models. One common place you'll see them is at the end of classifier models where the last layer or last few layers will be linear layers. Convolutional layers are meant to address data that is strongly correlated in space. They're common in computer vision models, where they can be used to detect close clusters of interesting features and compose them into larger features or recognized objects. They come up in other contexts too, such as NLP applications. Often the intent of a word is influenced by the words near it. In an earlier video, we had a look at Linnet 5. Let's take a closer look at how this computation is structured. Linnet 5 is meant to take in black and white 32x32 pixel image tiles of handwritten numbers and classify them according to which digit is represented. Looking at the first convolutional layer in the model, we can see his arguments are 1, 6, and 5. The first argument is the number of input channels. For us, that's going to be 1 because a black and white image only has one channel of data. The second argument, six, is the number of features we want this layer to learn. So it can recognize up to six different arrangements of pixels in the input. And finally, the five is the size of the convolution kernel. You can think of this like a window that gets scanned over the input, collecting features inside this 5-pixel window. The output of this convolutional layer is an activation map, that is a spatial map of where it found certain features. The second convolutional layer is similar. It takes the first layer's output as input. So that's why its first argument is 6. We recognize 6 features in the first comnet, and we take those as our 6 input channels to the next convolutional layer. This layer we're asking to learn 16 different features, which it makes by composing the features from the first layer, and we're only using a 3-osing the features from the first layer. And we're only using a three element window for the convolution kernel. After the second convolution layer has composed its features into a higher level activation map, we pass the output to a set of linear layers that act as a classifier, with the final layer having 10 outputs representing the probabilities that the input represents one of the 10 digits. PyTorch has convolutional neural network layers for one, two, and three dimensional inputs. There are more optional arguments that you can look up in the documentation, such as stride length and padding. Recurrent neural networks are neural networks designed for use with sequential data, such as a string of words in a natural language sentence or a string of real-time measurements from an instrument. An RNN does this by keeping a hidden state. This acts as a sort of memory for what it's seen so far in the sequence. The internal structure of an RNN layer or its variants, the long short-term memory or LSTM, and the gated recurrent unit or GRU is pretty complex and outside the scope of this video. We can show you what it looks like in action with this long short-term memory based part of speech tagger. The constructor has four arguments. It has the size of the input vocabulary, that is the size of the entire inventory of words it's meant to recognize. Each of these words is represented as the index of a one-hot vector in a vocab size dimensional space. The tag set size is the number of tags that you want the model to recognize an output. Embedding dim is the size of the embedding space for the vocabulary, and embedding maps of vocabulary down to a lower dimensional space where words with similar meanings are close together in that space. And hidden dim is the size of the LSTM's memory. The input will be a sentence with words represented as indices of one-hot vectors. The embedding layer will map these down to the embedding DIMM dimensional space. And the LSTM takes the sequence of embeddings and iterates over it, yielding an output vector of length hidden DIMM. The final linear layer acts as a classifier, applying log softmax to the output and the final layer converts the output into a normalized set of estimated probabilities that a given word maps to a given part of speech tag. If you'd like to see this network in action, there's a tutorial on it on pytorch.org. Transformers are multi-purpose neural networks, but we see them very often these days in natural language applications following the success of BERT, which is a transformer model. Now, a discussion of transformer architecture, which is kind of complex, is outside the scope of this video. But know that PyTorch has a transformer class that allows you to define the overall parameters of a transformer model. That is the number of encoder and decoder layers, the number of attention heads, dropout and activation functions, etc. You can even, using the PyTorch transformer class, build the BERT model from this single class with the right parameters. PyTorch also has classes to encapsulate the individual components of a transformer, such as the encoder and decoder and the layers that make them up. There are non-learning layer types that perform important functions in models. One example is max pooling and its twin min pooling. These functions reduce a tensor by combining cells together and assigning the maximum value of those input cells to the output cell. This is one of those things that's probably easier explained by example. So if you look closely here, we have a 6x6 matrix, which we use Maxpool to reduce to a 4x4 matrix, the four elements of it, each one contains the maximum value of a 3x3 quadrant from the input. Normalization layers recenter and normalize the output of one layer before feeding it to another. Centering and scaling intermediate tensors inside your computation has a number of beneficial effects, such as letting you use higher learning rates without problems of vanishing and exploding gradients. Running a cell above, you've added a large scaling factor and an offset to a random input tensor. You should see that the input tensor's mean is somewhere in the neighborhood of 15. After we run it through the normalization layer, you can see that the values are all smaller and grouped around zero. In fact, the mean of this should be very small. This is good because a lot of activation functions, which we'll discuss in a little bit, have their strongest gradients near zero, but they sometimes suffer from vanishing or exploding gradients for inputs that drive them far away from zero. Keeping the data centered around the area's steepest gradient means that learning will tend to happen faster and converge faster, and higher learning rates will be feasible for your training. Dropout layers are a tool for encouraging sparse representations in your model, that is, pushing it to do inference with less data. Dropout layers work by randomly setting parts of the input tensor to zero during training. Dropout layers are always turned off for inference. This forces the model to learn how to do inference against masked or reduced input data. So as an example, if I take a random input tensor and I pass it through a dropout layer twice, you should see that I get the same input tensor back with some random elements set to zero. Dropout layers help a model learn sparse representations by pushing it to do inference with less input data. Dropout layers work by randomly setting parts of the input tensor to zero during training. Dropout layers are always turned off for inference. This forces the model to learn against a masked or reduced dataset. So, as an example, I'll create a random input tensor and pass it through a dropout layer. And I'll do it twice. And you'll see that there are some zeros and there are some values, but the values are always identical. It's randomly setting the zeros throughout the tensor. You can use the optional p argument to set the probability. Here we set it to 40 percent. The default is 0.5. The final ingredients we need to build our models are activation functions and loss functions. Activation functions are part of what make deep learning possible. If you recall the linear layer example earlier, it was just a simple matrix multiplication to take an input vector and get an output vector. And if we stack many such layers together, no matter how many layers we do, we can always reduce that to a single matrix multiplication, which means that we can only ever simulate linear equations with our machine learning model. This is where activation functions come in. By inserting a nonlinear activation function between layers, we develop the ability to simulate nonlinear equations. Torch.n.module offers all the major activation functions including the rectified linear unit in its many variants, hyperbolic tangent, hard hyperbolic tangent, sigmoid, etc. It includes other functions such as softmax that are most useful at the output stage of a model. PyTorch has a variety of common loss functions including mean squared error, It includes other functions such as softmax that are most useful at the output stage of a model. PyTorch has a variety of common loss functions, including mean squared error, which is the same as the L2 norm, and cross-entropy loss and negative likelihood loss, which are useful for classifiers, and others. | Building Models with PyTorch | 848 | PyTorch | 20210416 | PyTorch comes with a rich set of tools for building machine learning models. This video will explain the Module class, which is the parent class of all PyTorch models; the Parameter class, a special kind of tensor representing learning weights in a Module; an overview of some of the common neural network layer types; and a sampling of other layer types and functions that are useful for building ML models.
Download the Building Models notebook here: https://pytorch-tutorial-assets.s3.amazonaws.com/youtube-series/Video+4+-+Building+Models+in+PyTorch.ipynb | 2024-09-19T12:20:35.286292 |
https://www.youtube.com/watch?v=6CEld3hZgqc | If you haven't already, you'll need to set up a Python environment with the latest versions of PyTorch and TensorBoard. The commands on screen show how to do that for Conda and Pip. We'll also be using matplotlib to manipulate images. Once you have the dependencies installed, you can run the companion notebook for this video in the environment you set up. For this model, we're going to be training a simple neural network to recognize different articles of clothing. We'll visualize data elements directly, track the success of the training process, we'll use TensorBoard to look under the hood at the model itself, and we'll do a more advanced visualization of the dataset as a whole and its internal relationships. For a dataset, we'll use Fashion MNIST. This is a set of small image tiles that depict various garments, classified by the type of garment depicted. For a model, we'll use a version of Linnet5 tweaked to accommodate the FashionMNIST dataset. We'll start by importing the libraries we need and the SummaryWriter class from torch.utils.tensorboard. This is the class wrapping the TensorBoard support in PyTorch and will be your primary interface for interacting with TensorBoard. It's good practice to visualize your training data prior to feeding it to your model, especially with computer vision tasks. Let's set up our data set. We'll use TorchVision to download training and validation splits of the data set. We'll talk more about validation shortly. And we'll also set up data loaders for each of the data set splits and define the classes against which we're classifying. Let's visualize a few instances of the data set. We'll use an iterator to pull out a few instances of the data and create a matplotlib helper function to batch them together in a grid. Let's show them in the notebook. So how do we add this to TensorBoard? It's a one-liner to write the data to the log directory. Note that we also call flush on our summary writer object. This makes sure that everything we've logged through the writer has been written to disk. Now let's switch to a terminal and start TensorBoard. We'll copy the URL that the TensorBoard command line gave us and look at the Images tab. Note here that the image we've added has a header containing the label we applied when we saved the image to the TensorBoard log directory. Next, we'll use TensorBoard to help assess our training process. We'll graph the accumulated training loss for regular time steps and compare it to the loss measured against a validation data set. For background, here's a brief aside on what we're doing and why. If you took a math class, it's likely that you'd be given homework problem sets to solve. After a number of homework sets, you'd be given an exam. The exam problems would be similar in nature but different in their specifics to the homework problems you've seen already. This is intended to make sure you learn the content of the class and not just memorize the homework problems. Similarly, we can use a validation dataset, that is, a portion of the total dataset not used for training, to see whether our model is learning generally or whether it's overfitted to the training data, akin to memorizing the training instances instead of modeling the general function we're trying to optimize the model for. Let's set up a training loop with validation checks and graph the results. Here we have a training loop. You can see that at the top of the code, we've declared a variable to accumulate the measured loss of the model's predictions, which will report every thousand training steps. We'll also be doing a separate loss check against the validation dataset. For tracking and comparing two different quantities, we use the add scalars call on Summary Writer, which allows us to add a dictionary containing multiple scalar values, each with distinct tags that get their own line on the graph. Let's run the cell and see what that looks like. Switching over to TensorBoard and looking at the Scalars tab, we can see that our loss is decreasing monotonically over the training run. This is a nice reassurance that the training is working. But are we overfitting? Looking at the graph, we can see that the validation and training curves are converging nicely. Next, let's use TensorBoard to better understand our model and how data flows through it. To do this, we'll use the add graph method on our summary writer. This method takes as arguments the model and a sample input that will be used to trace data flow through the model. We'll run the cell and switch over to TensorBoard. And going to the graphs tab, we can see a very simple graph showing the model with inputs going in one side and output submitted from the other. Of course, we'll want more detail in this, and we can get it by double-clicking on the model node in the graph. And here we can see a graph containing all of our layers and arrows indicating how data flows through them. Note that because the model uses the same maxpool object twice, the second convolutional layer appears to be embedded in a loop. But as you can see from the code, the flow is more linear than that. We've already used TensorBoard to display visualizations of instances of our data, but what about the whole dataset? An embedding is a mapping of instances from a higher dimensional space to a lower dimensional one. This is a common technique in NLP. If you have a 10,000 word vocabulary, represented by one-hot vectors, your words are unit vectors in a 10,000-dimensional space. If you train an embedding layer that maps these vectors to a lower-dimensional space, relationships can emerge. For example, the new vectors for words like good, excellent, and fabulous will tend to be clustered in that lower-dimensional space. In our case, our 28x28 image tiles can be thought of as 784 dimensional vectors. We can use the Summary Writer's Add Embedding method to project this down to an interactive 3D visualization. Here's a bit of code to select a random sample of our data, label it, and project it. Note that as always, we use the flush method to ensure that all our data is written to disk. Switching over to TensorBoard, we can see on the Projector tab a 3D visualization of our new embedding. Zoomed out, we can see some large structures, some arcs within the 3D space. Zooming in on some of these structures, we can see that some of these arcs have clustered similar garment types. Zoom in on your own sample of the data and see if you can identify patterns in how different types of garments are clustered in this 3D projection. For more information on PyTorch's TensorBoard support, you can visit the PyTorch documentation at pytorch.org for a full documentation of torch.utils.tensorboard.summarywriter. The PyTorch tutorials section at pytorch.org has tutorials on using TensorBoard. And the TensorBoard documentation, of course, has more detail about TensorBoard itself if you want a deeper view of what the summary writer is doing under the hood. | PyTorch TensorBoard Support | 432 | PyTorch | 20210412 | TensorBoard is a well-known tool for visualizing the activity of machine learning models and training progress. PyTorch provides support for TensorBoard use, with no dependency on other ML frameworks. This video will show you how to visualize graphs of training progress, input images, model structure, and dataset embeddings with TensorBoard and PyTorch.
Download the TensorBoard Support notebook here: https://pytorch-tutorial-assets.s3.amazonaws.com/youtube-series/Video+5+-+Tensorboard+Support+in+PyTorch.ipynb | 2024-09-19T12:22:31.804597 |
https://www.youtube.com/watch?v=M0fX15_-xrY | Hi, and welcome back to the PyTorch training video series. In this video, we're going to cover Autograd, which is PyTorch's tooling for rapidly and dynamically computing the gradients that drive backpropagation-based learning in your machine learning model. In particular, we're going to go over what Autograd does for you and why it makes PyTorch so flexible and powerful for machine learning? We'll walk through a basic code example to give you a feel for what AutoGrad is doing under the hood. Then we'll see AutoGrad's role in a training loop. After that, we'll talk about why and how to turn the AutoGrad feature off and on for a particular tensor or a particular context. We'll see the AutoGrad profiler in action and we'll look at the AutoGrad high-level API that was released with PyTorch 1.5. The Autograd feature of PyTorch is a large part of what makes PyTorch a fast and flexible framework for building deep learning projects. It does this by easing the computation of the partial derivatives, also called gradients, that drive backpropagation-based learning. I'm not going to belabor the math here, although if you'd like a refresher, go ahead and download the notebook and follow along in detail. The important concept here is that when we're training our model, we compute a loss function, which tells us how far our model's prediction is from the ideal. We then need to find the partial derivatives of the loss function with respect to the model's learning weights. These derivatives tell us in what direction we have to adjust the weights in order to minimize the loss function with respect to the model's learning weights. These derivatives tell us in what direction we have to adjust the weights in order to minimize the loss. This involves the iterative application of the chain rule of differential calculus over every path through the computation. AutoGrad makes this computation faster by tracing your computation at runtime. Every output tensor from your model's computation carries with it a history of the operations that led to it. This history allows the rapid computation of derivatives over the graph, all the way back to your model's learning weights. In addition, because this history is gathered at runtime, you'll get the correct derivatives even if your model has a dynamic structure with decision branches and loops. This offers a lot of flexibility over tools that depend on analysis of a static computation graph. Let's have a look at a simple example of autograd in action. First, we'll import PyTorch and matplotlib so we can graph some stuff. Next, we'll make a one-dimensional tensor holding a bunch of values between 0 and 2 pi, and we'll add the requires grad equals true flag. Note that when we print A, Pytorch lets us know that A wants gradients computed on any calculation it's involved in. Now we'll perform a computation. Here we'll just take the sign of all the values in A and we'll graph that and that looks right. If you notice the calls that attach here, I'll be covering those later in the section on turning autograd on and off. So printing the tensor B, we see that PyTorch tells us it has a grad function. This means that b came from a computation where at least one of the inputs required the calculation of gradients. The grad function tells us that b came from the sine operation. Let's perform a couple of more steps. We'll double b and add 1 to it. When we do this, we see that the output tensors again contain information about the operations that generated them in the Grad function property. By default, AutoGrad expects the final function in a gradient computation to be single-valued. This is the case when we're computing the derivatives over learning weights. The loss function has a single scalar value for its output. It doesn't strictly have to be single value, but we'll go over that in a bit. Here we'll just sum the elements of the tensor and call that the final output for this computation. We can actually use the grad function property of any output or intermediate tensor to walk back to the beginning of the computation history, using the grad functionction's Next Functions property. Here, you can see that D knows it came from an addition operation, which knows it came from a multiplication operation, and so on back to A. A does not have a Grad Function. It is an input or leaf node of this computation graph, and so represents the target variables for which we want to compute the gradients. So we've looked a little at the history tracking, but how do we actually compute gradients? That's easy. Just call the backward method on the output tensor like so. Looking back over the computation, we have a sine, the derivative of which is cosine. We have the multiplication by 2, which should add a factor of 2 to our gradient, and the addition, which should not change the derivative at all. Graphing the grad property on A, we see that, in fact, the computed gradients are twice the cosine. Be aware that gradients are only computed for inputs or leaf nodes of the computation. Intermediate tensors will not have gradients attached after the backward pass. So we've peeked under the hood at how AutoGrad computes gradients in a simple case. Next we'll examine its role in the training loop of a PyTorch model. To see how AutoGrad works in your training loop, let's build a small model and watch how it changes over a single training batch. We'll define and instantiate a model and create some stand-in tensors for the training input and ideal output. You may have seen that we did not specify requires Grad equals true for the model's layers. Within a subclass of torch.nn.module, the gradient tracking is managed for you. If we look at the layers of the model, you can see the randomly initialized weights and that they have no gradients computed yet. You might have noticed there's a grad function on the weights I sampled. There's no grad function on the weights themselves because they're a leaf node of the computation graph, but the slice operation counts as a differentiable operation, so my little slice of the weights is a grad function indicating that it came from the slice. So let's see how this changes after one training batch. For a loss function, we'll use the square of the Euclidean distance between our prediction and our ideal output. We'll also set up a basic optimizer using stochastic gradient descent. Note that we initialize the optimizer with the model's learning weights or parameters. The optimizer is responsible for adjusting the weights. So what happens when we call loss.backward? We can see that the weights have not changed, but that we do have gradients computed. Again, these gradients guide the optimizer in determining how to adjust the weights to minimize the loss score. In order to actually update the weights, we have to call optimizer.step. And we can see that the weights have changed. This is how learning happens in your PyTorch models. There's one more important step in the process. After you call optimizer.step, you need to call optimizer.zerograd. If you don't, the gradients will accumulate over every training batch. For example, if we run a training batch five times without calling zerograd, you can see the gradients turn up mostly with much larger magnitudes because they were accumulated over each batch. And you can see that calling zero grad resets the gradients. If your model is not learning or training is giving you strange results, the very first thing you should check is whether you're calling zero grad after each training step. Sometimes you'll want to control whether gradients get tracked for a calculation. There are multiple ways to do this, depending on the situation. The easiest is just to set the requires grad flag directly, like so. And we can see that b1 has a grad function, but b2 does not, because we turned off history tracking in a prior to computing b2. If you only need autograd turned off temporarily, you can use the torch.nograd context manager. When we run this cell, we can see that history is tracked for all computations except the one in the nograd context. nograd can also be used as a function or method decorator, causing history tracking to be turned off for computations inside the decorated function. there's a corresponding context manager torch.enablegrad for turning autograd on in a local context. it may also be used as a decorator. finally, you may have a tensor tracking history, but you need a copy that doesn't. In this case, the tensor object has a detach method, which creates a copy of the tensor that is detached from the computation history. We did this above when we graphed some of our tensors. MapPlotLib expects a numpy array, but the implicit tensor and numpy array conversion is not enabled our tensors. MapPlotLib expects a NumPy array, but the implicit tensor and NumPy array conversion is not enabled for tensors tracking history. Once we make our attached copy, we're good to go. There's one more important note about Autograb mechanics. You have to be careful about using in-place operations on tensors tracking gradients. Doing so may destroy information you need to correctly do your backward pass later. In fact, PyTorch will even give you a runtime error if you try to perform an in-place operation on an input tensor that requires gradients. AutoGrad tracks every step of your tensor computation. Combining that information with time measurements would be useful for profiling gradient-tracked computations, and in fact, this feature is part of Auto tensor computation. Combining that information with time measurements would be useful for profiling gradient track computations. And in fact, this feature is part of AutoGrad. The next code cell shows basic usage of the profiler. The AutoGrad profiler can also group results by code blocks or input shape and can export results for other tracing tools. The link documentation has full details. PyTorch 1.5 saw the introduction of the AutoGrad high-level API, which exposes some of the core operations underlying AutoGrad. In order to explain this best, I'm going to go into some more mathematical depth on what AutoGrad is doing under the hood. So say you have a function with n inputs and m outputs. We'll say y equals a function of x. The complete set of partial derivatives of the outputs with respect to the inputs is a matrix called the Jacobian. Now, if you have a second function, which we'll call l, which equals g of y, that takes an m-dimensional input that is the same dimensionality as the output of our first function, and returns a scalar output, you can express its gradients with respect to y as a column vector. It's really just a one-column Jacobian. To tie this back to what we've been talking about, imagine the first function as your pie torch model, with potentially many inputs and many learning weights and many outputs, and the second function as a loss function, with the model's output as input and the loss value as the scalar output. If we multiply the first function's Jacobian by the gradient of the second function and apply the chain rule, we get another column vector. This column vector represents the partial derivatives of the second function with respect to the inputs of the first function, or in the case of our machine learning model, the partial derivative of the loss with respect to the learning weights. Torch.autograd is an engine for computing these vector Jacobian products. This is how we accumulate the gradients over the learning weights during the backward pass. This is how we accumulate the gradients over the learning weights during the backward pass. For this reason, the backward call can also take an optional vector input. The vector represents a set of gradients over the output tensor, which are multiplied by the Jacobian of the autograd trace tensor that precedes it. Let's try a specific example with a small vector. If we tried to call y.backward now, we'd get a runtime error, and a message that gradients can only be implicitly computed for scalar outputs. For a multidimensional output, AutoGrad expects us to provide the gradients for those three outputs that it can multiply into the Jacobian. Note that the output gradients here are all related to the Jacobian. Note that the output gradients here are all related to the power of the 2, which we'd expect from the repeated doubling operation in the previous cell. There's an API on AutoGrad that gives you direct access to important differential matrix and vector operations. In particular, it allows you to calculate the Jacobian and Hessian matrices of a particular function for particular inputs. The Hessian is like the Jacobian, but expresses all partial second derivatives. Let's take the Jacobian of a single function and evaluate it for two single element inputs. If you look closely, the first output should equal 2 times e to the x, since the derivative of e to the x is the exponential itself, and the second value should be 3. Now, you can of course do this with higher order tensors. Here we've computed Jacobian of that same adding function with a different set of inputs. There's also a function to directly compute the vector Jacobian product if you provide the vector. Autograd's JVP method does the same matrix multiplication as VJP with the operands reversed. The VHP and HVP methods do the same for the vector Hessian project. For more information including important performance notes, see the documentation for the new AutoGrad functional API. | The Fundamentals of Autograd | 841 | PyTorch | 20210416 | Autograd is the automatic gradient computation framework used with PyTorch tensors to speed the backward pass during training. This video covers the fundamentals of Autograd, including: the advantages of runtime computations tracking, the role of Autograd in model training, how to determine when Autograd is and is not active; profiling with Autograd, and Autograd's high-level API.
Download the Autograd notebook here: https://pytorch-tutorial-assets.s3.amazonaws.com/youtube-series/Video+3+-+Autograd.ipynb | 2024-09-19T12:26:41.622836 |
https://www.youtube.com/watch?v=oQxQIXf0kfQ | Neo4j 5, ushering in a new era in graph technology that is transforming the way we use connected data. And with this latest update, Neo4j is more powerful than ever. You want enhanced performance? Neo4j is unmatched in the market. Neo4j was already 1,000 times faster than relational databases. And with version 5, Neo4j widens the performance lead with new features like K-Hop query optimization, graph pattern matching, and granular indexes. You want your data to scale? With 5, Neo4j has unbounded scalability. Autonomous clustering lets you scale out horizontally with less manual overhead and better use of computing resources. And when your data sets lie in multiple databases, Fabric instantly helps you create a composite graph of insights hidden in disparate silos. You want agility? Neo4j 5 has unparalleled operational agility. Neo4j fits your IT strategy, whether it's on-premise, cloud, hybrid, or multi-cloud. And with Neo4j Ops Manager and a continuous release support model, it's easier than ever to manage your database in the cloud. Neo4j 5 helps you achieve unbounded performance, scale, and agility. Find out more at neo4j.com. The New Year Is Coming ДИНАМИЧНАЯ МУЗЫКА For For over a decade, Neo4j has been helping the world make sense of data. Today, Neo4j, the leader in graph database and analytics technology, is unlocking new possibilities in generative AI. Together, Neo4j and large language models allow you to use a public, pre-trained LLM with your own data, subject to your privacy controls. Reduce hallucination through a powerful combination of deterministic facts and probabilistic conclusions. And enhance explainability and transparency through explicit knowledge representation for getting transparent reasoning and explainable AI. Neo4j's scalable and flexible technology seamlessly integrates with generative AI frameworks like Langchain, Vertex AI, OpenAI, and beyond, democratizing access to the world's information while setting a new standard for AI accuracy, transparency, and explainability. Unlock new possibilities in generative AI today at Neo4j.com. There are many types of databases today. The tried and tested relational database, the new and popular DocumentDB, and many others. And all of these have their strengths. But one place they all fall short is in handling complex connections between your data. Sure, any database may be able to return friend of a friend queries of 3 or 4 degrees, but what if you need 20 degrees and need it quickly? That's where Neo4j comes in. Unlike other databases, Neo4j Graph Database doesn't need to compute the relationships between your data at query time. The connections are already there, stored right in the database. Because of this, queries of deeply connected data are orders of magnitude faster. We get used to the limitations of the tools we work with. Neo4j blows those limits wide open, enabling queries never before imagined at speeds never thought possible. That's why Neo4j has become a key technology driving business-critical applications for hundreds of Fortune 500 companies, government agencies, and NGOs. Learn more at neo4j.com. Are you interested in graph data science but don't know where to begin? The Neo4j Sandbox is the best way to get started and learn graph data science basics with interesting preloaded datasets like fraud detection, contact tracing, and Twitter trolls. It's easy to jump right in. follow our guides and explore visualization with Bloom. There's no download required and you can get up and running in less than a minute. Try it for free today. Ketutak សូរបស់ពីបានប្រាប់ពីប្រាប់ពីប្រាប់ពីប្រាប់ពីប្រាប់ពីប្រាប់ពីប្រាប់ពីប្រាប់ពីប្រាប់ពីប្រាប់ពីប្រាប់ពីប្រាប់ពីប្រាប់ពីប្រាប់ពីប្រាប់ពីប្រាប់ពីប្រាប់ពីប្រាប់ពីប្រាប់ពីប្រាប់ពីប្រាប់ពីប្រាប់ពីប្រាប់ពីប្� Graph databases have been the fastest growing database category for the past decade. The reason for this is simple. Graph databases are highly optimized to understand and work with the complex relationships between your data. Unlike other kinds of databases, a native graph database doesn't need to search for data connections at query time. Those relationships are already there, stored right in the database. Because of this, queries of large, complex, connected data are orders of magnitude faster. A graph database doesn't necessarily need to replace your current database. By adding graph capabilities to your current infrastructure, you can keep the technology you already have but greatly improve it with the power to uncover rich data connections, reduce time to market, and run faster queries. Taking advantage of the connections that already exist in your data enables you to get ahead of the competition. Every business needs to leverage data relationships and leverage them faster and more efficiently. Graph databases deliver those capabilities. The speed and efficiency advantage. Graph databases deliver those capabilities. The speed and efficiency advantage of graph databases has driven new, real-time applications in fraud detection, customer 360, machine learning, and knowledge graphs, just to name a few. Because of this, graph databases have become a key technology creating competitive advantage for hundreds of Fortune 500 companies, government agencies agencies and NGOs we asked 100 senior tech executives CIOs CTOs and chief data officers what they need to bridge data silos, boost AIML projects, and open up new revenue streams. A massive 88% of them said the same thing, knowledge graphs. But why? Leaders know the value of data, keenly aware that it holds the answers to their most pressing business questions. They need insights to improve decision-making and enhance business performance. Enter Knowledge Graphs. A Knowledge Graph is a map of everything an organization knows about a given topic. Knowledge Graphs help business leaders by functioning as a non-disruptive insight layer on top of their existing data landscape. Knowledge Graphs drive intelligence into data to significantly enhance its value, without changing any of the existing data infrastructure. Knowledge Graphs enhance existing technologies by providing better data management, better predictions, and better innovation. Last but not least, Knowledge Graphs lend themselves to multiple use cases like fraud detection, digital twin, patient journey, bill of materials management, recommendations, and IT operations across many industries, including financial services, healthcare, life sciences, manufacturing, supply chain, and retail. The C-suite is waking up to the possibility of knowledge graphs. Shouldn't you? hello good morning good evening good afternoon and thank you for joining this Neo4j Live today. Well, actually it's not a Neo4j Live, sorry. It's Road to Notes. So we are starting with Road to Notes today. This is the first day of our series heading up towards Notes in 7th of November. So I'm very much looking forward to these workshop sessions that are taking place basically from tomorrow. We will do from today, we'll do a fundamentals course this week, another fundamentals one next week and further training sessions over the next coming weeks until November. So it's great to have you here. It's great to do this workshop today with you all. Before we dive in, I'd like to share a few things with you. So as you see, if you joined in BigMarker today, | Road to NODES: Neo4j Fundamentals | 7,972 | Neo4j | 20240919 | Not familiar with Neo4j? This workshop provides an introduction to show you the basics of Neo4j by explaining the core concepts of graph databases, such as nodes, relationships, properties, and labels.
Neo4j Fundamentals: https://graphacademy.neo4j.com/courses/neo4j-fundamentals/
Movie Data https://dev.neo4j.com/ga-movie-data
Aura Console https://dev.neo4j.com/ga-aura
Graph Databases Book: https://neo4j.com/books/graph-databases/
Sizing Calculator: https://neo4j.com/hardware-sizing/
#neo4j #graphdatabase #workshop #cypher #datamodeling | 2024-09-19T12:42:06.279037 |
https://www.youtube.com/watch?v=Ma4KYIDKBtA | Hello everybody, good morning, good afternoon, good evening and welcome to Neo4j Live. Today, transforming engineering constructions and architecture with GraphRack and overall a nice conversation about AI, GenAI and graphs in general, I think, with my dear friend and former colleague Stefan Wendin. Hi Stefan, how's it going? Yeah, amazing. Nice seeing you former colleague, Stefan Wendin. Hi, Stefan. How's it going? Yeah, amazing. Nice seeing you, man. Such a long time. But I feel like yesterday, when I heard your voice, it's like, boom, it just took me back. Super nice. Super nice. So many good memories. Exactly. Good memories, good times. So, yeah, it's great to have you here today, Stefan, and to talk a little bit about Graph and also about AI and all things that happened in the past a little bit. today so um more more talking and less uh hands-on or or or slides but nevertheless i'd like to keep it as interactive as possible so if you have any any watches live in chat right now keep keep us keep us uh focused so if you see something i want to know more a little bit more have i have an own idea have something to comment on then please just type type it away and I'll read out your comments or questions at any time. So we can make this a virtual conversation between you and I and obviously with everybody else on the internet. So that would be great to expand the floor a little bit. Yeah, no, sounds good. So I always love to derail whatever thought might arise in our head and our plans. It's always good to just throw in a curveball here and there. After all, that's the real graph, right? The other one is just a set of tables and all the documents with plans. Nevertheless. Exactly. nevertheless exactly so um maybe before before we go into into the topic we have maybe maybe stefan do you want to want to share a few words who you are what you do and and the usual question is probably a little bit not as not as necessary i guess but you know what brings you to graph is is maybe uh maybe a little longer story i don't know if we have time for that today, but maybe share a few words still about you and what you currently do. Yeah, yeah. So I came from NEO, as you know, but maybe not everybody knows. So I run the innovation lab among many things. After a couple of years, five, six years, I moved on. And nowadays, what I tend to do is, where the main concern was to figuring out the space with graphs, which I love, but also in combination with other AI things, such as language models and so. So I am now doing two big assignments on 50-60%. You can also already know that I'm bad at math or have more hours than most people, which I do have. But one part is that I work as head of AI at Turians, an old school engineering company. So not the software engineering that the world we live in, the one that builds houses, architecture, structural engineering, a super nice company with the goal of creating better societies. And what could be better than super many hard mathematical problems that needs data with awesome people, right? So I run that. In that we're also building a GraphRag function, which we can return to. And then on the other one, which I haven't actually told anyone yet. So this is the first. You do first. Yeah, exactly. Here it is. I have the email address already set up. That's how far fresh this is. So I also work at AI Sweden, which is Swedish National Center for Applied Artificial Intelligence. Oh, very cool. Congratulations. Yeah, thank you. Helping bridge in like the science part with the applied part. And I think that's what I do the best. I read a bunch of papers, try to figure them out, ask even more stupid questions, and then I go prototype. And then I try a bunch of papers, try to figure them out, ask even more stupid questions, and then I go prototype, and then I try to kind of simplify that into something that we can repeat. So, yeah, that's a little bit what I do. Let me know. You know me. If you ask too much, I'm going to keep talking about myself now for the remaining of the time, so maybe just shut up. Yeah, no, that's good.'s that's great and that's that's cool uh cool too so you have these two these two you know jobs basically to to look at this and and you see basically firsthand so that's that's i guess where we where we started the conversation a little bit i i saw you post a lot and you just said this yourself you read and then you summarize things and then you bring the information the the the the knowledge you get you gathered across uh to your teams and i i i read a couple of your blog posts um you you posted on linkedin very actively um towards when this all journey i topic basically kicked off towards towards the end of last year and throughout this year and um I think that's really great that you have this inside view, basically. You come from a graph database company. You know the ins and outs of Neo4j. You see now the application of the technology in an AI space. And you bring that on into technically open and interesting companies. So I think that's a pretty cool story and a pretty cool... Yeah, I remember. I tried to figure out when it was. It must have been around like, what can it be, like 2020 or something, I had a conversation with, I think, Alicia or Jose about this idea of transformers and how, or if they were, then graph neural networks and not. And a bunch of months later, then Shetana Joshi wrote a very interesting paper and an article about this saying that they could be explained by the same kind of thing. And I usually use that analogy. I remember talking about this with Peter Velocovit at the Neo4j office for a podcast that never happened. But nevertheless, we talked about it, how almost graph-neural networks and transformer structure together are like, you know, beyond boy and John McAroe. Now, you know that I'm a seventies kid growing up, but what they did, they pinned against each other as mortal, like enemies. They hated each other, which was not the case, right? They were friends. They just have very different ways of doing things. But what they did, they lifted tennis. different ways of doing things but what they did they lifted tennis so even if you were not interested in it it actually elevated it up so that you got more air time and this is equally how i see now the benefits of graphs and the transformer structure together because they complement each other and build something much larger than the individual pieces and help out with shortcomings on on both sides and i think that's that's where technology becomes awesome because literally this is just how people are right if i'm good at x i find a person that are good at x and then we're still just no X. But if I find a person that know Y, we can take X and Y and we can compute set or whatever we like to do. Right. So I think that's the beauty of these. But yeah, very, very cool. Yeah, absolutely. That's super cool. And maybe we can stay a little bit in the of months um to before we dive in what you're currently doing because i think that was that was super interesting you you summarized a bunch load of articles and i can i can um i can link one um in in chat as well that that i found was very interesting that started in um you posted at the end of october called the future of ai in business unifying knowledge graphs and large language models. I guess that was kind of like a little bit of the kickoff of your, of your posting journey, I would say. How, how, how much, I mean, I guess it didn't surprise you, but how much were you, were you surprised? Maybe if at all of, of how, how, how much it took off within yeah that weeks almost yeah that's also yeah because like i i post the reason why i post is for me like uh it comes from one of my old mentors right so it comes from there's one thing to understand it by reading but then when you have to rewrite it in your own words that's a whole other level of understanding it. And then to publish it and put it out there, that's the next level because somebody is going to go poke on you. You got this wrong. You're an idiot. You're X, Y, Z. It's like, okay. So then I started just posting for myself, kind of, you know, I'm not good at that kind of imposter syndrome kind of thing. But I think that one was, so in the beginning it was my normal i have a fairly okay network right it's the same usual people that love the same usual thing if i post about a game theory the strategic side of my network was gonna love it if i post about something else the other side of my network i love it but i think what is also interesting with this uh almost like uh ironically or ironically, or it's on such a meta level, because I said that it unified. And what I also saw was that the business side of my people and the engineering side of my network, they also unified in sharing this. So I think that post actually literally kind of double exploded because it's also by the one sharing and commenting and reaching out about it also made it actually true in that sense so they almost like validated the the thesis in in that sense so yeah it was it was super cool and uh and really i think because i posted a couple of shorter versions on the similar topic i think it wasn't anything super particular about this but i think it was the timing all of a sudden people just got it right they got the graph part they got the transformer or llm part and then it's like wow i can actually visually now see how these two come together and i think that's one of those kind of i remember me and alessandro we talked about when we run the innovation lab uh very often we worked with very non-technical people right so then we saw the the light go on we call it the graph epiphany when they finally got the kind of graph thinking part right right? And all of a sudden you could just see like, zoop, they got it. And I think that article was that specific time. And then of course, as with everything, of course, now the topic has become super huge. And I think the great part is now looking upon how do we even make the smarter ones, right? By using also those. But I think that was one of the tipping points, one of my favorite ones also that people really liked and also added me. So that was fun. Yeah. Yeah. And I think that's great. I think probably you're right in saying that this is about this time when then everybody, technical developers, engineers were looking into it for a longer time, but also then the more the business people, everybody wanted to do something with the technology. More or less everybody understood that this is a chatbot system. I think we are beyond this now. I mean, we know it can be much more, much more, much more, it can be achieved with using this technology, but I think that's where it all kicked off. So. Yeah. I'm thinking actually, like now when hearing us talking about, but I think that's where it all kicked off. I'm thinking actually, now when hearing us talking about it, I think what was needed for it to actually reach the business people were that they could actually play around. So it's almost like this kind of, a lot of people go, AI revolution. I'm like, it's a fucking, oh, sorry, I cursed. Can I curse in here? I got just excited. I'm sorry. Sorry on the behalf of Neo4j here. No, but what happened is like, you know, you put a chat window on that AI structure. All of a sudden, everybody were allowed. So it's a mass adoption game, right? It's a UX revolution, rather. And because of that, and how bad GPpt 3.5 were early on like it was amazing but it was also not good i think like accuracy was 48 at at max 48 which is every other which is a drunk uncle at midsummer it's just made up right but because of that they could also now really understand that it needed something else to kind of ground itself to actually do that so i think that's also what happening because theoretically they wouldn't have listened i'm pretty sure right it would be very easy a very too complicated conversation but all of a sudden they have real life experience about hallucinations about frustration of not no i didn't tell you to do that go there do you want to go over here no no no it was here it was here right now they had that and then they could actually now see it if i invest in this that can help me ground this and secure this from from these things and which is very cool so it also helped a lot in conversation on you know what type of rag system you would like to build and how to make it better and so on so before it's seemingly impossible to get people interested in this topic but now it's like literally they do understand it or they want to understand it at least and spend time but sorry for interrupting you man i just got all fired up about the topic because i think yeah it is equally important to understand like why we change and why these things happens because that's what i look on the larger scale because then this is where societies are being transformed by technology it's just not a bunch of people playing around with technology because technology is awesome, which is also super cool and fun. But that's not the life-changing experience that we now see. So super cool. Yeah. And then with the title of this session, maybe we divert a little bit more into that direction. We said transforming engineering, construction, and architecture with RAG. I mean, that is, I think, probably more connected to your current role at Torrance. How do you see that? And was this a longer process that already kicked off a while ago or was this something that like really likes speed lanes over the past couple of months when when the technology was like approachable usable and like okay yeah we we need to do this now and this makes so much more sense yeah no it's it's a longer project so i started out that i worked with the innovation team uh and coaching then them on my thoughts. I actually contacted them about another gig, which was outside of the AI thing. But then they asked me, like, can you help us to scope out? How would the AI program look like? And I've worked very closely with Robert, who run Next, which is the next kind of phase of Therians, right? We worked on that. And then we come to the conclusion, we started to see like more and more this space evolved into, you know, okay, we can now actually chat with documents, but we have this kind of problematic part that we cannot share data. I mean, they build infrastructure, which is highly classified, which you cannot put outside on your own service and so on. So there's a lot of security things here. So then very early on, I started to draft this idea of using the graph with my previous knowledge. This is long before the paper from Microsoft that came out, I think, was one of the first ones on the title-linked graph rack. So way before that, I started drawing out things i sculpted out so finding a base version which we are now having in place and then building the kind of graph rack as a complement to that so i had a couple of wild ideas and synced them actually with jesus for the one listening it's another Neo4j person. I think they already know him about all of the amazing series you have done together. So they should know him. If not, go checking out. So I sounded with him and it sounded like a good idea. And then we start pitching that. So that led, then we got financing for the project. I started to build out the team. So it has been a kind of longer more kind of academic application process in in the back end but since i think we started at the top of the year really building things which is fun because i mean you can have a perfect idea on paper, and then you start building, and it's like, oh, yeah, there was a little bit of difference here. And I think what is interesting with Therese is that they have so many departments. It's a large company, right? You have people building bridges, roads, infrastructure. They have environmental people calculating all these things. So there's a lot of interconnected things. And already there you see a graph, right? Because like building a new city, you have roads, you have subways, you have buses, you have waters, you have electricity, ventilation, heating. It's like a hyper level graph already there. So I think the tricky part is like how can we make an architecture which is fast enough and super accurate and can handle contextual information in that sense because very often if if you're building these things you refer to legal and regulations which are you know you have a code for if you're building a road this is how the road should look and then you refer to that code and that code then refers to how another code and so on and so on and so on so there's a lot of linking already on that level as well which is first of what i've like this is 100% a graph this is not just similar information it's hyper connected in its in its. So that's what we are building. So we run parallel to a team. One is building out the actual interface kind of thing, and with a classic kind of, we call it vanilla rag. The credit for this goes to one of the Klarna engineers that stated it. It's a little bit disturbing to use it, but it's also funny. So we use it. We use it with a very much more kind of smile on the face. But yeah. So it's fun, but it has been a longer journey. And I think what also is kind of helpful here, because I think this is also something I do see a lot. Maybe you see it also in your side. We're not super far apart. But I mean, at least here in the Nordics, we have this face of the death of the POC almost. It's like, you know, the board told the executives like enough of your POCs. You need to get something in production and start making money. So this is the year of all hands on deck, all dead POCs goes on the side. Let's just build something, get money and learn by doing. It's not about fiddling around in the basement anymore. It's about getting into production, making money, learning by doing so, which is the kind of cool part. Yeah, and then and you know yeah produce something develop something that makes sense and that is actually generating uh something interesting so um yeah that's that's cool in the beginning you said um security uh you know deterrence is is having all the data not available publicly. And there's also this discussion about using an LLM with your data makes it, you don't want to upload your whole knowledge you have or be accumulated to a public space, which you don't have no control over anymore, the way it goes and what the system does with it. How did you tackle that? Was that a big problem in the beginning to convince people to use that technology also um i mean it's still sensitive so uh for those and i mean uh if you take the large infrastructure and like uh these kind of thing like traffic work at the one doing all the roads and stuff i think we will never be allowed to even put it in the cloud, if I've been honest. Maybe I am exaggerating here. But for those, there will always have to be a special occasion. This would be if you would work with any agency, I guess, the same thing and so on. So for those, always kind of hard. And what we're trying to do is now rather have the mix. So we have the RAC site with some sort of security and so on. But what we're also looking into and I'm looking into is also this kind of on-device smaller model connected internally. So you never actually go outside of your own network, so to say. So you stay within the realm of your firewalls. So I think that's also how these also can be handled going forward. And I mean, this is, I guess, the reason for what people have been ranting about today. I mean, Apple going hand in hand with OpenAI and Elon Musk is ranting about, they're just going to take your data, which is obviously true that they're going to do it. And that's the reason why they're doing it. So, I mean, they have been doing it ever since, I guess. Exactly. But that raises a point, right? And something we also talked about very often, because if you think about security, is it about being secure enough? I cannot access systems, which is how we used to look upon things, right? I have no access to that system. I have access to that system. The problem with this, even putting in a graph, is that you know this for sure but I mean if I have the graph I'm now allowed to traverse this meaning that I can actually understand things that I should not be able to understand and that's why I love the fine grain security thing with the graph because then I can explain it I can make sure to control it on that level that I really need so a lot of that is the important part, sitting down and saying and showing, this is how this is done. And now I can actually show it to people, which is also one thing which I really like about the graph, because it lends itself naturally to talk about data in a way much better than, I mean, you can't really show up like, look at this vector here. It looks so safe. What do you think about it? It just looks like an abstraction. I don't know. We joked about this in another call today. Is there anyone that can read vectors? And we're like, there's this guy on LinkedIn. He's doing all the different architecture by hand. So he's doing computation of graph neural network, but he does it by hand, AI by hand or something it's called. It's super funny. I really recommend you to check it or don't check it because I did, then I ruined the weekend that I should have spent with my girlfriend. I'm sitting there trying to calculate the different stuff by hand. Super nerdy, but a very fun tip. Yeah, that's cool. I mean, I cannot even imagine somebody doing vectors or making sense out of a list of vectors. That is, you know... It doesn't make sense, right? But the graph does, and I think that's the beauty of this. And that you can talk about these highly secure things because it's very easy to say, ah, but you have access to the system. Now you have access to all the answers that can be produced. But this becomes this kind of narrative of implicit explicit, right? One, you have access to, but then you can start inferring,ring ah this together with this actually means this and you don't have access or right to access that part the the answer on it so it's almost on a node node-based security in that sense uh which is at least on a query level so that's one one one of the things i really like it also allows to sit down and talk about these things much more easily. But it was a very hard topic. And I mean, for a lot of these old companies, it's almost like, if you think about it, it's like, you know, it took a very long time for them to ease into digitalization. And maybe they haven't fully emerged already in that. And now we have a second wave coming. And this wave, it's much bigger, right? And the same thing goes. It's like, get your data in order, for God's sake. Don't bring it. I don't want to see a big tank ship with crappy data. Just give me a small set that actually works, that people love to use. Exactly. I think that's a good uh a good thing we always keep saying here and i think that's that's ever since it's neo4j i said okay yeah let's let's let's pick a pick a small small set of data you can oversee you understand what's going on here and then you you you transform that into a graph and see if if that if that makes it easier easier, makes it better for you. Yeah, totally. Are you in the position to share a couple of things? What the transforming or what the engineering, construction, architecture transformation, I guess, in your current role, what does it transform, actually? Yeah, I think we can talk about a couple of ones, some which is somewhat sensitive. But if you think of... We don't have to do those. No, let's not do those, then I get fired. Then I have to come back to Neo. You would regret this. At least someone would. But nevertheless, I think like the classical thing here, like we talked about this, is it's highly regulated what you can do and what you can build, right? So the first part is this extreme Wikipedia lookup, right? Every document connected to every other document in that structure. So meaning you can imagine doing that this is this is a full library so you're building something you have to go and looking on you know tens or a hundred different documents at the same time when writing this manually in by hand and then you repeat that for every single time which is hello it doesn't make any sense right and this is exactly what what will be one of those so building that kind of sidekick i usually make fun of this because we named it sven and it's because of the founder right but i always make this story because my middle name is sven so i make this weird narrative that that I'm a narcissistic idiot. Or maybe I am. Who knows? That's exactly what the narcissistic idiot would say, I think. But nevertheless, I think that first transformative part is that part. The interesting part here is that this is a space. I mean, they sell consultancy by the hour. So it's actually not beneficial to solve it faster so i think that the bigger harder figure here to figure out is they have to go from selling hours into selling value which is a whole other game so to go back this is almost like you know neo4j on-prem you know you sell the large thing you have the capacity you need it so you don't hit the roof but now you go to the cloud which is a completely other game it's much easier to deploy and take it down and so on so that's a whole other value game right so we need to be better at that so i think this is very similar to to that time and i think this is something we see very often repeating so I think that's one of the most obvious ones. Then we have something which is very similar to amazing Neo4j project, which is the KSAT stuff by David Misa. So we're looking upon talents and so on and trending talents, topics, skills, technologies, and these things and how they move over time. So this is the secondary one, which is not super sensitive. I cannot tell you what we're going to do with that, but you can imagine what we're going to do. Literally solving things much faster and much better and helping people to do good things with those. I think those are the very interesting ones. Then we're doing a lot which is very heavy on the gnn side which is predictions of things let's see how i can say this without putting too much information out so imagine you needed to compute something which is super hard like weather or vibrations and things like sound waves that do different things. So doing a lot of modeling on these things using GNNs as well. So that's also pretty neat. It's very little kind of of the LLM hype in that we only use the LLM for that case, actually as an interface. Right. So we have the tools in the background running the computation, but then we just use the language model as some sort of interface, which I think it also does a very, very interesting job in doing, which is very fun also coming from a UX perspective in looking at this. So you can imagine with that also you got the structural engineering. I'm building a house. It needs to hold X floors. It needs to hold X people. And I want it to look like this. How can I now build it in a most safest way that is the most environmental friendly that also looks the way I want it because i mean it's not easy to build something very secure it's just very expensive and it's also very bad for the like environment to add this you know steel concrete and these things that are already very bad on a large scale so if we can take that down we can actually make the world a lot more the way we want it. So I think that's also one of the really big ones. That's a lot of more computational kind of thing in that. But in that case, then we do a lot of the computation with GNNs and stuff. And then we just used our language model as the interface for that. So we have the overarching kind of model, then it has the router, then it goes down to different kind of rag structures with the graphs. And then we have the function and tool calling for the specific tools for these computations and stuff. So actually turning out pretty neat. But then again, I mean, the transformation part here is that we're literally redoing everything that they do in that business, which is going to be very hard for a lot of people, right? So which I think is very kind of interesting in that sense, which also led me to, that I built a couple of my own tools that I use to measure transformation in organizations that I build on Neo, actually, which is kind of fun. Funny, funny enough. Because very often, I mean, we think about change management and transformation as, you know, or a lot of people talk about it or treat it like this. Here's the old, here's the new. You go home on Friday and on Monday we have a new logo and it should be all easy. That's not how it works. We all know this. This is the biggest lie of whatever marketing or whoever branding come up with this idea. That's not how it works, right? I change because you change and i respect you so it's a line of people that influences each other so this is is the classical old way of law of diffusion of innovation is called is the one where the innovators the uh kind of you know yeah that a classical kind of early adapters majority late majority, late majority, laggards and so on. So I built that and there's this great actually YouTube movie that was my inspiration. It's called Leadership Lessons from a Dancing Guy. If you have not seen it, it's like this is everything you need to know about leadership in a very condensed format. So everybody go check that one out. But I think then what I built is literally looking upon this kind of personas within the company so you would have the change makers right you would have maybe a good storyteller somebody that can talk about your very technical product in a way that people without domain knowledge could understand you have the well of knowledge, the person that knows every single thing in the company. You know exactly who I'm talking about now in Neo when I say that name, right? And so on. So we have these personas, right? So what we did, we did a set of town halls, for example, for finding these. So we had open questions. We then narrowed down the behavior types and frustrations and struggles and answers down to these kind of different personas. Then we let people nominate on this. And then we just pull this into a graph. Then we run PageRank on change makers to change makers. So somebody nominated another change maker, right? And then we did a version of article rank and clustering with k-means where we can now start seeing this behavior clusters of things we can now add the other people the traditional hierarchical thing because now we have a graph with everybody that are showing some sort of behavior that we would like and then we add all the people around it, which does not yet show so. But instead of going to those people and saying it, because I don't have any credibility with those, I can identify who sits one degree out and how can I help that person to help that person that can help that person that can help that person. And all of a sudden, we have now a visible way of seeing change in the organization by modeling the organization as a graph rather than the hierarchical tree that we pretend. It's not like I ever went to my boss like, I really want to know how to build this. I better go talk to my boss now. This is not how we do things in Sweden. Definitely not. This is the last person I want to go to. definitely not this is the last person i want to go don't do this don't do that or she right so i think that's also a very cool use case that we actually did on on kind of understanding uh inofficial leaders within organizations because i think that's also something that it's so cool all of a sudden this is super easy it's super simple you see it and you spin this up in i mean it's a matter of hours of a sudden, this is super easy, super simple. You see it, and you just spin this up, and I mean, it's a matter of hours. I mean, the first prototype I did was on the airport bus in South Korea, so from Incheon to Shennan, which is two and a half hours. In that time, I imported all data, spin out the graph, run all the algorithms, and I had my first draft coming from. It's like, how can this even be? It's so easy to just try it. So I think that's, which is also the reason why I ended up in you in the first place, is that easiness of putting it in, seeing doesn't make any sense. It does. And then putting it back out there. So yeah, now I ran that for a very long time. All the questions we're supposed to talk about, but sorry, man. Yeah, no, that's all right. But I think that's a great point you made there i think in in especially with with this transitional period we are in now with new new technology new new ways of working new ways of you know dealing with data or tackling data or or overcoming data challenges you have and and then this is it tends to become quickly us against them kind of like old traditional people to to new technology or something and these are all it it's never really meant to be that way it never has to be like this hard hardcore confrontation it is it is obviously there is there is some some of that around but you you know people can be convinced i think you just talk to them in the right way and if you find the right level and the right approach if you say hey this is for tomorrow we're doing it this way and you don't have to have any say in it well that's going to be a tough sell but this goes back to that book you know Connected by James Fowler right so all of the data pointing towards me says less than the data from my network so if i can understand who is around me and how they will behave that will tell more about any of the data points about me which is exactly this right so by identifying the the people one uh i don't think in that book let's see now i think it's 1.7 degrees out or something so almost two degrees out or two persons out two hops out uh i think that's uh where the biggest impact is and i think i mean if we can already see this we can actually help people helping other people and i think this is this is how normal this is how society has changed we haven't changed in that kind of binary from here this this doesn't work like that that's friction right there so and i and i don't want to be that person so it's a rather interesting way of applying that knowledge uh the irony is then i'm i'm here building a system to build a knowledge system, but then building another system using the graph to understand how does the knowledge system impact the people graph, which is a very cool kind of double kind of use case. But I think, which I 100% going to give credit to Neo. I would never, ever thought about that idea if it wasn't for my time at Neo, because all of a sudden it's like, 100%, this is a connected problem. It has nothing to do that they're in department X and department X as X percent, which is true. Engineering, they're more like this. And, you know, architects, they were more like that. And it's like, yes, but if I can see it on an individual level, I can actually make sure to help the people which is exactly what we want and which is a core human need i mean we want to be here heard we want to be seen and and if that happens we we should be good yeah yeah absolutely that's great and uh i think with with that, it makes a lot of sense to look into that and to try it out and to discuss and find a new way of working into hoping that can then be easier. Do you have anything else you wanted to share about this transformation process in this current work you're doing with Viterians? I think, I don't know, whatever it could be. I mean, we are, for next year, looking for talented people. So I do a shout out for that. If you love AI and graphs, hit me up on LinkedIn. Let's see. AI and graphs, hit me up on LinkedIn. Let's see, maybe we come back here in six months and tell Alexander about the awesome things we have built and do a demo of it. So that's how I think about the transformation. I think that's pretty much it. And I think that it all comes down to this classical kind of thinking. And I think one way that at least how i look upon these things is this idea of you know we often look upon the bigger transformation if we say so uh it's like oh now we have another technology now we need to transform but the important part is i would argue how does that new way of technology and rewrites us so that's one one way and i usually use this analogy of different paradigms, right? So first paradigm would be when we would start writing things down. You know, what's the benefit of this? All of a sudden I know, Alex, you owe me four beers. I can go back and I can claim it, right? And I mean, I usually use this old example of the Uruk society. They have some beer tablet, right? The only problem is it's on clay and it's hieroglyphs. So it's good to store a clay tablet next to beer. No, it's not very good because water and clay, they're not best friends, right? But this is equally true today, right? For the value to be true, you need to write it in a language you understand. You need to store it in a way so it's accessible over time. And I mean, look at COBOL if you're in fintech, whatever. You can't even find your grandpa's developer almost. It's like such an old technology in that sense. So a lot of the cool fintech, they're already backed on this old clay tablet technology with the syntax that looks like i i don't want don't even want to say it but i think that's the first paradigm then we come to the second one right which is the printing press this one would be then you know all of a sudden you can distribute things very easily to a lot of people right so we can take uh luther and the gutenberg bible classic example so for this to be true we have to understand what the previous one did they could write it down in fancy latin on parchment roll they ride on a horse then distributed that to every city took forever right but it was true it took forever what did he do he wrote on plain folksy german that everyone could understand and he made small printed pamphlets that he put on every single door all of a sudden the conversion it's like boom so you can overlay a heat map of printing press versus that type of Christianity in Germany and they 100% align which is super fun. What we have learned from this is then if we are to communicating to one to many meaning unknown people we need to adapt the message accordingly right so in that sense it all makes sense we need to make it available and then readable for the many so we cannot have a complicated Latin system we need to have a language which we can understand. So a simplification of language and data structures equal that. And then we come to the next one, which is of course the kind of graph paradigm, right? So I think that's how I look for those kind of things. Because all of a sudden, when you start looking at this you start understanding that you're not going to have any success with your language model chatbot whatever project if you don't get your basic right do you have readable clean accessible understandable you have relationship you have your feature engineer you have a process to support all of this if not go fix that So you could say that the value of your AI initiative is directly proportional to the quality of your data or your data quality process. But very often, this we don't talk about. We just want to have the latest, latest. Oh, it's boring. Yeah, because the other one is work, right? Yeah. And the other one is work right yeah and the other one is working yes yeah so i usually use this when i talk to boards because everybody's like oh this is fine let's do it so then i i talk through this when i when i meet new companies and i always use this analogy of this i don't know what the the meme is called but it's this uh dog in the house with this house which has the fire around it. And then he says, this is fine. This is the only board that I ever met. It's like, this is fine. It's all there somewhere. I'm like, no, it's not. Because I think this is the real problem. Because then what you need to do is then say, should we clean this? Should we re-engineer our process? Or should we just draw a Should we re-engineer our process? Or should we just draw a line and start building the new? I have no answer on this. And this is a tough call because if change would be easy, that would be leave on Friday, come on Monday, and it's all perfect. There is no perfect answer. And everybody that says so,'re lying but i think thinking more about data quality and finding whatever from every single paradigm in there because without those you're not going to have any value with any of your ai project ever it's just as simple as that that's a good good shout out cool um maybe we spent a couple of minutes on questions. A couple came in from the audience. So I'd like to read out a few to you and then we can obviously, you can take a stab at them. I'd like to start with Nani Sinek. They ask, do you have suggestions what to use to model a graph schema visually and use it as input for your ETL pipelines to create graph and populate your graph with data. I don't know if this goes a little bit too much into data. I started reading another question which I thought it was so I didn't listen. Can we do the question again? I was trying to find it. I heard the name and then I started reading that, but I think he has been doing a lot of them. Yeah, here it is. A little bit further up. Suggestions on what to use to model graph schema visually Yeah, here it is. A little bit further up. Suggestions on what to use to model graph schema visually. And then, I guess to use input tool for your ETL pipelines is maybe a little bit. I can talk a little bit about that. But do you work with any visualization tools? Is that anything you... No, I mean, i was brought up the hard way with the whiteboard uh that that's how i learned doing this and then i just converted but i mean looking on the stuff that are available now in the aura suite you know where you just draw out your graph and then load your stuff in there i think that's one very easy way to do it there's also a very interesting talk i think on what is it now two days on first day uh with with thomas and lama index so there's a lot of these kind of neat ways to auto generate the graph so that's a lot of what we are exploring and i think lot of what we are exploring and i think the best recommendation is for me to to find try a bunch of stuff see which one works for your team and your workflow and then don't try to find a silver bullet i haven't yet not seen any super silver bullet that handle everything and there's so much happening in this space so i would just go for whatever get the job done and make sense for you and your team to talk about these things but i use 90 percent the stuff in aura but i tend to do a lot on my desktop and browser still yeah the hard part is i think it's it's what you learned and then that's what you're going to be doing yeah and so yeah no but they are fairly similar you can you can use them interchangeably i think there's also arrows um which if you just want to if yeah if you want to just draw out the graph and then play around with it without actually loading the data in the first place but you just want to see visually how does it look like and again to what we talked a little bit about earlier when you start with a smaller data set you get you can you can model that out easily and then draw it on a on a even on a whiteboard uh or use use that that arrows and and get started from there and then you know like like you know stefan said take take take what feels natural to you take take what works and and what you what you feel gives you a good good uh good impression and then take what works and and what you what you feel gives you a good good uh good impression and then elaborate on that and and develop um on on that um knowledge basically yeah no totally the another question comes from eric um basically confirmation of if you heard the use case for for graph right correctly. I mean, the ones we're talking about now, I think there's a lot more beyond this session, but I mean, a couple. And then he continues, one is Sidekick for crawling a wiki-like data store. The other is an HR network of behaviors, question mark. And three, creating an interface for tools that model buildings. Yeah, something like that. And I mean, the whole thing is like, the way we look upon it is, or I look upon it is like, I see a lot of the language model as a replacement of an interface for doing other things, right? So that's kind of one thing on the front. And then we're using the graph and the knowledge graph, per se, to handle the very highly connected data within those regulations in the back, right? So we want to be able to traverse this, you know, sometimes even looking on. And that's what we're experimenting a lot. How can we create the feature engineering on those? So I think that's one part. And then using to call, function calling or tool calling to call and do computation elsewhere, and then pull it back in again to the user interface. So a simple user, flexible user interface, because we don't need to have the kind of airplane cockpit, right? And then we actually have the airplane cockpit behind. It's just like, pick, put in whatever you want, outcomes, everything. So we handle all shit in the back end, just as front end should be. And it's like this old saying, I guess, the less design you see, the more design it is. And this is equally true now in this phase, I think. It looks very simple in the front, but what goes on is the combination of them in the back, combining different type of rag, different type of retrieval systems with different type of tools and then created this kind of agentic workflows. You get the perfect answer coming out to you as an individual with references and all parts. But at the end, I mean, that's a bit. So I think it's a good part. And what is interesting is that all of these use cases kind of intertwine in each other. And what they do is then you start to build this knowledge infrastructure because that's what I'm building, right? It's not that I'm building a system here. It's a knowledge infrastructure because then you can use it to different things. You don't have to rebuild it. It's rather the brain that then can be solving all of these other things. So that's how I see it. So I don't look so much on a vertical kind of use case kind of thing. I know that this is how a lot of people start doing it, but I'm trying to kind of use case kind of thing i know that this is how a lot of people start doing it but i'm trying to kind of think about the bigger kind of underlying structure and then take it from there because i know there will be connected use cases everywhere anyhow yeah um another question from alejandro maybe i think you you talked a little bit about this using different rack systems and um did you when when you were driving this did you did you compare different maybe graph rock versus something else or tradition traditional rack is not even is that the thing now i don't know you know we call it vanilla rack no i'm sorry it's super funny to use though. No, but what we did is, I mean, just doing a regular rack, this is like, or naive rack, that's out of the question very often. So the one we're running now is a combination. So for the simple system, we're running like a vector store. Then we're using hide. What is that? Hypothetical document embedding. Yeah. And then we use LLM re-rank. So we actually come up with similar, just like saying to the model, like, okay, imagine that you can answer this question, even if it doesn't know, then take that answer, embed that answer in, retrieve all the information that we now get back because the problem often is that your question are necessarily not similar to the answer so this now it makes up that so we have more retrieval power coming back and then we re-rank that again and therefore we have a much more accuracy so that's our simple version so we did experiment a lot my colleagues also wrote a paper on this which is called aragog the first one is out we can maybe share that in the comments later on on linkedin or something we are now writing the second paper which is on advanced generation of graphs and then retrieval of information from them. So that's the second paper. So I harass everyone that has been doing any work. I pinged the Klarna people, I pinged some old colleagues in Neo, but we are in the first part of that. So we now doing, because the tricky part here is also figuring out the cost layer, right? It's not super hard to do something that is super efficient. If this should be allowed for 10,000 people to use every single minute, it cannot be very expensive. That's going to be, then you get fired quickly. So it's about figuring out, you know, cost versus efficiency balance here. And for which use case needs what. Of course price will drop, but for the meanwhile it's about experiment. But I'm happy to share the papers and how we come to the conclusion. As I said, we use the VectorStore with Hyde and LLM Rerank. That's the one we did. Seems to be working for us very much. But happy to share anything about those if anyone is interested. Yeah, send me those links afterwards and I put them in the YouTube and LinkedIn video description and then you can check it out. All right, maybe one more from Katie. She asks, do you believe knowledge graphs as a business's digital twin will disrupt the traditional monolithic functional landscape will traditional systems like erps have a place in our organization's future landscape i mean uh so will traditional system have any place i mean a lot of this is changing. And if we look upon it the other way, rather, so I usually use this other analogy of this, right? Because as long as we get the data to work, then it works, right? That's the value. So the more we can get it to work, the better it is. So that's why if we have something that always changing all over and all over, is it better to wait and build a perfect structure for something that keeps moving? When do you start building that perfect structure? It will not happen, right? So the problem is that it's always moving. Therefore, I think you need to put your eggs in the basket with more flexibility. That's at least how I do, because then the thing is here that we need to equally rebuild the organization. It is not to build a perfect system. The perfect system without users has zero value. So it's, which is also how I think about this modern type of AI system, right? So I usually say like, imagine having an office. office okay we buy an office it's full of people none of them doing any work does this equal value no that's very expensive shit the same goes for data you put lots of data in a perfect system that is very beautiful if it's not being used it has no value it's a potential value that can be realized. So people and data at work is where you have the value. However, people in synergy also have a greater value and equally so does data. And that is then the argument for the knowledge graph system is then the connectedness of things, right? The ability to connect, the ability to create this agentic workflows and architectures and the fun part here is that they start to look more and more as to how we treat humans and teams in that abstract world so i don't know will it replace it or not i i the only sane answer is we will live and see uh i mean but i will put my bucket in a system that allows me to flexibly, flexibly adapt over time because it's equally important to upskill the team. So they don't have a freaking heart attack when it's time to change. So it's rather of nudging, right? That's how we change this. We try a little with, with somebody I like, how did you love to start loving cilantro why did it all of a sudden stop tasting like soap because you tried with a bunch of friends when you went to New York and you had the tacos at that Mexican restaurant it was pretty cool and you were also a little bit drunk and that it helped out and now you love it right that's how it goes so that's how i think about it i have no right answer on that one but no it's it's it's it's tricky obviously to say but i i mean i think traditional systems had a place for for for a reason uh in the past decades uh as as a pillar of of of of of many many architectures that have many many big companies and enterprises use that so i'm still using it so i think i don't i don't necessarily think there will be it will be a complete disruption and a complete replacement or rot everything out and put put the new stuff in but i think it has to be like you said flexible has to be adopting it has to it has to make way for for new technology and knowledge graphs and digital twins can be exactly that. And if traditional systems can be the baseline but you already have and then you add to it and you make it more flexible and enriched it, you make it more approachable, you open it up basically these sometimes a little bit old, a little bit rigid systems, you open it up to this new technology, new ways of working with the data. I think that's what you all want to achieve. Let's look back on the example I did. We still write things down. It's equally important to write it down in a language we understand. Otherwise, it has no value. The sharing value goes away. So all of these paradigms goes hand in hand. This is that this idea of it's a binary change is a false dichotomy. There will always be an infused version of these. So parts of it will be there. But for the connected part, I mean, it's obvious that it's a graph. I mean, you see it everywhere. And you wouldn't be in this call if you didn't believe it. So let's go for it. Should we do another one or is it timeout? We are at the top of the hour. How fast are we? We are at the top of the hour. It does go quick. Maybe we do one more quickly from Nani Sinek again. Do we have suggestions on which LLM transformer to use for reliable cipher generation? I experienced a lot of mistakes with LLMs writing the right cipher query. Did you ever try that? Oh, yeah, we're doing a lot on this. I think the tricky part with this is to put it in production. I mean, there's one thing to have a sidekick that I built my own GPT, for example, that I run helping me when I write things. I forget to just fact check. I mean, you could do a lot with adding critic layers and so on to get them to be more accurate. I think that's good. The tricky part with the generation part is that this model spaces moves so fast. So when I put something in production and open AI, for example, then go change the model. It doesn't work all of a sudden. For example, Thomas had this example when he built something that worked perfectly fine, and I'm like, it doesn't work. It's broken. It works. And then he checks, it's broken because somebody changed the model in the backend. So now it cannot generate again. So I think that's a tricky part. So I have no real good answer. So on an individual level, just build whatever you do and then use that individual knowledge rather to scale that up. For general things, generating graphs and stuff, you can do it. I mean, the tricky part here again, it's not to find a very expensive flow that can do a good job. The very, very hard part here is, can you make it so that you can use, for example, 3.5 if you're using the OpenAI suite and still get accuracy? Because at the end, it's going to be a cost thing. But what is actually funny on this, I did one of my favorite teaching for Hyper Island. So I usually complain that I'm not doing any technology parts or very little of actual coding. So this year, I decided, or last year, I decided to build a GNN using Neo4j and do a recommendation engine on Spotify, which was fun. I had 50 students, I have four days, and I have two persons that know how to write SQL. You can imagine I was sweating. I'm like, it should technically work out. So I prepared a fake data set that I also messed up. So they had to clean it. They had to normalize it and do all sorts of things. But then they could work with this together. And the fun part, that time, it was trained on an old version of Neo. They were building on a newer version. So they had to go from the documentation, get the syntax, put it in. This is a new syntax. So they had to do the memory prompting, right? So I think instead of just going A to B, find a workflow that works, right? Instead of saying, can I have a model that works? No. Find a workflow that actually works. So maybe you have you add the documentation you add these type of things you see that it's do a lot of errors you go in and specifically uh correct those so that's how i do it and and then you also learn i noticed very quickly what has changed between the versions and why does it do it oh because that's how we did it before which was equally one of my problem coming from neo learning being a little bit too curious poking on everything in the alpha tier oh how does it work how does it work having no idea how to use it honestly and then you know i'm trying to use that workflow now and obviously that's changed since long gone and i keep there's a specific error that I do every single time. And people are like, shouldn't you know this? You did this error already a year ago. I'm like, it's hard coded in my database. I can't get it out. So I think just figure out the workflow rather than finding a specific model for the thing. And then see what you can learn from that. Instead, I think that's much more interesting. But it's a good question. And we're going to see much more on that, I think. Super. Cool. I think we are over time already. So I think we have to wrap it up now. If people want to get in touch with you, then your LinkedIn link is in the video description on YouTube and in LinkedIn as well. So I guess reach out to Stefan for more discussions and more insights. Thank you very much for spending the time with us today, everybody. Thank you for, for, for watching and listening in today. Thank you for your questions. Sorry, we couldn't answer all the questions today. And obviously thank you very much, Stefan, for, for spending the time and, and, and the, oh yeah, cool. This is the link. So I'm, I'm going to add the link into the chat right now for uh the paper that stefan mentioned and i'll add it to the to the other places so um yeah thank you very much stefan um thank you guys if you want to um watch watch more then tomorrow actually we're gonna do our next session and we'll have a life science workshop that's happening tomorrow afternoon Central European time, morning time North America and evening time Asia Pacific. We have a couple of interesting guests from University of Heidelberg, John Snow Labs, KIA Gen and other, AstraZeneca for example. So check that out. That's going to take place tomorrow. Obviously, Nodes is coming up. So the COFO paper closes this weekend. So if you are interested in submitting a session, then please do that. And, you know, we have only a couple more days left, so do it now. There's a lot to fill in. It's very hard. We will submit the GraphRag paper we're writing, so it will be coming, but it's going to be on the very last minute. Yeah, that's fine. Yeah. Cool. That's great. I look forward to your submissions and when everybody else is invited and encouraged to submit. If you want to know more, what's up, what's next, how to proceed, the Graph community is very friendly on the forum and on Discord. Graph Academy is the place for you to learn and start with fundamentals as well as an LLM fundamentals training and other how to work with vectors to stay with the topic on today, but also more cipher data modeling training. So they're all free online. can take that um anytime you want and if you want to play with the product aura free is the place to go um if you just scan this qr code i guess then you can get to our developer landing page and from there you find everything else and yeah with that uh we are we're done for today thank you very much uh everybody for | Neo4j Live: Transforming Engineering, Construction and Architecture with GraphRAG | 3,983 | Neo4j | 20240612 | Discover how to develop a comprehensive knowledge structure using a multi-expert/agent system equipped with advanced tools. Learn about taking an innovative approach, organized within Retrieval-Augmented Generation (RAG) structures, and see how GraphRAG is revolutionizing the fields of engineering, construction, and architecture.
Guest: Stefan Wendin, Tyrens
https://www.linkedin.com/in/stefanwendin/
AI Sweden: https://www.ai.se/en
AI in Business: https://www.linkedin.com/pulse/future-ai-business-unifying-knowledge-graphs-large-language-wendin-fof3f/
ARAGOG: Advanced RAG Output Grading: https://arxiv.org/abs/2404.01037
0:00 Intro
8:42 GraphRAG in Engineering and Construction
14:42 Detailed Use Cases of GraphRAG
27:46 Transforming Business with Graphs and AI
45:22 Q&A
1:04:44 WrapUp
#neo4j #graphdatabase #rag #graphrag #construction #architecture #engineering | 2024-09-20T20:07:49.028371 |
https://www.youtube.com/watch?v=lH75WJgLeoo | you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you you Takk for ating med. Good morning everyone. People hearing me all right? Excellent. um Good, looks like it's working both on Zoom and on YouTube. Let's hope it keeps working this way. And how's the video quality? I know it's pretty decent on YouTube at least. Where I am? I'm at home. Being the geek that I am, I've pretty much been making a streaming setup that might make Twitch streamers envious. So we are talking multiple camera angles here, slide sharing, screen sharing. Let's hope it all works. All right, this camera is out of focus hello That's more like it. I'm trying, I'm trying. I'd say this has been a great opportunity to learn OBS Studio, YouTube live streaming, lots of different things and be able to call it work. Morning. Takk for ating med. Terima kasih telah menonton! For those on YouTube asking, the Zoom session is only for the official course participants at KU. Both of course to give priority to the people who've actually signed up for the course but also to avoid risking running into the limit of having at most 100 participants. Maybe I could do a tutorial on this streaming live on YouTube and simultaneously teaching over Zoom. Well I can do a tutorial on that if it turns out to work. Let's hope it does. It's not too complicated really, it's just a matter of recording everything into OBS Studio which is open source software used by lots of streamers. Then in that one you set up a virtual camera and then you're setting that camera to be your input camera for soon. And inside OBS you can record locally and you can stream to YouTube and other streaming services. There's a bit of software to set up, other than that it's not really too hard. Yeah, thanks Klaus. Hope things have been running smoothly the past couple of days. Also one small thing, because I'm at home and I don't have a dual screen setup. Once I start streaming the slides, I cannot see the Zoom chat. So if you're asking questions there, I unfortunately can't see them maybe I should get them up. Yeah, thanks, Nadja. So Nadja is a postdoc for my group. She's one of the several people who will be helping out today, especially when it comes to the exercises. Good. I see Katerina here as here as well also postdoc in my group so we have a number of people here who can help you with the exercises later and possibly also try to fill in in the chat while I'm giving the presentation because this is quite a bit of multitasking let me just fire up the slide deck here. Okay, I seem to have a bit unstable internet connection, which is scary. I'm seeing OPS connecting and disconnecting, reconnecting to YouTube Live. Nothing I can do about that unfortunately. All right, I guess it's 8.15 and we should get started. So first of all, welcome everyone to this day on Network Biology. The overall plan for the day is that there will be lectures and a few exercises in the morning, also some software demo in the afternoon. We'll really be diving into all the Cytoscape exercises that is much more hands on and interactive. So the morning session, I'll be streaming all of that, not just to Zoom, but also to YouTube as a live stream. I hope to make everything available to people also afterwards, since there are people in other time zones who are interested. Obviously, I doubt people from the US have dialed in at this hour. So the topic of today is network biology and really why do I care about network biology? What is network biology? Very quick introduction here. I'm a group leader at the Novo Nordisk Foundation Center for Protein Research. So that's a center at the University of Copenhagen with funding from the Novo Nordisk Foundation. And for that reason, we're very interested in general in proteins from all kinds of angles, including looking at protein interaction networks. I should also mention that I'm a co-founder of a company called Intomics. I don't say that to plug the company. I say that because some funding agencies think that this might be somehow a conflict of interest. So now you know it. I'm one of the founders, owners, and advisors of Intomics. Both in the academic setting and in industry, we're dealing with a lot of omics data. And generally what happens is that when people do an omics study, what comes out of it is a lot of molecular players. And what you want to understand afterwards typically is how these molecular players interact with each other, understand their interplay. And that's really the core of network biology to me. Using biological networks to understand how things work together inside the cell. And for that networks is a really useful abstraction and it's also a useful abstraction that really lends itself to visualization. Now I'll start out today with a bit of core concepts just to make sure everyone is on the same page and those are sort of the core concepts of working with networks just some terminology and a little bit of background that I want to get out of the way before we dive into the more biologically relevant parts. So when you're talking about networks there are two things you need to know. One is what are nodes, and the nodes in the network, also sometimes called vertices, are the things that are to be connected to each other in the network. That could be proteins, of course, when we're interested in protein interaction networks. It could be diseases, if you want to know how genes and diseases work together, or which diseases show comorbidity. Then you have the other half of the network, so to speak, which are the edges. And the edges is what connects the nodes. So those are the connections between your things in your network, be it proteins or diseases or something else. These edges, there can be several types. They're so-called undirected edges. An undirected edge means that there's no difference between having the edge AB and the edge BA. So for example, A and B binding to each other, that's an undirected edge. The opposite, of course, are directed edges where the direction matters. AB is B is not the same as B, A, and that's what you would often have in things like signal transduction pathways, where obviously A doing something to B like A activating B is not the same as B activating A. Then something we'll work a lot with today is so-called weighted graphs or weighted networks. That means that not all edges are considered equal. By that I mean instead of just there being an edge or not being an edge we have a probability or weight attached to the edge that somehow quantifies how sure we are that there actually is a link between these two things. When people talk about networks you will find that a lot of the literature is talking about things like network topology, so sort of the structure of the networks, talking about things like robustness of networks, whether networks fall apart, how you can use the structure of the network to infer which nodes are the most important nodes etc. And there's a lot of terms there. People talk a lot about network degree and the degree is simply how many connections an edge has. So if something interacts with five other things it has a degree of five. You have methods like centrality. There are a number of different centrality measures. I'm not going to bore you with all of them, but it's again somehow a combination of either how connected the node is or how important the node is to keeping the whole network together. So that can be degree centrality. You can also have closeness centrality, many different measures. Another thing is clustering coefficients, which is sort of if you're looking at a certain node and you looked at its interaction partners, how connected are those to each other. So that gives you an idea of is this node part of a closely connected part within the network, which is relevant later as the name implies when you're doing clustering. So we will be using clustering, but we won't really be talking about clustering coefficients today. People talk a lot about robustness that links with the whole centrality idea. The idea being that if you remove certain nodes from the network, does the network fall apart? I think one important thing to think about here is that whether it makes any sense to talk about robustness or not depends very much on which kind of network you're dealing with. If you're dealing with a physical protein-protein interaction network, it's not clear at all what it means to the cell that things are connected, that there is a path connecting A to B. It is not clear that it is in any way relevant to the cell if the network falls apart, so talking about robustness of such networks is not a terribly meaningful thing to do in my opinion. However for some other networks like a signal transduction pathway it obviously makes sense if you break the signal transduction chain, then you've broken the signaling. So what we'll focus on today is mostly protein networks. And when it comes to protein networks, what people mostly talk about are physical interactions. We do have those in our networks, as I'll get to, but we are not limiting ourselves to physical interactions consisting of A binding to B forming a complex. What instead we are looking into is looking also at functional associations more broadly, so capturing which things work together. So we're trying to link proteins to other proteins if they somehow work together. Now the name of the game here is obviously guilt by association. How can you find out whether two things work together? And for that, I find it illustrative to look at my favorite guilt by association network ever. And this, as you can clearly read on this slide, the nodes are not proteins. They're people. And this is a network that is based on emails. So it's an email network where we're linking people to other people based on who's sending email to whom. And what makes this particular network funny, to me at least, is that it's not just some random email network. It's an email network that was built based on the email sent during the last couple of weeks in the company Enron before the company went bankrupt in an enormous scandal some years ago. And you can read many fun things out of it. If you look at the lower left side of the network, you'll see a bunch of people who are not mentioned. Some of those people were people outside the company who magically managed to sell their stocks in time and I believe to have read that some of those people actually went to jail for insider trading. Just above them you see the whole board of directors who actually compared to many other people were sending remarkably few emails while the entire company was collapsing around them. And if you followed the news back then when Enron went bankrupt, you will know that those people were in fact more busy playing golf than they were managing the company. So they were out on the golf course improving their golf handicap while at the same time the company was crashing around them. But enough about Enron, enough about email networks. Let's get to the core of this, the String database. So the String database is a big database that I'm heavily and my group is heavily involved in developing, together with the group of Christian von Meering at the University of Zurich and Pierre Borg at EMBL in Heidelberg. So the String database is a protein interaction database. It's a database of functional associations. And at the heart of it, you can go to the website stringdb.org. And there you can look up any protein of interest. And you can then find interactions for those proteins. And that can, of course, give you a hint of what a protein might be doing. So if you look up some protein where you have no idea what it's doing, you can see which other proteins it's likely to work together with, and based on that, try to make some qualified guesses as to what your protein might be doing, and go out and validate them in the lab, hopefully. What you can also use it for is to come with not one protein of interest, but a whole long list of proteins from an omics study, query for a network, and that way find out how the proteins that showed up as significantly regulated in your study behave in terms of being functionally associated. Now, the starting point for making the string database is a collection of some 5,090 genomes in the latest version, and these encode a total of 24.6 million proteins and the goal is of course to link all of those to each other with functional associations. So string is a heavily used resource and this is sort of the bragging slide from Google Analytics. We're looking at more than 30,000 users on a typical week and as you can see it's growing over time and the other thing you can definitely see out of this slide is that if nothing else string is very good at detecting when it's Christmas. So at least scientists do tend to take a break over Christmas. We'd like to think of course that the reason why people use string so much is that string works well. The problem with making that statement of course is that when it comes to bioinformatics tools and I guess many other aspects of science, everything works well according to the authors. So you're dealing here with it's so easy to fake your benchmarks intentionally or unintentionally and make yourself look good. So why would you trust my benchmarks of string? And you shouldn't. Thankfully you don't have to because there are independent benchmarks out there. That's one of the advantages of being a heavily used resource. other people actually compare things to your resource. So this is a graph from a fairly recent paper from the group of Triideca, one of the main groups behind the whole Cytoscape tool that we'll look at later. And they were working on building a protein interaction resource putting together lots of different existing resources including string and as part of that they benchmarked how good these different networks were for identifying disease genes both looking at genome-wide association studies and looking at literature-based gene sets and according to both benchmarks String came out as being the best performing network of the many many networks in this graph. And part of that of course is String is great, at least I think so. Another big part of it is of course many of these databases are very limited compared to String because they're focused on physical interaction networks. And if what you're interested in is to find disease genes, limiting yourself to only use physical protein interactions is really going to harm you. So, if you believe me and Troy Heidecker that the string network seems to work really well, the next obvious question is, how did we pull this off? How did we manage to make a network that is this good? And the name of the game here is really data integration. If you want a good network that performs well in these kinds of benchmarks, you need as good coverage as possible of proteins. You need as good coverage as possible in terms of which ones work together. And the first kind of data we integrate is what is called genomic context. And genomic context, that's a whole class of methods that can be used to infer functional associations based on just having a set of genomes. So the easiest to understand of these is the gene fusion method. The idea here is, imagine you have two genes, look at the top row, the red and the yellow gene, and as indicated by the broken line, these are different genes sitting in different places in the genome. If it's a eukaryotic genome with multiple chromosomes, they could even sit on different chromosomes. However, if you go look at the orthologs of these genes, if you do sequence similarity searches and try to identify the likely orthologs in other organisms, you find that in some organisms, the second and the fourth, these two protein coding genes have been fused into a single large protein coding gene that encodes a fusion protein. Now if you think about this for a second from point of view of a cell, would it make any sense to take two proteins that have nothing to do with each other whatsoever and covalently link them together, making them one big protein? And the answer to that is, of course, no, it wouldn't. Why would you link two unrelated proteins? That obviously means that since some organisms actually took these proteins and covalently linked them to make them one big protein, that's a pretty strong hint that these proteins are doing something related. Also in the organisms where they haven't been fused. So that way by looking at evolution, looking at how these genes are organized, in this case fused, in other genomes, we can make inferences about functional associations in our genome of interest. Another example is gene neighborhood, so looking at which genes sit next to each other. And the simplest case of this, of course, is operands. So if you're looking in particular bacterial genomes, you have operands, where several genes sit together as a cluster, which is transcribed as a polycistronic transcript that encodes multiple different proteins. And generally those genes that are transcribed together in a single operon are of course functionally associated because they're always expressed at the same time and therefore needed at the same time. So typically they encode for example different enzymes involved in the same metabolic pathway. Now the problem is if you're looking at just one genome it's not that easy to infer operands because every gene has to sit next to something. In fact it has to sit next to something on both sides and when two genes sit next to each other they have a 50-50 chance of pointing in the same direction. So just having two genes sitting next to each other being transcribed the same direction is not much of a hint that these two are likely to be in an uproom together. However, again, if you use the power of evolution and you look across 5,000 different genomes, you can look at this and say, is it evolutionarily conserved that these genes sit together in what looks like a somewhat conserved operon? And the reason why this works is that if you're looking at something like genomes, then on the timescale of about 100 million years, genes get shuffled around. If you have two genes sitting randomly next to each other not being functionally associated, not being transcribed as a single operon, chances are that if you look in another genome that is a hundred million years away they're not sitting next to each other. So for that reason when you have a big span of organisms over a very long time scale you can make inferences about functional associations. The last method we have in string of the genomic context methods is so-called phylogenetic profiles. This is the hardest one to understand, but it is also by far the most powerful of these methods. The idea here is that you're looking at presence-absence patterns of genes. So here you have three genes, the red, the yellow and the green, and it's a toy example. On the left side, you have a species tree. So you see how these different species are related to each other. And you see that when you look at these genes, the presence absence pattern of these in this toy example is identical. And you also see that like if you look at the top two genomes, one has all three genes, the other has neither. So it doesn't follow the tree. It's not like one of these cases where say gamma protobacteria have these genes and nobody else. You have close neighbors where one species has it, another doesn't. In another part of the tree, one has it, its neighbor doesn't. And to explain that kind of pattern, you need a lot of joint gain and loss events of these genes, which of course is exceptionally unlikely to happen by random chance. So what you do is that you say, well, since it's unlikely to happen by random chance, it presumably didn't happen by random chance, it happened for a reason. And the reason is that these genes are somehow involved in carrying out a common function. If you have all of these genes, you're able to do that function, whatever it is. If you were to lose one of these genes, you're no longer able to carry out that function, at which point you have no evolutionary pressure to retain the other genes, and you're therefore likely to just use them pretty quickly. In the real world, the tree, of course, is not this small. You're looking at 5,000 genomes. That's what gives you the statistical power. On the other hand, the pattern matching is, of course, not this perfect. It's not a perfect match in terms of presence and absence. But this is the idea. The idea is that if genes are systematically present in the same subset of genomes, and those genomes are not closely related to each other, then you likely have some set of genes that are needed together for some function that those organisms are able to do. Of course, you only get so far by only looking at genomes. So if you want a good network, in particular, a good network for looking at something like high-agriculocaries, in particular human, you need to pull in other experimental data than just the genomes. That could be things like gene co-expression. I'm not going to spend time talking about that. But gene co-expression, you could argue, works a little bit like the phylogenetic profiles. In the phylogenetic profiles, we're looking at presence-absence across different genomes. In gene expression, we can look at presence-absence of genes across many, many, many different conditions. So based on RNA-seq experiments or older microarray experiments, we can look at which genes go up and down together across thousands or tens of thousands of different experimental conditions. And if genes systematically go up and down together like that, it's a pretty strong hint that they might be doing something together because they seem to be needed under the same conditions. Another thing that we focus a lot on is physical protein interactions. So as I mentioned earlier, lots of networks are based purely on these. For us, it's just one of many evidence channels. So the physical interactions can come from a wide variety of different screening technologies. I'm not going to go through all of them. There are many out there, but that's not really the topic for a bioinformatics course. I'll just illustrate one of them because we need an example for later in the presentation. And this is the example of a tandem affinity purification followed by a mass spec experiment. And the idea here is pretty simple. So imagine you have a protein of interest. You put a tag on that, which effectively is a molecular handle. You're now able to do an experiment in which you grab that handle, pull down that protein. And when you pull down that protein, it comes down together with whatever else is stuck to it. Now, when you have that, you can then use mass spectrometry to identify what's in the pull-down. You can run around and put handles on lots of different proteins and do lots of pull-downs and then see which things systematically come down together. Based on that, try to infer which proteins are likely in a complex together. I'll get back to that. Lastly, we integrate what we call curated knowledge. Curated knowledge is different from the experimental data in the sense that this is really more of the textbook knowledge. It's not somebody did an experiment and deposited in a database showing that these proteins supposedly interact with each other. We're talking about established knowledge. We know that these things exist. So this includes things like protein complexes. We know there is something called, say, the cohesion complex. We know what the subunits are. This is well established. It's not based on one experiment. Could also be things like pathways. So there are lots of different pathway databases. They were also in Trieidecker's graph. Things like reactome, keg, and so on. And these pathways, of course, tell you how different enzymes work together on different pathways, making reactions with different metabolites. You have other things like signal transduction pathways, knowing which kinases regulate other kinases, so on and so forth. And those of us older than I think most of the participants here had to learn a lot of these metabolic charts by heart for a biochemistry exam and promptly forget them again afterwards. That was already pointless back then. I hope we are not doing that anymore because all of this is, of course, available in computer readable databases. There is no course available in computer readable databases. There is no reason to know it by heart. So we take all of that and we put it together and you have the string database. Except that it's a little bit harder than that. There are a few problems. So firstly there are many databases. We don't get 5,000 genomes from one place. When we integrate 5,000 genomes we have to collect those genomes from multiple different places. There are multiple different repositories for physical protein interactions. There are dozens of pathway databases. If you want a good database with as good coverage as possible, you need to integrate many, many, many databases. These databases tend to come in different file formats. Of course people try to standardize it but there are still many different formats to deal with. And even if people use the same format they likely don't use the same identifiers within that format. So one database is going to use uniprot identifiers for the proteins, another database is going to use NCBI identifiers, another database is going to use uniprot identifiers for the proteins, another database is going to use NCBI identifiers, another database is going to use yet something else. So we need to deal with that. Then the data is of what I very politely refer to as varying quality, which is the nice way of saying that some of the data is really bad. So if you just treat everything as being equal, you're not going to get a good network. You're just going to get a network where you get flooded by false positives. And then lastly, well not lastly, the data are not comparable. You have the problem that how do you compare a pathway to a physical protein interaction screen, How do you compare a pathway to a physical protein interaction screen, to co-expression data, to inferred operands, to phylogenetic profiles? These things are just fundamentally different things. And how can you even compare them? And last but not least, all the data is not in the same species. That's the reason why we have something called model organisms. We do experiments on those model organisms to learn something primarily about human. So if you're interested in some human proteins you don't want to just look at the human data. You want to know what have we learned from mouse, what have we learned from rat, what's available on the yeast orthologs, the Drosophila orthologs. You need to somehow integrate data from all these different species and put it in one big network. So some of this is just hard work. There's not a whole lot to say about it. So you have a lot of databases. Somebody has to download them. They are in different formats. Somebody has to write a lot of parcels. And of course, when databases decide to change formats somebody's going to have to update their parcel. Then they use different identifiers so somebody has to make mapping files. We need to have files that tell us which Uniprot identifiers correspond to which Ensemble identifiers correspond to which NCBI identifiers. We need to have mapping files for that and again making those is just hard work. Where things get a bit more interesting is when it comes to dealing with quality. So there we build what we call raw quality scores and that's what gets me back to the physical interaction screens. Because the idea here is that you develop a raw quality score for each type of data individually that allows you to take this kind of data and rank the interactions coming from it from which ones are most likely to be correct to which ones are least likely to be correct. So you get a sorting of your interactions based on how likely they are to be right. And if we look at tantrum affinity purification followed by mass spec, how could you rank them? Well, let's imagine the evidence landscape here. We're looking at the interaction between the blue and the green protein. And we've done a number of pulldowns. And in one pulldown, we tagged the blue protein. We got the green and a couple of others. Second, we tagged another protein, got both the blue and the green protein in the pulldown. We tagged a third protein, we got the blue in the pulldown, but not the green. And we tagged the green protein and we got a couple of proteins, but the blue was not among them. The real problem here is how do you turn this into a number? And that's not at all clear, but I hope it's clear that the first two pulldowns are positive evidence, and the last two pulldowns are negative evidence. The more often we see these proteins together in pulldowns, the more we will tend to believe that these proteins interact. The less we see them together in a pulldown, the more often you see one but not the other, the less we're going to believe that they are in are complex together. So if you think about it from a statistical standpoint, what you have from a big tandem affinity purification experiment is basically a 2x2 contingency table. So what we have in this 2x2 contingency table is we know how many pulldowns we did in total, we know how many contained the blue, we know how many pulldowns we did in total, we know how many contained the blue, we know how many contained the green, and we know how many contained both. When you have a 2x2 contingency table you can of course do a lot of different things. The first thing that should come to mind if you do statistics is Fisher's exact test. You could calculate a p-value. Are the blue and the green proteins together in pulldowns more often than you would expect by random chance? That was my first idea. I tried that. It turned out to not be a very good scoring scheme. You could do other things. You could calculate an observed over expected ratio. We know which fraction of pulldowns contain the blue. We know which fraction contains the green. That means we can calculate which fraction we would expect by random chance to contain both, and we can compare that to the actual fraction that contains both. That turns out to be a better scoring scheme. But the point here is you have to come up with a lot of different scoring schemes, and you can do even better than what I mentioned here, but you have to come up with some scoring scheme that allow you to rank the interactions from what's best to what's worst. And you have to come up with some scoring scheme that allow you to rank the interactions from what's best to what's worst. And you have to do that separately for each type of data. For co-expression data, it might be a Pearson correlation coefficient. For the other types of data, it's going to be yet something else. Now, once you have your raw quality scores for each type of data or each big screen, you need to do score calibration. And the idea here is that we compare everything to a common standard, in our case pathways from the KEGG database. And now what we do is we try to say how do these interactions, depending on their score, agree with gold standard pathways that represent sort of what we know about which proteins are in a pathway together. So for doing this calibration, we first ignore all the proteins that we cannot map to pathways. So we now look only at the subset of proteins that actually have been assigned to pathways by manual annotation. And we can go in and look at say all the interactions that score between 1 and 1.1 and we can count how many of those do we have the two proteins in the same pathway and how many of them do we have them in different pathways. And we then figure out that something like 14% of them on the same pathway the rest are in different pathways which tell you that a score between 1 and 1.1 is pretty bad. You could do the same for the one scoring between 2 and 2.1, and you would see that in that case something like 70-80% of them fall in the same pathway, which tells you that a score above 2 is a pretty good score. You do that for lots of different score pins. You get a cloud of dots like I'm showing here and you fit some simple mathematical function, typically a sigmoid function, through it and you now have your calibration curve. The trick is we can now go back to all the proteins that we might not know what are doing, the ones that don't fall on any known pathways. And we can take a given interaction, we can calculate a raw quality score, and now that we have a raw quality score, we can go in and say the raw quality score was 1.7. What does that mean? Well, 1.7, if you look at this curve, means about 50-50 chance of these two proteins being in the same pathway. We can do that for all the different types of data. And the real trick is that, of course, you have different calibration curves for different types of data. And even though the scores on the x-axis are completely different things, we've now managed to map everything into the same score space. We've turned everything into posterior probabilities of two proteins interacting or being associated. The next thing we need to do is to deal with the species problem, the evidence being spread across model organisms in particular. And for dealing with that, we have to transfer evidence by orthology. So orthology has to do with evolution of genes. Homologs are genes that share common ancestry. do with evolution of genes. Homologs are genes that share common ancestry. Orthologs are genes, it's a subset of homologs, that separated by a speciation event as opposed to a gene duplication event. So the ones that are orthologs are the ones that in the last common ancestor of two species we believe were a single gene back then. This is a very complicated scheme. I'm not going to try to explain it in detail, but the idea is that you do it in a two-step process. So first we build up what we call atholicous groups at different levels of evolution. So we have groups that group together, the orthologs within mammals. We have broader groups that group everything in vertebrates, even broader that groups everything in vertebrates, even broader that groups everything in metasoa, in eukaryotes, or even going all the way back to the last universal common ancestor, which is normally called leuka. And we can then map all of our evidence back to all those levels, link the orthologous groups at the eukaryotic level, at the mammalian level, at the vertebrate level. We can link all of those based on all the evidence from the organisms that are within them. Then in step two we can transfer evidence from those groups back to individual proteins through a very very complicated scoring scheme. And the idea is that we now say we are looking at two human proteins of interest. We take the evidence that we have for those directly, of course, but then we take the additional evidence from the other mammals, from the mammalian othellicus groups. We take the evidence from vertebrates that are not mammals from the vertebrate groups. We take the evidence from eukaryotes that are not vertebrates from the eukaryotic groups. So we can go back in different steps, transfer more and more evidence in. And the reason why this scoring scheme is so incredibly complicated is really that we need to deal with gene duplication events. We need to deal with different genes evolving at different speeds, and we need to deal of course with transferring from mouse to human. It's going to be a lot more reliable inherently than transferring from E.coli to human. So that's what we do in string and the bad news is that everything I've talked about so far adds up to maybe 10 to 20% of the evidence at string. So we're still missing about 80-90% of the evidence, which is what I'm going to be talking about in the next presentation. So we'll have a short five-minute break, and then we'll get back and talk about text mining. You're welcome to ask questions during the break though. And now I can see the chat. Takk for ating med. Yeah, thanks. I'm now reading up on the questions and the answers. So indeed, the genomes represent different, well, mostly different species. In some cases, we do have a few different strains of the same species, but that's rare. So the main goal of selecting the genomes is we want to not have as many as possible because that just makes the problem blow up computationally and makes the database unnecessarily big. So we want a set of genomes that keeps it at a reasonable level, has as good a spread phylogenetically as possible so that we have a good coverage of the whole species space of what has been sequenced. And then also we want to make sure that the genomes that we have in are of at least a reasonable quality. So we collect first a very, very large number of genomes. We then do a lot of assessments of genome quality. So we collect first a very, very large number of genomes. We then do a lot of assessments of genome quality in terms of the assembly quality, that the genes are not fragmented, that there's actually good coverage of the clone library, so the genes that should be in the genomes are there. And then when we have that subset of things that pass the quality filter, we start looking at which ones are too closely related. Again, looking back at the quality, we start looking at then, okay, when these two are too close, which one is the better? Let's keep that one. We have, of course, a list of sort of the main model organisms, knowing that these particular strains of these organisms must be there, because those are the ones that are like the reference genomes in the field. That's what goes into the genome selection. So the whole point of these networks, there are many ways you can use them. You can use them to guide experiments. One way people use it is you have some gene where you don't know what it's doing and you then try to use something like string to get an idea of what it might be doing and then go in and do experiments. Another way people are using it for more single gene studies is that say they know some genes that are involved in a function but based on experiments they know that that set of genes is not sufficient to reconstitute the function and that means that they know they're sort of missing a gene or at least one gene. They then go into string with the means that they know they're sort of missing a gene, or at least one gene. They then go into string with the genes that they already know are important for the function they're interested in, and try to find additional genes, possibly genes of unknown function, that are functionally associated with the ones they know are important, and that way try to find the missing gene in their pathway, and then go show that experimentally. So that's sort of all the guiding small-scale experiments. Then you have the people coming from the omics side, which is more what we do, where you have a long list of genes. And the way you use that long list of genes is that you want to somehow structure your list. Looking at a long list and finding a biological story is difficult. So you want to put all of those genes into string, or rather into Cytoscape using string, and that way make visualizations. So using the network for doing data visualization of your omics data to help you identify what are sort of the interesting parts of your results. All right, let's get back to the slides. Bingo. So as I mentioned, we're missing most of the evidence. And where that most of the evidence that we're missing is hiding is the text mining. Because this is where the evidence is hiding. So this is sort of the back of the envelope estimate of the biomedical literature. If we naively assume that everything is indexed in PubMed and that we assume that the average article is five pages long, if I were to take it, print it all out on standard 80 gram A4 paper and pile it on top of each other, I'm going to get a pile that is over 10 kilometers tall. Now, of course, this is an incredibly naive estimate. The average paper is longer than five pages. Not everything is indexed in PubMed. And for sure, the pile is more than 20 kilometers as well. But it doesn't matter whether it's more than 10 kilometers or more than 20 kilometers. Because in either case, the reality is just there is too much to read. We cannot read the literature, which sounds ridiculous, right? Because we're spending all this time writing papers, and they are clearly written in a way that is intended for humans to consume. And we're now faced with the problem that we have too much of it, and we can't read it, which means that whether we like it or not, we have to get a computer to read it. So that's the reality we live in. We need to somehow get a computer to read the literature because we can't possibly keep up with all of it ourselves. And whenever I need to get a computer to do something that even halfway approaches being smart. I get worried and I find it useful to think of the analogy that a computer is about as smart as a dog, by which I mean that if I put sufficient effort into it I can teach it to do specific tricks. So borrowing a cartoon from Gary Larson what we say to dogs? Okay Ginger I've had it you stay out of the garbage understand Ginger stay out of the garbage or else. And the only thing the dog hears is blah, blah, blah, Ginger. Blah, blah, blah, blah, Ginger. Blah, blah, blah, blah, blah. And it understood its own name. And when I do text mining, most of the time, this is sort of our level of ambition. We're trying to just get the computer to recognize names, level of ambition. We're trying to just get the computer to recognize names, but most of the text in between is going to be complete blah blah blah to the computer. We're working on getting beyond that, but this is all of the core of the text mining in string at the moment. So this is what people in the text mining community call named entity recognition. And text mining is a scary field to get into because people like to use overly complicated terms for very simple things. called named entity recognition. And text mining is a scary field to get into because people like to use overly complicated terms for very simple things. So named entity recognition sounded fancy to me until I realized that it literally means recognizing things with names. And unsurprisingly, when you want to recognize things with names, you need a good dictionary of the names you want to recognize. So that means we need to know about synonyms. We need to know whether there's a protein called cyclin-dependent kinase 1, and we need to know that CTC2 is the same thing. We need to handle what people call autographic variation, which is the fancy way of saying that people can write the same thing in slightly different ways. So for example in your dictionary, Uniprot or wherever you get the name from, is going to tell you that there is a protein called cyclin-dependent kinase 1. And boy wouldn't it be nice if you were still able to recognize that in the literature when people write it with a hyphen. If you're just doing plain simple string matching these two strings are not the same and when you have it without a hyphen in your dictionary and it's written with the hyphen in the text you won't find it. So you need to have some sort of flexibility in the matching. You also need to know some rules about how people mangle names like you have cdc2 but you have cdc2 in human, you have cdc2 in mouse, you have cdc2 in rat, and quite likely you're starting all three of them in a single paper, so it gets very confusing. So you put an H in front of it to tell me that it's human cdc2. You can do that with all human gene samples. You put an M in front of mouse gene samples, an R in front of rat gene samples, and so on. But again, the dictionaries like UniProt are not going to tell you that. So you have to teach the computer these little tricks of how biologists write. It's not difficult, but it's important. Then you need a blacklist. And a blacklist is a list of the names that are not in your dictionary because your dictionary is wrong per se, but they are a really bad idea from a text mining perspective, because while they can mean what they mean according to your dictionary, most of the time they mean something entirely different. And my favorite example of that is the Human Gene Naming Committee, which in their infinite wisdom decided that it would be a good idea that the recommended gene sample, the one you're supposed to use in the literature of a certain gene, was this. Now anyone who's done wet lab molecular biology of any kind will know that SDS is a detergent that you use in the lab quite often for several things, one of which is to denature proteins. So of course if you text mine the literature and you think that whenever people write SDS it means the SDS gene you are going to be very very wrong a lot of the time. And this is going to be particularly disastrous because since you use SDS for studying proteins pretty much every protein that we ever studied will have been mentioned together with SDS in the literature, for which reason it's going to be a complete disaster when we go to the next step and try to do relation extraction using co-mentioning. So the idea in co-mentioning is very simple. If people mention two things together in the literature, they're probably related, and that would of course be useful to us when we're trying to build a functional association network. related and that would of course be useful to us when we're trying to build a functional association network. Now you could of course argue that we could very easily have the two things are mentioned together by random chance it doesn't mean a thing but that's where we do counting the idea simply being that if people keep mentioning many times the two things together it's not by random chance it's because these things are actually somehow related. The question is of course how you count. Which level should we count? Should we count how many documents mention A and B together? Should we count it only when it's within the same paragraph? Should we count it only when it's within the same sentence? And the answer is that doing either of these three is wrong. Because things being mentioned together in the same sentence is obviously the strongest evidence, but it's still a good hint when things are mentioned together in the same paragraph. And it's still a hint when things are mentioned in the same paper, even not the same paragraph. So you need to count at all the different levels and somehow combine that into a weighted count where the count is not actually an integer but it's a count that is more weighted count taking into account how much and how close to each other are these entities mentioned together. The way the formula looks is like this. So we have a weight for being mentioned in the same document, a weight for being in the same paragraph, a weight for being in the same sentence, and then for entities i and j we calculate this weighted sum k is the sum of all the documents, i, j are the two entities that we're trying to link to each other, and we're now saying are they mentioned in the same document, same paragraph, same sentence for document K. Once we have that, we can convert that into our raw quality score where we basically normalize this very much like the physical protein interaction. So we need to normalize things and take into account how much is written about I, how much is written about J. is written about j and once we have that quality score we then do score calibration against keg pathways exactly the same way as before and that way we get our interactions from text mining scored in a way consistent with all the other evidence. We then take all of that text mind evidence and like for all the other evidence we transfer all of it by mythology and now we're done and we have string. Now the neat thing of text mining is that we can use it for many other things than just protein-protein interactions and that gets us to the next topic of knowledge graphs as people call it. So in knowledge graph you're trying to build not just protein networks but you're trying to link in a protein knowledge graph lots of other things together as well that is relevant when you're looking at proteins. So we've so far looked at intraspecies networks, so string. There are other networks, of course, where you look at proteins going across different species. So intraspecies networks, I just want to highlight here a work of a former PhD student of mine, Helen Cook, who made a version of string for viruses. So there you have virus proteins associated with the host proteins using exactly the same kinds of strategies I've talked about for string. You can have protein-chemical interactions, so linking proteins to small molecule compounds. That could be interesting if you're interested in drug targets, if you're interested in metabolism. For that we have a sister resource of string called STCH, which is basically string with small molecule compounds bolted onto it as well. You can do protein compartment association, so your nodes in your network could be subcellular localizations as opposed to protein entities. And that way you can link things and basically capture the subcellular localization information about the proteins. You can draw it as a network or you can draw it as a pretty figure where you color a cell based on where protein looks like it's located. In either case, behind the scenes, it is a network. Same thing for protein tissue associations. You can look at tissue expression. You can map that onto figures like this or you can think of it as a network where you have a node being the liver. You have a node being the heart. It's a matter of the data model versus how you visualize it. Behind the scenes it's a network protein disease associations we can again we have a database called diseases where you can go in and look up either for a certain disease get the genes or for a certain gene gets the diseases again it's a network where you now have two kinds of nodes proteins and diseases and it's the nice thing here is it's the same strategy we're using for making all of these resources. We take curated knowledge from whatever reliable manually curated databases exist. We take experimental data. It could be tissue expression data, subcellular localization data, physical protein interactions, chemical screens. It doesn't matter what it is. We run text mining looking at co-occurrence evidence so that would be things like if a protein is mentioned with a disease that protein is likely involved with that disease. It's very simple logic here. We map everything to common identifiers so we use the string identifiers for proteins in all these different resources. But similarly, we have standardized identifiers that we use for diseases, being disease ontology, for tissues, being the Brenda tissue ontology, subcellular localization coming from gene ontology, so on and so forth. And then we again develop scoring schemes. We don't consider everything equal. We score things using the same strategies. We benchmark everything to get calibrated scores out. And that's how we then get one big knowledge graph. But even though you can view it as a number of different resources online, behind the scenes, it is really one big network that ties everything together. behind the scenes it is really one big network that ties everything together. So that's the end of this first part. We now should have, let me look at the time, we have a bit more than 10 minutes for sort of general questions and answers and then after that a 10-minute break. Have we experimented with more sophisticated text passing attempts to understand the nature of the text and the association of the terms mentioned? So, good question. In string, we have a very old natural language processing pipeline that I didn't talk about here. That is a rule-based system using grammars to pull out associations or things like A activates B and so on. We do have that in there. We're not really doing anything to develop that one anymore. Katerina, who at least earlier was in the chat, she might still be here. It's actually her main project is using these more modern methods for dealing with the text. So she's looking into all the kinds of technologies with deep learning using BERT transformers and so on. So she can tell you much more about that in chat if you're interested. And how we calibrate the scores. So in String I mentioned it's the way we calibrate them by benchmarking everything against a common gold standard. In the other resources we do things in slightly different ways but overall the idea is the same. You have some gold standard like manually curated gene disease interactions and based on that you figure out how can I calibrate these different raw quality scores for different data types into a score that means the same across evidence type so you know that a score of X means that you are so and so sure so another So sure. So another question over on YouTube is, is string an undirected network? String is mainly an undirected network. It's mainly an undirected weighted graph. However, we do from in part the passing the manually curated pathway databases and in part from this natural language processing pipeline, we do have some directed edges in there. But if you only consider the directed edges, then you're going to be throwing away the vast majority of string. So if you're thinking about it from a graph analysis standpoint, it's probably best to think of it as an undirected network because the vast majority of edges are undirected. So, done text mining. Would it be different scores when two proteins are co-mentioned in the same paragraph, and if they are two different parts of a paper? In the latter case, will it still get a score? Yes, it will still get a score. It will be a very low score, but it will still get a score. So things being mentioned together just in the same paper, even in different paragraphs, does carry some weight. But it's way lower than if it's in the same paragraph. And actually being in the same sentence is only a little bit better than being in the same paragraph. So I would say if you didn't want to do this complicated weighting scheme, the best level to choose is probably to say same paragraph. Over on YouTube, what databases do you use for storing the string data? So technically we are storing all the data in a big PostgreSQL database. So it's an SQL database. We are not using no SQL databases for it. We did do some experiments working with graph databases like Neo4j, but we ended up not really seeing any advantage in doing that, so we ended up just staying with Postgres. How are the scores combined? Good point, as the evidence are collected from different sources, experiments, predictions, and I could add different organisms. So we're in string when everything has been combined, has been calibrated, and you have probabilistic scores, right? You effectively have posterior probabilities given each piece of evidence. Then in most cases, the evidence is being combined, assuming independence. So you're combining the probabilities. I'm slightly exaggerating the simplicity here. The real formulas are in the papers. But if you're 90% sure, given one piece of evidence, the two proteins are associated, you're 50% sure. If you're 50% sure, given one piece of evidence, and you're 50% sure. If you're 50% sure given one piece of evidence, and you're 50% sure given another piece of evidence, then overall you would say there's only 50% chance that the first one is wrong, there's only 50% chance that the second one is wrong. For the edge to be wrong, both of them would have to be wrong, so there's only 25% chance that both of them are wrong. So 2 times 50% is 75%. You're 75% sure if you have two independent pieces of evidence that each gave you 50%. It's slightly more complicated than that when you're combining evidence across species, for example. You need to take into account how closely related the organisms are. If it's very distantly related organisms, you can sensibly assume independence. But of course, if you're combining evidence from E. coli with evidence from Salmonella, you would be a fool to assume independence. So in that case, we are more using a max score rather than combining them. So we either assume things are dependent and we just take the better of the scores or we assume independence and we combine them in the fashion I explained. Thank you. There was also a question at some point about for the contingency tables, what was the best fitted scoring scheme for the physical interactions. It's been changing over the years. I wouldn't really spend time on trying to explain that now, because right now actually Mark in my group is working on coming up with new scoring schemes for the physical interaction data. So whatever I would tell you now would be something that actually would be outdated very soon. But the idea was basically it's sort of, it's an observed over expected ratio, but at the same time combined with the absolute count having to be high as well. So it's sort of a compromise between you need a lot of observations to be sure and at the same time you get it. You need a good observed over expected ratio. Let me just see which paper it is. Katerina managed to dig up on the... Yeah, that's the right one. Exactly. So the paper describing the NLP pipeline was a super productive collaboration with another group in Heidelberg. So it was all done while I worked at the EMBL in Pierre Borg's group. And I was working with Jasmine Sarich, a computational linguist. And I don't think I'm going to offend anyone if I say that his knowledge about proteins was as great as my knowledge of linguistics. So we had very complementary skills, let's put it that way. But somehow it just worked amazingly well. And yeah, I knew what it was we wanted to accomplish. He knew what one could actually do with NLP at the time. And we put together in half a year, each of us working maybe a quarter of our time on it, we managed to put together a pipeline that actually was probably the best in the world at the time. And it's still doing remarkably well. It's sort of embarrassingly outdated technology, but every time somebody's been benchmarking it, they've told me that it is scary good considering how old it is 1.5 kg kv. kg kv. kg. I'm not sure if questions come up so just the plan now is that first we have 10 minute break I just need to grab some more water and have a short bio break here. Yeah, so, oh, there's a question about APIs. Are there REST APIs for accessing string programmatically? There are REST APIs, yes. I don't think there's a Python package for doing it specifically, but you can access the REST APIs. It's all on the StringDB website. If you go to the help section, the API is documented there. So yes, there's definitely REST APIs for accessing it programmatically. That being said, depending on what you want to do, you don't necessarily want to use the REST APIs. There are also bulk download files and if you want to do something like a global analysis of the whole network of a certain organism, I would say you are far better off just going downloading all the data locally rather than trying to fetch it via the REST API. It's just going to be much slower to do it via the API and unnecessarily complicated. Another question. Can we use string to predict interspecies PPIs of organism which is not in string? Yes and no. It's certainly not something you can do via the website. You can't just go plunk in a genome that you're interested in that isn't in string and get a network. But you can use, so there's an Authology resource that String builds on that is called eggnog and if you download there the orthologous groups, then you can figure out you can take your there's a tool as well called EGNOC mapper so you can go map all the genes from your genome to either a close relative in string and transfer the evidence from there or orthologous groups, and then you can take and download the whole network from string and you can map interactions over. So it's a lot of coding work to do. There's not an easy way to do it at the moment. It's something I would like to see in the future but it's certainly not a trivial task and it's also something that would be computationally quite expensive so that's really the tricky part. Can we make this good enough and fast enough that we can offer it as a service that people can just go use? Because we obviously don't want people to be crashing our server. Okay, so let's have a quick 10-minute break. Then starting at 9.30 30 we'll jump into some exercises so the exercises are you should have the link in the course material but let me just open it up here training So the exercises we'll be doing in 10 minutes are these ones. And the idea is to have basically, what is that, 25 minutes or so for that. Then a short five-minute break. You're, of course, welcome to start working already if you want. I just don't promise that I'll be sitting at the computer right now. And get as far as you get during that time. It's to give you an idea of what's actually in string. How do you work with string through the web interface? Where does the data come from? Understand the nature of this protein graph before we at 10 o'clock get back into talking about Cytoscape, StringApp and how to work with these networks inside Cytoscape. Because you have a much better view of the underlying evidence when you're in the web interface. So that's why first we go look at the web interface, have an understanding of what's the data we're dealing with. And then we start working with those networks on a much bigger scale in Cytoscape. And just for everyone also on YouTube if you want to play along with the Cytoscape exercises it would be a good idea to get Cytoscape downloaded and installed as well. It's a bit of a big install so depending on your internet speed you you probably want to get started if you haven't done it already. And if you already have Cytoscape installed, please, please, please make sure you update to the latest version 3.7.2 and make sure that you've updated the apps including StringApp to be the latest version. We usually always run into some people who already had an old version of Cytoscape installed and then the exercises don't work because they are on an old version of Cytoscape, old version of StringApp etc. So especially now that we're trying to teach online and we can't see your screens, please make sure you're on the latest version to avoid a lot of questions. Thank you, and I'll be back in a bit less than 10 minutes. you you you you you you you you you you you you you you you you you you you you you you you Thank you. I'm back. So, people still hear me? Excellent. You hopefully see my screen now. The exercises that we'll be doing, it's this link. It should be on the course website as well. I've just opened up the webpage here. I hope you see me sharing the screen. So these first exercises, they are all web-based and just get myself out of the way there. That's better. And you can see we have a number of exercises and the first one we're really going through basic things with the String website, querying for a single protein, looking at how you can represent the networks in the interface, looking at the so-called evidence viewers, which are all about seeing the underlying data, seeing where a network came from or rather where a specific interaction came from. Experimenting a bit with the query parameters and seeing how they affect the result you get just to basically get an understanding of what's in string how does it all work that's really the key to get through that exercise 2 that one is the stitch. So if you're interested in how you can do similar things with small molecule compounds, have a look at that one. It's not something you really need for the exercises later. So that's very much optional. Exercise three, the disease query. We will be doing a disease query from Cytoscape later on. So it would be great if you go in and have a look at exercise three. disease query we will be doing a disease query from Cytoscape later on so it would be great if you go in and have a look at exercise 3 so that you have an understanding of what's in the diseases database before we use it from inside Cytoscape. Exercise 4 is all about the string viruses so if you are interested and well many people are at the moment even though I'm sorry to say that a certain specific virus that I'm not sure I'm allowed to mention on YouTube is not in the database obviously we have not released a new version of the string viruses after the outbreak but you can look up other proteins from other viruses lots of echo i have no idea where the echo would come from? Is there still echo? Okay good I don't know. There was a chat comment in YouTube that there was lots of echo. I have no idea where that would be coming from. Good to hear that there's no echo. So yeah, dig into the exercises. Just ask questions. 1 tsk vanilleekstrakt 1 tsk vanilleekstrakt 1 tsk vanilleekstrakt 1 tsk vanilleekstrakt 1 tsk vanilleekstrakt 1 tsk vanilleekstrakt 1 tsk vanilleekstrakt Terima kasih. Takk for ating med. Sampai jumpa. Terima kasih telah menonton. Thank you. Terima kasih. 1 tbsps of butter 1 tbsps of flour 1 tbsps of baking powder 1 tbsps of baking soda 1 tbsps of baking soda 1 tbsps of baking soda 1 tbsps of baking soda 1 tbsps of baking soda 1 tbsps of baking soda 1 tbsps of baking soda 1 tbsps of baking soda 1 tbsps of baking soda 1 tbsps of baking soda 1 tbsp. salt 1 tbsp. pepper 1 tbsp. garlic powder 1 tbsp. garlic powder 1 tbsp. garlic powder 1 tbsp. garlic powder 1 tbsp. garlic powder 1 tbsp. garlic powder 1 tbsp. garlic powder 1 tbsp. garlic powder 1 tbsp. garlic powder 1 tbsp. garlic powder 1 tbsp. garlic powder 1 tbsp. garlic powder Terima kasih telah menonton! Right, there's a channel called database, which can be a bit confusing. Personally, I would vote for renaming that channel. The database are the manually curated databases. So they are really the curated knowledge. But I would agree it's not the best name for it because all the data that goes into the experiments channel, for example, comes from the databases, comes from other databases, and all the evidence in string overall, string is a database. So it's not the most accurate term. I think it's called that for historical reasons. Back in the Stone Age, the string database only had interactions inferred from the genomic context methods, and interactions coming from the databases. So it was called database. Since then, we've added a lot more, and the term database is not that descriptive anymore. Terima kasih telah menonton! 1.5 kg kaya kaya Terima kasih telah menonton! Very good question. So the question about what do the scores mean for the database column? Well, since the database is kind of the manually curated knowledge that we believe to be true, how can you benchmark it, right? It's the gold standard. It's what we are benchmarking against effectively. So for that reason, the database channel is the only one that actually is not benchmarked. Instead, everything that comes directly from one database gets a flat score of 0.9. We say we're 90% sure of stuff that is manually curated. If things score higher than 0.9 in the database channel, it's because we got the same manually curated information from several different places. But I don't even think that happens. I think higher than 0.9 has to be from another evidence channel as well, because the curated databases interchange data. So the fact that it's in more than one database wouldn't really make it more reliable. I would have to check that carefully. So I'm not doing the database channel. This is not meant as a disclaimer. It's more the we're a big consortium making this different groups are responsible for different parts of string. And I just want to give credit that it is the group in Zurich that is doing all the hard work of integrating these way too many different pathway databases. So really technical questions on how things are done in that channel. The people in Zurich would be more qualified at answering it than me. Is there a reference of how these scores are calculated? By reference do you mean publication? The problem is there's not a single publication explaining how everything in string is done, which sort of is an artifact of how publishing works. So we are making the string database, we're releasing new versions, and there's this thing called the nucleic acid research database issue. And we publish every two years an update paper or string. But since it's an update paper, every paper describes what's new, what's changed. And for that reason, it's effectively like this series of papers is like patch files to source code. You almost have to go back and read the original paper, and then you have to read all the subsequent papers to figure out which things have been changed since then. There's no way of publishing a paper where it's a live paper, so to speak, where you just have the current information in one paper. It's quite bizarre. on. There's not one equation for a score, no, because the point is all the different evidence channels have different scoring schemes. That's also why if you were to make a publication that explains how everything is scored, it would be a gigantic publication, because that would be explaining the scoring scheme for how do you score fusions, how do you score neighborhoods, how do you score phylogenetic profiles. That's actually a separate paper in its own right. There is a paper out on how we score the phylogenetic profiles. Then there's the whole scoring scheme for experimental interactions, which has changed over the years and which is due to change again. There's the scoring of the text mining. On top of that you then have the whole scoring of how do you transfer evidence by a orthology. So yeah, the best people to answer questions about how actually both the, let me get it right here. The genomic context channels and the database channel and also the authority. Those parts are done primarily in Zurich. The authority transfer. So those are the best people to ask. I think I saw over on the YouTube chat. I think I saw David Lyon over there. He's actually a former member of my group who's now in Zurich, so he might be able to answer questions for their part over there. I'm just delegating things here without even asking people if they're willing to. He can complain later. Terima kasih telah menonton. Question about if two proteins are in fly-based, they would have a score 0.9. No, the scores are about interactions, right? Of course, the proteins themselves are going to be in, Drosophila protein is going to be in flybase, because in fact, we get the genome from Enxample, which gets the genome annotation from flybase, so everything is in flybase. But for there to be an interaction that scores 0.9 in the database channel, there would have to be manually curated information from Flybase that they interact. Does that make sense? Takk for ating med. Thank you. Gå inn i fjellet. Jalur Terima kasih. Nasa kata, Nasa kata, Nasa kata, Nasa kata, Nasa kata, Nasa kata, Nasa kata, Nasa kata, Nasa kata, Nasa kata, Nasa kata, Nasa kata, Nasa kata, Nasa kata, Terima kasih. Takk for ating med. I think I'll just really quickly go through exercise one just to show you what the idea is to make sure that everybody's understood that part before we dive into cytoscape after another break. So here's the string welcome page. When you first arrive, you go in, you can query for a protein, I query for insulin receptor. I can either leave the organism on auto detect or I could select human. If I'm lazy and I leave it on auto detect, it comes up and says I know insulin receptor and a lot of different organisms, lots of proteins it could be. I'm guessing you mean this one in human. Yes, it is. And I get a network. And it looks like this. looks like this. The first thing you notice is that there are many different colors of lines and if you look down here then you will see that the different colors of lines correspond to different types of evidence. So the pink lines are experimentally determined interactions, the yellowish lines are text mining and so on. So this is what is called the evidence view where the colors of lines represent which types of evidence you have for a certain interaction. Another view is the confidence view where now you see you don't have multiple lines anymore instead you have different strengths of lines. So for this we are now just showing what's the overall confidence of a certain interaction and you can see that we are more sure about this interaction between these two than we are between on the interaction between these two. So in this mode we're just showing how sure are we not where did the evidence come from. When it comes to the evidence viewers you can still get in and find the evidence of where does a certain interaction come from even when you're in confidence view so if we're interested in what was it insulin receptor and insulin receptor substrate one to make it easier to click on i'm just going to move these makes it a lot easier to click on, I'm just going to move these. Makes it a lot easier to click on this edge and get the pop-up. And now that you're seeing this pop-up, you see we have evidence from a little bit from co-expression, a little bit from text mining, but it's annotated in manually curated databases. There's experimental biochemical data, both in the same organism in human and transferred from other organisms. That's really where the bulk of it comes from. And sorry, yeah, there was a lot from text mining as well. So that shows where it comes from. You can go in and see more detail. So as I said, there's experimental data supporting this interaction. If I want to see where did that experimental data come from, I can click the show button and it will show me that I have things imported from a number of different databases in the act, HPRD, GRID, which is BioGrid. And you see which type of assay they did. Pull-down assays, affinity chromatography experiments, two hybrid experiments, enzyme enzymatic study, etc., etc. There are loads of different and you see it truncated to just show 10. There's so much data and it's not surprising that we have a lot of data for that interaction. There's a reason why it's called IRS1. It's insulin receptor substrate 1. So of course given the name of it it's obviously very very well known that insulin receptor substrate 1 interacts with insulin receptor. If we go to the query parameters, this is where there's a bit more to understand. There's a lot of different settings that you can choose. There's the minimum interaction score. There's the maximum number of interactors. And that's really what we're trying to make sure people understand here. So what's the difference between these? How do they play together? Right now we have a network. If you count the proteins you see there's insulin receptor and 10 other proteins in the network. If I go over and I set the minimum interaction score to 0.7, update, does it change the network? More specifically does it change the list of proteins? And if we looked at the list of proteins here before and after, I can tell you it did not change the list of proteins. Now, there's a very simple reason for that. We are ranking the proteins based on their interaction score with insulin receptor, the query protein. The top 10 proteins all score way more than 0.9. score way more than 0.9. So when I'm choosing to show the top 10, it doesn't matter whether I set the score cutoff at 0.4 or 0.7 or even 0.9. The top 10 is still going to be the same top 10. However, some of the interactions between the proteins may score lower than 0.7, so we did lose something in terms of interactions when we change the cutoff. So the next thing is let's go turn off all evidence types except from experiments. So if we only consider the experiments channel. Now you've got a network that quite clearly looks different. And if we go look at the legend, you will see, if you compare it with before, that the list has changed. The reason is the score we're ranking on is the score coming from the evidence types that we've turned on. So when I turn off everything but one evidence channel, it means that the score that I'm now sorting the proteins on and taking the top 10, that score is different. It's now the pure experiment score as opposed to the combined score across all evidence channels. score as opposed to the combined score across all evidence channels. If I now go and say let's show top 20. So 0.7 only experiments show me top 20. Update the network and clearly there's a lot less than 20 proteins in this network what's happening well the reason is this is the lowest scoring protein that scores over 0.7 in the experiments channel alone so when I'm asking for a top 20 with a confidence cutoff of 0.7 using experiments channel only, the answer the string comes with is, sorry I don't have 20 proteins for you. There are not 20 proteins scoring above 0.7 based on experiments alone. And I could go lower this one and I still wouldn't get more I think you saw the point any questions to this otherwise we are ready for digging into Cytoscape. So now you've seen String, you've seen some of the other resources around String, and I hope you have an understanding of what's the data that goes into it. We have confidence scores. It's a very complicated setup, but it basically allows you to know which interactions are more reliable, which interactions are less reliable and it's based on many different lines of evidence and that's why it's complicated. Now Cytoscape, unlike String, is a network tool. Cytoscape is great for doing analysis of networks. Cytoscape is great for doing visualization of networks. Cytoscape is great for doing visualization networks in particular. That's what we'll mainly use it for here. But Cytoscape is not a database. It's one of the most common questions that comes up on the Cytoscape helpdesk, which by the way I can highly recommend if you have Cytoscape problems to go to the helpdesk. It's not a database. Cytoscape does not give you a network. When people open Cytoscape and go where do I get my network the answer is well you're using a network tool. You have to load a network in from somewhere. The Cytoscape user interface has three parts. One is the networks where you are showing a network. That's the visualization. One is the tables. tables that's where all the data resides those are the ones you need to populate with some data to have something to show and there's the visual styles which is the mapping of how the data in the tables turn into the pretty visualization that is the network so here's how it looks. We have up here the network visualization and you see in this case it looks like a string network. You have down here the tables where you have a note table that contains all the notes. You also have an edge table that contains all the interactions. And then over on the left you have the style panel where we can say how different properties map onto this. So we can say how the strength of the edges depend on the scores in the edge table or if we have some data how we want that data to be mapped as colors onto the network. So you can do many things in Cytoscape. You can of course load and save sessions. This is the first thing where people get confused. They want to get a network into Cytoscape and they think the obvious thing is you go file load. Well, you don't go file load because you use load for loading and saving sessions. So if you don't have a session already you've saved, you don't have anything to load. So this is more like loading and saving Adobe. So if you don't have a session already you've saved, you don't have anything to load. So this is more like loading and saving Adobe Illustrator for else. To get data into Cytoscape in the first place, you need to use import. So you import a network, and you can import a network either from a local file. So if you have your own tab delimited file or Excel sheet with interactions from your own study, you can load it into Cytoscape using the local file load importer. Or you can import a network if you don't have your own from a public database, which can obviously be string. You can import tables. So that one is something that is named a little bit confusingly. Import table is really all about importing note table information. So when you have imported a network already and then you have for example your own omics data that you want to import into Cytoscape as well in order to visualize it onto the network, you want to use the import table functionality to load your data into the note table. You then have property mappings and there's a few of them. There's the pass-through mapping which just takes the data as is and puts it on the network. That's what you use for example to say this field contains the name. I want a pass-through mapping of that name to be the text shown on the node. You have discrete mappings which is when you have discrete data or data that consists of classes. So say you have clusters A, B, C, and D, and you want to have things in cluster A be red and things in cluster B be blue, you would use a discrete mapping. And then you have continuous mappings, which are the ones you want to use when you want to use, for example, a color gradient to map something like a logfold change value onto a network, or like when we take the string confidence scores and map them onto how strong the line is on the edge. There's something called default and bypass. So I just talked about the mappings. These are the other two halves of how you map data in Cytoscape. The default color is what it's going to show if you don't have any data. So if I load in some logfold change values, I give it a color gradient going from blue to white to red, and then I set a default color to be gray. Then the nodes for which I have missing values would be gray, because I have no logfold change value. Whereas something that I have data for, but it doesn't change, would be white. The bypass is something that allows you to select nodes and set properties specifically for that node and just say, I have this node, I want it to be orange. It's very powerful. You can do a lot of things with bypass, but it's also a bit dangerous because there's not an easy way to see what you've done in a session. So if you mess things up, it can be quite messy if you're using bypass. So I would generally try to avoid using bypass when you can. Another important feature in Cytoscape core functionality is selection filters. So instead of selecting things by selecting a square or somehow clicking on the node, holding down a button, clicking multiple nodes, you can select based on properties. You can say select everything for which the logfold change value is greater than so and so. Select all the nodes connected to the node I've currently selected. All those kinds of things. So you can select by attributes. You can also use a lot of layout algorithms. Cytoscape has a lot of powerful algorithms for how to layout networks. You'll be playing a bit with those. I highly recommend installing the Y-files plugin which has additional layout algorithms. Clustering. There's the Cluster Maker 2 app, which is really powerful for doing all kinds of clustering algorithms on the network. And that gets me to the topic of Cytoscape has an app store. So it's a bit of a bad name. I know that John Scudamore is our collaborator who's on the Cytoscape core team prefers that it would be called the tool shed instead because it's not an app store in the Android or Apple sense. It's not a place where you have to pay for anything. The app store is just a place where you can install additional tools in Cytoscape from. It's all free. So one of those apps that you can find in the App Store is StringApp and there's a few related things as well. So the StringApp basically does this. It allows you to very easily take string and get it into Cytoscape. It has a lot of added functionality as well and but this is really the core function of it. Get string into Cytoscape. You can do several types of queries. You can do protein queries. So in that you query for typically a long list of proteins and fetch a network for it. You can do a disease query where you use the diseases database and go and say, I want a network for Alzheimer's disease. So you just query for Alzheimer's disease, get the top end proteins for that disease, and then retrieve a string network for it. Or you can do a PubMed query where you start from some topic of interest, you query the literature for that topic, you find the proteins of interest by text mining of that literature, and then put a string network. One of the most common use cases of wanting to get string networks into Cytoscape is to visualize omics data. So simple case here, we have a proteomics experiment and what came out of that is of course your typical Excel table with a lot of columns. You see here we have a column called uniprot. It's always good to use things like accession numbers instead of gene names when you're querying a database. So we want to have that and we good to use things like accession numbers instead of gene names when you're querying a database. So we want to have that and we want to use that. We have a lot of data coming out of the mass spec, including some things like some log fold change values after five minutes, after 10 minutes. Then what you do is you take the whole list of Uniprot accession numbers from that table. You do a protein query query and that way you fetch a string network. You then go and do import table to load all of these additional columns from this Excel table into Cytoscape as well. And you can then go and color the network by a fold change for example. So you now take the fold change values from the Excel table and map them as colors onto the nodes using the visual property mappings inside of Scape on a string network. So here you have a network where the coloring of the nodes comes from the user's own data. All the edges come from string. Other things you can do enrichment analysis. We didn't use it in the web interface but String has functionality for doing enrichment analysis. So you can do a go-term enrichment analysis, find terms that are significantly enriched in this network and map it onto the network. You see there are now these these halos or circles around with several segments with different colors and these colors represent different terms that were significantly overrepresented in this network. We can also take things like site-specific data. So this is particularly interesting to people who work with phosphoproteomics data or other post-translational modifications, or things where you have multiple comparisons, for example time series data. And you can use the Omics Visualizer app, which is also developed in my lab in particular by Mark. Nadja has been one of the main drivers, she's also in the chat for the Stringer app. And you can then, the same way as these colored circles with multiple segments map logfold change values for different phosphorylation sites on the same protein for multiple different comparisons. You can of course make overly complicated figures this way if you want, you shouldn't, but it is a very powerful visualization tool and it gives you a lot more power than core cytoscape in terms of being able to map complicated data onto a network. And it is designed of course since it comes from my group to play very nicely with the StringApp. So StringApp and Omics Visualizer work great together. Now the bad news is that what I showed you here were really easy examples. And they were easy because they were small. Typical Omics datasets you don't have a small network like what I just showed you. You have typically hundreds to a thousand, even a few thousand proteins that come out. And for those, you will typically have a thousand to ten thousand interactions. And when you show that, you end up with something that looks like this. And at first, maybe you you think this looks nice. I mean I see lots of networks like this get published in the scientific literature. But try making any sense of this network, right? I mean what can you see? The only thing you can see is it's a big network. It looks like everything is connected to everything. And it is what I like to and many people in the field like to call a ridiculous gram. Everything is linked to everything. It's ridiculous to make a plot like this because you can't see anything from this figure. So what we want to do typically when we have a big complicated network like that is to use network clustering to cut it up. So that's where we use the Cluster Maker 2 app in the exercises which comes from Jan Skudamoris at UCSF, our collaborator in StringApp and Cytoscape. And we use that to identify functional modules in the network. We then typically cut down the network and show only the interactions that are within the clusters and that way we can take the network from before and turn it into something that looks much much simpler. It's still a big network it still takes time to understand this but I hope you agree that this is a much nicer figure than the one we had before. So to summarize I hope I convinced you that networks are a very useful abstraction that I hope you remember. Nodes, those are the things in your network. Edges, those are the ones that connect your things. So proteins and interactions typically. You heard a lot about the String database. It's a database of protein networks. It has a whole suite of related database resources around it. Subcellular localization, tissue expression, disease associations. And all these resources, string and its sister resources, are made by integrating heterogeneous data from a lot of different places. They use text mining. All of them rely heavily on mining information out of the text because no matter how many databases you import, you're not going to capture everything. There's so much written in the literature that is not in the databases. I've told you a bit about Cytoscape. I think the best way of learning Cytoscape is to do hands-on exercises. So that's basically what we'll be doing the rest of the day. It's a network tool. It has apps that you can plug in like the String app that is really useful for working with string networks inside Cytoscape. You have Omix visualization. We can do a lot of things visualizing the data on string networks both using the core functionality of cytoscape and using omics visualizer so with that i just want to really thank a lot of people behind this work so i already mentioned several times the string database it's a huge collaboration it's been running for a long long time it all started in peer box group at the empl where both christian van Meering and I was in his group at the same time. Christian then started his own lab in Zurich. I started my lab in Copenhagen. We're now working together as a consortium, still all of us involved in doing the string database. Lots of people from the groups have been working on this. I want to particularly highlight Damian, who was one of my first PhD students and now working in Christian's lab since quite a few years. Really string being his core activity. He's been the mastermind behind the Authology transfer and many other parts of string over the years. I mentioned Helen who did the string viruses which is certainly a very timely thing now. Michael Kuhn was the mastermind behind Stitch if you're interested in small molecule compounds. David Lyon whom I think might still be online has been doing was first a postdoc in my group and is now also in Christian's group in Zurich. You begin to see a pattern here, it seems. Done lots of things, especially related to the enrichment analysis functionality. On the text mining front, Alexander Junge in my group has done a lot of work. Current postdoc, Doha Grisa in the group has worked a lot on getting the full text mining in. Katerina, who's here helping today, is working very hard on making what is going to be the next generation of text mining in string, so using BERT transformers and all that fancy stuff. I've been collaborating for many years with Vangelis in Greece, so he's really driving a lot of the early work on text mining together. And Sampo Pusalo is our collaborator on the whole BERT project with Katharina. The knowledge graph work, Sune Frankil was the main reason for this whole thing, starting with the diseases database. Oana has been doing a lot of work in particular on the tissues resource. Alberto also on the tissues resource and on making a knowledge graph of the whole thing. Cytoscape, StringApp, Omics Visualizer, that's all. Nadja doing a ton of work on it, collaborating with Janko Otkin's group, where she was joined between my group and Jan's group. John Scudamore, I mentioned him several times. He's one of the core developers of Cytoscape and heavily involved in both String App and Omics Visualizer. And Mark is the main developer of the Omics Visualizer. So thanks for your attention. Thanks for funding as well, of course. And I'm happy to take some questions. And then after that, I'll dig in and give you a quick demo of what you can do in Cytoscape before we break for lunch and then continue in the afternoon with exercises. Do we have any questions to this? There is a question about could you do things with antibiotic resistance? And maybe the smartest is to actually try to just do a demo with that. So how would I go about if I wanted to make some network of antibiotic resistance in some bacteria? Then the first thing is I need to find literature. So I want to dig out a list of antibiotic resistance proteins from the literature. And for that, I'll start by going to PubMed. Let's see... That gives me 200,000 or so. I need to share the screen. A query for that gives me a list of 200,000 papers. Let's see what should we go for? Streptococcus. Yeah, that's good. We now have 14,000. Let's see. What does that give? No bad idea. So this gives us a bit less than 14,000 papers about Streptococcus antibiotic resistance. I'm just going to copy that. I'm going to jump into Cytoscape and this is a perfect case for illustrating what you can do with the PubMed query. Now I say import network from public databases and when you have the String app installed you can choose under data sources instead of universal database client you can choose String PubMed query. Obviously I don't want a human network now. I want something that would be streptococcus. Let's see which one, paste in this query, ask for let's just take 50 proteins. So now if everything works well, we are going to get a network of proteins from this species that are mentioned a lot in abstracts talking about antibiotic resistance in streptococcus. So it queried PubMed. It found the same number of papers as we found querying PubMed. It should, we are using the API for the same database, so it obviously should do the same. It then goes to my database of pre-computed text mining results, figures out which proteins are mentioned a lot together with this, and retrieves a string network for those proteins. So the list of proteins came out of running text mining but the network came from string. So I hope that answered the question can you use this to make a network for antibiotic resistance genes? Yes you can. You very much can. So what else could you do? We can do things like, let's illustrate the features here. So you see down here you have the node table. In the node table we are seeing the list of proteins that came out from this query. We have the edge table it has the interactions coming from string. The network table you don't really need to worry about. In this one we have a lot of columns. You'll see one thing we have over here is the text mining scores. So that's because we did a PubMed query. Then we have a score that actually is what it was ranked on. That sort of combines how many papers were there in total in PubMed mentioning this gene. How many papers out of my 14,000 from the PubMed query mentioned this gene. I can now take these. I could choose to do a coloring of it. So let's say we want to color these proteins based on how strong text mining evidence I have for these genes being related to antibiotic resistance. So for that, you go to the style, and you want to be in the style notes. related to antibiotic resistance. So for that, you go to the style and you want to be in the style notes. And then first thing, we currently have these colors that are sort of reminiscent of how things look in string. It's called string colors. I turn this off. And now I say I want the fill color And now I say I want the fill color to depend on the text mining score. I then say I want the mapping. You see this pass-through discrete continuous. Since this is a continuous valued score, I want a continuous mapping. And I now get a color gradient that turns the numbers from this score into colors on the network. And you see it's a very skewed distribution. There's really one that has very strong text mining evidence and most of the others are scoring quite low. You can go in and customize these gradients. I could sort of pull the middle color down and maybe get a little bit more color into the network that way. So that's the kind of thing you can do with a PubMed query. So on the import page, what does use smart delimiters refer to? I actually don't remember. Maybe Nadja can explain. I think it's sort of like when you have lists of things instead of having one per line. So it would automatically split them. I think if you have like a comma separated list of identifiers, I think that's what it is. The reason why you need to be able to turn it off is obviously depending on how your identifiers look, splitting things could break your query. So if you're having a clean list where everything is on a separate line, you probably don't want to use smart delimiters. Am I right Nadja? Yes I got something right. I'll note that down as a win. Let's look at something else. We have somewhere here a spreadsheet. So this is the kind of spreadsheet, in fact, it is the one I showed on the slides, where you have uniprots, you have a lot of different log ratios and so on. So let's try to go and get a network for this. So I just select all of these. Edit, copy. I jump into Cytoscape and I say file, import, network from public databases, string, protein query, homo sapiens. Yes. Paste in my whole list of Uniprot identifiers here. Maybe crank up the confidence a little bit. And it doesn't matter whether I have smart delimiters on or not because things are nicely separated on different lines and these are uniprot accession numbers they don't have any spaces or commas or funny things in it that this functionality could break. Click import I get to the disambiguation page there's one ID that could match to several different proteins in string. I just use the first one as the representative. It's a whole set of histones that are nearly identical. I fetch the network, I get a network. Now the next thing I can do is that now I would like to visualize data onto this network. So for that, I want to import table. So import table from file. I have the same Excel table as a tab-delimited file on the desktop on my computer. I'm just importing that one. You now see here basically the same spreadsheet shown inside Cytoscape and the important thing is here I have the UniProt accessions it's marked with a little key up here there's what's the key column for the network since we query it with the UniProt identifiers all of those land in a column called query term so these IDs over here accession numbers over here, are the ones that are in the query term column. So I want to match this key column against that key column. When I import, it may seem like nothing happened. But when we are scrolling all the way to the right, you will see that I now have a bunch of columns. Gene name, peptide, sequence, 5-minute, 10-minute, log ratios etc. So I now got all my data from this spreadsheet into Cytoscape and that means I can now go and say fill color. Let's just edit mappings. Let's say I want to color things based on the 10-minute log ratio and then I want to make it a continuous mapping. And you see it automatically detected that this had both positive and negative values in it. So you sort of want a two-ended gradient. This makes sense, but the colors are really faint. It's hard to see anything. You would typically want to go first, maybe pick a different palette with a bit stronger colors. You could go to, I don't know, this one. Boom, that's more strong. Still doesn't really fix it, right? Next thing we could do is to say change the min and max values. Say let's go, because it just auto scales, it's based on what are the biggest values in those columns. But usually you have some few proteins that are kind of outliers in terms of being very highly regulated compared to the rest. So now if we just go, say, minus three to plus three. OK. OK. Now we're getting somewhere, right? Now you have strongly colored proteins and you can see what's going up, what's going down. The next thing we could do is that we could say up here these little structure images that may look very nice inside the node. When you have a big network you can't really see them anyway and when you're trying to map data onto your network they're kind of disturbing the coloring so let's turn them off. So now things stand out even more clearly. We could try and... I mean the layout here is actually not too bad. We could run an Y-Files organic layout. It usually does a does a better job there you have it that gave you a nicer network Yeah, I think that was that. Very quickly with Omix Visualizer. So you might have noticed here that you have this problem. What if you mean merge it? Can you combine? Okay, there's a question on YouTube whether you can combine two sessions. Can you combine? Okay, there's a question on YouTube whether you can combine two sessions. You can't really combine sessions, but you can combine networks. So the networks have to be in the same session for you to be able to merge them. So let's just do a really quick thing. Of course, I can't merge these two networks in a meaningful way since one is a streptococcus network and one is a human network. But let's make another couple of quick human networks. Import network from public databases. Let's illustrate a disease query. A query for Alzheimer's. Alzheimer's disease. Give me top 100 proteins. 0.8. Give me a big network. 200 proteins. Something like that. And that gave me a network of Alzheimer's disease which is a ridiculous grammar you would expect then we can go import another disease database network from public databases Parkinson's disease let's do the same for that. So now I get a network for Alzheimer's disease, I get a network for Parkinson's disease, I could go merge these networks. Now one little thing if I want to be clever, I could go and say there's this score over here say there's this score over here called disease score and you have the same down in the Parkinson's it's also called disease score of course in one network it refers to Alzheimer's disease in the other network it refers to Parkinson's so that's a little bit confusing. You can rename them. So I'm going to rename this one. Let's just call it ADscore. I can go to the Parkinson's network. I can right-click this one, rename it, call it PDscore now I have these two networks I can take them and go tools this is where we combine networks tools merge networks I can take the union of the Parkinson's disease network and the Alzheimer's disease network Merge them. And now I have a big network. You see I had 200 here, 200 nodes in this network, 200 nodes in this network. The merged network contains 321 nodes because there's of course an overlap in terms of which proteins are involved in both the seasons. By the way you may notice now that it's a big network help where did all my engines go why is it not showing any engines it's because it tries to be fast so when the network is very big and I'm zoomed out like this, it doesn't show me everything. If I say view, graphics details, I tell it I want you to draw everything. You see now the labels are drawn, all the edges are drawn. Life is good. Now one of the tricky things here is that when you're merging networks like this, of course you're just taking the union of them. So I took the union of this network and this network. So if I have a protein that is only in Alzheimer's disease and I have another protein that is only in Parkinson's disease, the merged network will not have an edge between them because the edge wouldn't be in either of the two networks so it's not going to be in the merged network. There are ways of fixing that. It's a bit of a workaround. String, set a string network, string and then first I'm just going to change so you know how we set the Set a string network. String. And then first, I'm just going to change. So you know how we set the confidence when we are importing a network. You can change the confidence afterwards. So what I'm going to do is first crank it up all the way. We're never 100% sure about anything. So I just deleted all the edges. And now if I go lower the confidence the app will go back to the server and retrieve all of the interactions so I can now go in and retrieve interactions between them I can also go all the way down I can say let's give me the biggest radiculogram that I can possibly get here get me all the interactions that we have in string no matter how weak I can say let's give me the biggest radiculogram that I can possibly get here. Get me all the interactions that we have in string, no matter how weak they are. So that's how it looks when you do that. And as you can see, we can handle pretty big networks. You're looking at a network with more than 16,000 edges here. Of course, I can do a layout on that. It's not really going to make things look pretty. I mean there's your ridiculous grammar if you've ever seen one. This is where you want to use things like Cluster Maker. MCL clustering. There's a whole lot of clustering algorithms in Cluster Maker. MCL, by everyone's experience, just works amazingly well for string networks. Well, it works amazingly well in general, but in particular for string networks I would always use that one. Granularity. That's something that says how much we want to cut up the network. I am going to go something like 4 here, because this is really a big network I want to cut up. And then it can take into account that this is a weighted graph. That's really important. That's one of the nice things about MCL. It can handle that the edges are weighted. Tell it the one that matters is the score from the string database. And then I can say create a new cluster of network. And I don't want to restore the intercluster edges. So now it's going to run clustering figure out which proteins are in a cluster together make a new network where it has only the interactions within nodes from the same cluster and that didn't really manage to break apart the hairball. That's the danger of live demos. I should have picked an even higher granularity parameter on this. But you see, it did break it up a little bit. I'm not going, in the interest of time, I'm not going to play around and see how to get a better parameter for it. You could increase the parameter, break it up even more. You could also have filtered on confidence first. In any case, this is it. You might think, how can I, now I have these proteins, how can I select the ones that, hang on, show graphic details. How can I select all the ones that are involved in Alzheimer's disease from this? This gets us to the selection filters. So select, column filter. I can now select select based on is... Where are you? AD score. Now it's selected all the ones that have an AD score. So that's all the ones that were involved in Alzheimer's disease. And this is where you could then go and say set a bypass color make them red for example and when I then click outside to deselect them you'll see all the ones that had an alzheimer's score are now red how are we on time here exercises i am running a bit over time right no I'm not so that 10 minutes any questions any other stuff you would want me to demo in cytoscape Maybe one thing I should have shown you is that we're importing from the database a lot of different properties. You see all these columns called tissue? So those come from the tissue database. And that means that we automatically get tissue expression data in. So if you want to say this network of things that are supposedly involved in Alzheimer's disease, does it look like they're expressed in the nervous system? I could immediately go and say fill color, come on, edit, mapping. So I could say I want the fill color to depend on the tissue nervous system. And there we then have a confidence score that goes from 1 to 5 or 0 to 5 in terms of how sure we are. And now I just colored it based on the expression evidence. And as you see, the vast majority of the proteins got a very dark color meaning that we have strong evidence for expression in the nervous system for pretty much all the stuff involved in Alzheimer's disease. That's what you would expect both because nervous system is a very well studied tissue where we have good expression data but also because obviously proteins involved in Alzheimer's disease you would expect them to be expressed in the nervous system. Of course, one downside to doing this is that you can only show whether things are... You can only show one color, right? When I set the fill color here, I have to choose one tissue. What if I wanted to show multiple tissues? That's where we have some brand new functionality in the Omix Visualizer. So instead of importing a table from a file I can actually import a table into Omix Visualizer from the note table. So I can go and do crazy things like say let's show all the tissues. I'm not saying that's a good idea to do. Just to be clear here, you could select a few of them, but just to show what can be done, I can say import all the tissues into an Omics Visualizer table. I now have them in this other table here. I can now say I want to show that as a pie visualization on the network. Values, continuous mapping, yes. That sounds great. Draw it. And now what I get is that every single node is a little pie shaped heat map, if you will, where the different slices correspond to different tissues with their scores mapped onto them. You obviously should not do this with this many slices, but if you have two, three, four slices that could work. Terima kasih. Thank you. No more questions on YouTube. 1.5 kg kvartal Thank you. Can I show mappings or questions on YouTube? Can I show mapping of compounds to proteins like drug-protein interactions? Yes. Yes, I can. So actually, there's a few things you can do. So let me just go. I think I've messed up this session so bad by now that it would be good to start from a clean slate. There's a few things we can do. One is, by the way, there's also this query panel here where you can query. I always forget it myself. So I fetch a network here. That's an interesting feature. It is doing something a bit funny here. It seems like despite doing... That's a bug in Zytoscape. It seems like despite closing the session and starting a new session, I ended up actually having the bypass being remembered. You can right-click it once you've selected all of them and remove bypass. That's a bug report I should file. Anyway, I have a clean Alzheimer's network now. Let's just lay it out. By the way, when things are a bit too densely packed, layout, node tools, spread it out, super useful feature. Now in the node table I have a lot of information and somewhere in there we have information from illuminatingating the Drugable Genome. So Illuminating the Drugable Genome is a big NIH project that I'm involved in, and I'm involved in the so-called Knowledge Management Central, which is headed by Tudor Oprea at University of New Mexico. And there we have things called target development levels and target families so we can go in and color things based on the target development level which is think like is this something where we have an FDA approved drug is it something where we have a small molecule compound or is it something where it's just reasonably biologically classified but we don't have a compound yet or is it a so-called dark target which is a target where it's from a protein family where we are pretty sure that we should be able to make drugs against it but we know nothing about this protein. So let's take the target development level and this is a good chance to illustrate the discrete mapping. Now you see here you have tclin and tchem. So we can take the ones where we have FDA approved drugs, mark them by blue, take the ones where we have small molecule compounds, but nothing FDA approved, Color them orange. And there you have it. You have drop target information mapped onto the network. So that's one thing you can do. The other thing you can do is you can go use the stitch database. And you can go and do import network from public databases. Do a stitch protein compound query. Query for something like, I don't know, Gleevec, if I can spell, ask for some interaction partners of Gleevec, let's say give me 20. And now I got Imaginib, which is another name for Gleevec. I think this is the correct chemical name for it. Gleevec is a brand name for the drug. Dasatinib, which is a very closely related compound. If you see the structures over here, you will see that those two compounds are very similar. And you have then information about you have then information about the targets. Both the ones that are sort of the approved targets and the ones where there's information relating them to these. I think we are done. Last chance to ask questions. And I think this afternoon... I don't know. Should I keep the YouTube stream running? It's going to be a long one, but I guess YouTube can handle it. Or should we just go to purely run it as the Zoom thing in the afternoon? Will it consider off-target as well? Yes, it will. So when we're looking, so it depends on what you mean. So when you're doing the stitch query, absolutely. We're just looking at do we have binding information or things being mentioned a lot in the literature and so on. It's the same kind of functional associations as you have in a string. It's not just going to give you the approved intended targets. It's definitely going to give you the approved intended targets. It's definitely going to give you things like off-targets too. It's also going to give you things like cytochrome P450 enzymes that are involved in metabolizing the drug. The later missing question here isn't the label option under the style tab meant to's label font size over here. If I use, if I change the label font size to be 18, then they all become bigger. However, if you're not seeing it is probably because you're also not seeing the names being centered. It's probably because you are using string style labels, which are these ones with sort of a little shadow on them and offset relative to the center. If you're using plain Cytoscape labels, then these features will affect it. It can also be really useful over here. You see it's the column called display name from string that is being shown as the label. If you're importing some omics study and your Excel table has a column that contains the gene names that you prefer to use in your study, you can go and change the label from being the display name to be the gene names from the Excel table that you got in with the import table so that the name shown in the figure are consistent with what you show in all your other figures in your paper. Is there a way to map known PTMs No. Not from purely Cytoscape. So if you have your own PTM data and PTM data from elsewhere, you wouldn't have to gather that outside Cytoscape. You can, of course, import both of them into Cytoscape to compare them afterwards on the network, but you can't go in and do it purely in Cytoscape, I believe. Although there is the App Store. It's a massive, there are so many apps in there. So I'm not going to rule out that one of the hundreds of apps is capable of doing what you want to do. I'm just not aware of one that does it. So there was a question... let me see... there's a YouTube question about tips and tricks on how to deal with huge networks. Depends on what you mean by huge. So there's a couple of things with Cytoscape and huge networks. So one thing is of course you have networks and you have visualizations of them. If I import a massive network into Cytoscape, by default it's not going to draw the network because it says that it's too big. There's no point. If you really insist I can do it but it's just going to be slower and gobble up a lot of memory. So you're better off analyzing it without a visualization first. However, if you have truly big networks that are so big that it really doesn't make sense to try to draw them yet, if you know how to program I would highly advise that you do as much of the analysis as you can outside Cytoscape before getting them in. When I say huge networks I'm talking tens of thousands of nodes, hundreds of thousands to millions of edges. When you're looking at that Cytoscape can handle it if your computer is powerful enough. I just don't think it's a good way to work with massive networks. If we're talking big networks, you know, a thousand proteins, lots of nodes, and the challenge is not so much how do you get your computer to handle it, but how do you draw it. Then the trick is really things like clustering to cut it up. That's the key thing there, because you have this big, ridiculous gram and you basically have to chop it up in smaller modules for it to make sense. Absolutely, if you're loading massive networks into Cytoscape, it is going to take a lot of memory. I would not really want to run Cytoscape on a machine with less than 8 gigabytes of RAM working with kind of biggish networks, you would want to have Cytoscape on a machine with less than 8 gigabytes of RAM, working with kind of big-ish networks, you would want to have at least 16. So everything I did here, which you saw, was pretty snappy. That was run on a PC with 16 gigabytes of memory, and I was not using much of it. Right, it would be neat to have, I mean, that's something we could consider having a way of fetching data directly from PTM databases into Omics Visualizer. That could actually be a neat functionality to have. It depends on whether those databases have the necessary APIs to even allow us to do that. I haven't explored it. It would be super neat if one could do it. If you would have a way of automatically querying some of the major databases like Phosphosite Plus and whatever they are called. If you could query those and retrieve PTM data directly from them, that would be neat. For now, what you can do is if they have something like an Excel table you can download, you can download that and then just import it into Omix Visualizer from a file. Good. I think we'll stick with the timetable. We'll stop here. We have one hour lunch break and we'll be back on Zoom with the Cytoscape exercises. Since there won't be more lectures or anything like it in the afternoon, I will end the YouTube stream here. You're welcome to contact me if you have questions. If it's more Cytoscape questions I suggest ask on the Cytoscape help desk. We are monitoring that as well so you will very likely get an answer from us on it but it means that it gets shared with other people so instead of having everybody ask the same question and us write you private emails it's really better that you ask your questions and the answers get out in the open to help other people as well. So thanks, everyone. I'm happy to see so many people were here. So many people were on YouTube as well. And see you back, those of you on the course on zoom in an hour. Enjoy lunch Bye you you you you you you you you you you you | Network Biology: Introduction to STRING and Cytoscape | 11,608 | Lars Juhl Jensen | 20200401 | This lecture and software demo will cover the STRING database of protein interactions, related online database resources, the Cytoscape network analysis framework, and how the Cytoscape stringApp allows these to all be used together for omics data visualization.
This is part of the "Statistical Methods in Bioinformatics" Ph.D. course offered by University of Copenhagen.
0:00 Waiting screen
17:00 Pre-course chat
24:50 STRING lecture
59:21 Break
1:04:30 Text mining lecture
1:16:35 Questions and answers
1:29:35 Break
1:41:20 Hands-on STRING exercises
2:10:26 Cytoscape lecture
2:27:48 Questions, answers, and live demo
3:09:32 The end
Hands-on exercises are available at https://jensenlab.org/training/ | 2024-09-21T17:41:52.290070 |
https://www.youtube.com/watch?v=XcUtFig7HZI | Welcome to this short introduction to the core concept of network-based discovery. Network-based discovery is an example of knowledge discovery, that is, the goal is to discover new knowledge from existing knowledge. In network-based discovery, the goal is typically to discover indirect links between entities. The field falls in the intersection of many other research areas that I've covered in earlier talks, namely text mining, enrichment analysis, knowledge graphs, and machine learning. So let's start with text mining. An early example of network-based discovery is Swanson linking, named after Don Swanson, who discovered a link between fish oil and a certain disease. The idea being that both the disease and fish oil had a relation in the literature to blood viscosity, leading him to propose that fish oil could be used to treat the disease. This is known as literature-based discovery and there are two variants of it. The first is what is called closed search. In this case, you have two different concepts, A and B, and you're looking for a link between them, C. This can be used to explain observations. If you've already seen that there seems to be a relation between A and B, but you don't know why, you can go hunt in the literature for an explanation. The other variant is open search. In open search, you start from one concept A, then link to B and further to C, thereby making new discoveries. As you can probably imagine, there doesn't have to be only one path. One could look for multiple indirect links linking A and C. That is, A might be linked to both B1, B2, and so on, all of which are linked to C. And having multiple such links would obviously strengthen the association. Another way of doing network-based discovery is enrichment analysis. Imagine that you're starting from a disease of interest and you've performed a genome-wide association study or a transcriptomic study. In either case, you end up with a list of interest and you've performed a genome-wide association study or a transcriptomic study. In either case, you end up with a list of genes. You now do gene set enrichment analysis or over-representation analysis to discover pathways that contain many of the genes that you already found to be involved in the disease. That is, you found an indirect link between the disease and the pathway. This is equivalent to doing an open search looking for multiple indirect links. You have A, the disease, being linked to a lot of different Bs, the genes, which are linked to the pathway C. The only difference is that in this case we're not looking in literature,, the links between the disease and the genes come from experimental data, and the links between the genes and the pathways typically come from curated knowledge. So evidence could come from many different sources. So why don't we use knowledge graphs for this? In a knowledge graph, you can have many different types of entities or concepts, and you can have many different types of entities or concepts, and you can have many different types of evidence integrated, including text mining, experimental data, and manually curated knowledge, leading to many different types of links between the many different types of entities. That gives you a graph like this, where you have, for example, genes and anatomy terms and diseases and drugs linked to each other in various ways. You can now go in this graph and also look for longer paths. You don't need to have A linked to C just via B. You could have A linked to B to C to D. The problem with longer paths is that you get a combinatorial explosion and many of these paths may not make sense. For this reason they are generally constrained with so-called meta paths that allow you to follow only the meaningful path. For example you could have a compound, a drug, being linked to one disease which has a gene involved in it which is also involved in another disease thereby indirectly via a longer path linking the compound to the disease. Or you could have the compound belong to a pharmaceutical class in which you already have another compound that is used to treat the disease. Of course, when you look for these longer paths going via many different types of concepts, you need some sort of scoring scheme for ranking them in terms of which are your most reliable predictions. And the obvious way to do that is to turn to machine learning. What is most commonly used in this field is unsupervised learning. Specifically, you can do a random walk within this knowledge graph and have it be constrained by metapath, that way obtaining a metapath embedding. This is a vector representation of every node or entity in the graph in terms of what it is connected to in its neighborhood. You can then use these vectors for doing clustering or for doing link prediction, thereby getting the indirect links that we're trying to get. Another way of casting the problem is as a supervised learning problem, where you can use graph neural networks directly on top of these knowledge graphs to do label prediction. Although a lot of work has been done in this field, there are still many open challenges. One relates to the fact that networks are not perfect. And for that reason, the methods need to be robust to error. And that is difficult to do when you're looking for indirect paths. Because all it takes is one wrong link to produce many wrong predictions. For this reason, it's important to be able to benchmark the methods, which is another challenge. Prediction is, as it's famously stated, very difficult, especially about the future. And the only way of really testing whether these methods are able to make predictions about the future is to do pseudo-prospective tests. That is, we try to make predictions based on what we knew up until some point in the past to see if they can predict what we know now. Unfortunately, this is really difficult to do in practice. It's very difficult to make a clean data set that doesn't leak any information from the period you're trying to predict. But the most difficult problem is how to avoid trivial discoveries. These are high-scoring predictions that will often dominate your output, but which are in principle true, but completely trivial to experts. For example, we may have a drug that is used to treat a disease, and you can trivially predict that another drug that is nearly identical might work for the same disease. This is not interesting to predict. Unfortunately, it's very hard to filter these predictions out. So for that reason, when you use these methods, you have a tendency to have to go through long lists of predictions that are correct but uninteresting to find the few interesting ones. That's all I have to say about network-based discovery. If you want to learn more about the knowledge graph and how they are created and what other things they can be used for, I suggest that you take a look at this presentation next. Thanks for your attention. | Network-based discovery: A short introduction to the core concepts of knowledge discovery in graphs | 436 | Lars Juhl Jensen | 20220509 | A short introduction to the core concepts of network-based discovery, which is discovery of new knowledge from existing networks. The topic lies at the intersection of biomedical text mining, enrichment / overrepresentation analysis, knowledge graphs, and machine learning. The presentation goes through how each of these topics relate to network-based discovery and to each other. Finally, it covers some of the most common open problems why trying to use the methods in reality.
0:00 Introduction: automated discovery of new knowledge from existing knowledge
0:32 Text mining: Swanson linkage, scoring of indirect links, and support from multiple intermediate concepts
1:53 Enrichment analysis: knowledge discovery through enrichment analysis and equivalence to multiple intermediate concepts
2:56 Knowledge graphs: multiple evidence types, longer paths, and meta-path constraints
4:27 Machine learning: unsupervised learning, random walk, meta-path embedding, link prediction, supervised learning, and graph neural networks
5:14 Challenges: robustness to errors, probabilistic graphs, benchmarking, and avoiding trivial discoveries | 2024-09-22T10:43:08.770511 |
https://www.youtube.com/watch?v=5ejPI6QqKBU | Now, I want to talk today about one small thing before we move on to signaling, because it really kind of completes the work that we talked about with respect to trafficking. So I popped this question up last time, and it seemed like there weren't quite enough sort of people leaping to give me an answer. But let's just take a look at the big picture of things, as it's always good to do, because this will also get me to one other topic, which is protein misfolding. So at the end of the day, what really defines where a protein is, what it does, is defined by its sequence. But you always wanna remember that a protein sequence is defined by its messenger. The messenger is defined by the pre-messenger. Yes, there may be splicing events that really cause changes in localization, but the pre-messenger includes the content. And then what defines that is the DNA. There's certain aspects of regulation at the epigenetic level that we don't talk about barely in this course. But I want you to make sure that you realize at the end of the day what the protein is, how it folds is defined originally by the sequence of the DNA all the way along here. The post-translational modifications that we started talking about last time are defined by the protein sequence, which all the way is defined by the DNA. So, so much of protein function. And there's one more aspect of proteins that's defined by the DNA sequence, and that's whether a protein folds well or perhaps in some cases misfolds. And that's the thing I want to talk about very briefly today because I think that this captures the picture. So let's just go over here and write misfolded proteins, which, just like everything else, largely end up being dictated by the DNA. Because whether a protein folds faithfully into a good structure or misfolds can be a function of the protein sequence. So there could be mutations in the protein that ultimately end up that the protein misfolds and forms either a misfolded tertiary structure or even worse, adopts an aggregated form that causes a lot of damage within cells and outsides of cells. So I want to talk just briefly about the processes that we have, it's just one slide, to deal with misfolded proteins. So when a protein is translated, it almost starts folding straight away, especially large proteins. A fair amount of a protein may have already emerged from the ribosome and started folding, even when the whole protein isn't made. The sequence ultimately ends you up with a well-folded protein. But if the protein does not fold fast enough or there is a mistake in this, which might be or incorrect folding, then you will end up with a protein that's partially folded within the context of a cell. We especially encounter misfolded proteins when we are overexpressing proteins in cells, because you're just making one of a type of protein really quickly, and it doesn't have a chance to adopt its faithful structure. So there are proteins within the cell that help sort of protect the folding process early on to allow the protein to have enough time in cingulo, not with a lot of copies of itself around that are misfolded, to adopt a folded structure. And these proteins are called chaperones. I don't know if you guys are familiar with the term chaperone. It was a term that was heavily used in the sort of 18th and 19th century. A chaperone used to be an aunt or someone who you would send out with your beautiful young daughter to chaperone her, to protect her so she didn't get bothered by those mean men out there. So chaperones were, that was the original definition of the chaperone, and it's kind of interesting that the chaperones are now proteins that help folding or protect against misfolding. How do they do this? Generally a protein will fold poorly if it's very very if it's quite hydrophobic and hydrophobic patches are exposed and aggregate so let's say you have a protein and there's a lot of copies, but they're not folded. If you have things that are hydrophobic that would normally end up tucked inside the protein, if the protein hasn't folded in its good time, these will just start to form aggregates sort of associating with each other. It's just a physical phenomenon. If you put something that's got a lot of hydrophobic faces on the outside, this will start forming an aggregated bundle and not a nicely folded protein at all. What the chaperones may do is in part hold the partially folded protein. So let's just think of this big jelly bean as a chaperone until things start to adopt a favorable state. But sometimes it's just too much. The chaperone cannot handle the flux of protein, so the protein ends up being recognized as misfolded, and then it gets tagged as a misfolded protein, and it gets taken to a place in the cell for disposal. So if you're unable to fold, there is a tagging process. And I mentioned it last time, it's a process known as ubiquitination. This is also a post-translational modification, but it's one that occurs on poorly folded proteins, and I'm going to describe to you that system, because the ubiquitination is the flag, the signal, or the tag to take this protein to the great shredder, basically. And so what's a paper shredder? I like the analogy with a paper shredder. So here's a fellow who's got too much in his inbox, so he just sends it straight to the shredder. It's a little bit about too much misfolded protein being made, so instead of sort of waiting to deal with the paperwork, you just send it straight to the shredder. And the proteasome is the cellular shredder that actually breaks proteins up into small chunks and then digests them out. So think of the proteasome. It's a shredder which chops up proteins into mostly into short peptides that are 8 to 14 amino acids in length, fairly small. Short peptides won't cause a problem in aggregation and will then be further digested. Now, if you've got this shredder sitting around in the cell, it's like having a paper shredder on all the time. There's a risk things may end up in there without meaning to be. So for things to be tagged for shredding, they go through what's known as the ubiquitin system. So the first thing to get into that is, and it's only then that proteins are tagged for shredding up or chopping up by the proteasome. So as you can sort of tell by its name, it's got protease function, but it's a large macromolecular protease with lots and lots of subunits that are important to cut that polypeptide into smaller pieces. But because many of your proteins may be partially folded or misfolded, they first have to be unfolded. So the ubiquitin is the signal to send proteins to the proteasome where the second action is protease activity and the first action is unfolding. So what I show you on this picture is the barrel structure of a proteasome. Let me explain the components of it. The red component of the proteasome is a multimeric ring that uses ATP and starts tugging apart the protein that you need to destroy. But it will only do that if the protein becomes labeled for destruction by the ubiquitin system. And I am showing you here a massively simplified version. Let's say this is a misfolded protein. It gets tagged with another protein. It's a really little protein known as ubiquitin. I've shown you the three-dimensional structure here. And using ATP, you end up managing to put a ubiquitin chain on the protein that's going to be destroyed. That is a post-translational modification that is a tagging for destruction. If the protein is not tagged, then it's not going to be chewed up. That makes sense. You don't want to be chopping up proteins in a cell with wild abandon. Once the ubiquitin chain is on here, the protein will bind to the unfoldase part of the proteasome. And with ATP, it will just start tugging the rest of the residual structure apart to thread the protein down into the blue part of the barrel. It's a little hard to see it like this, but it's literally the proteasome are four concentric rings. Let me see, I hope my artwork is gonna be good enough. Well, that's an unfoldase. And so is that. And then in the center, there is a protease. And each of these components is multimeric, having six or seven subunits. So it's a huge structure. It has a sedimentation coefficient of 20s, that entire structure. I don't know if you remember when I talked about ribosomes. They were so big, we didn't tend to talk about them by size. We talked about them by sedimentation coefficient. And the large and small subunits of the ribosome, the eukaryotic one, just to remind you, eukaryotic one, just to remind you, with 40s and 60s. So just remember that S stands for Svedberg. It's a sedimentation coefficient unit. It describes how fast a protein precipitates. So once the protein has been labeled with ubiquitin, it binds to the unfoldase. And then the single strand feeds into the center core, which is two sections of protease. So it's feeding in here. It sees the protease activity. And then it's just short pieces of protein spit out of the proteasome. Once these are really little pieces of peptide, they're readily digested by proteases within the cell. And you can recycle the amino acids. Or you can do other things with these small pieces of peptide They actually end up sometimes being sent for presentation on the surface of the cell by the immune system And you may hear a little bit more about that later. So the the proteasome. Oh, I apologize. This should have been 26 This should have been 26. As a molecular weight that's very large, 2,000 kiloDaltons, that's why we refer to it by its sedimentation coefficient. So this machinery is very important to get rid of misfolded or aggregated proteins to destroy them. Now, are people aware of the sorts of diseases that can result from misfolded proteins? Has anyone been reading the news much about certain types of diseases, particularly in neurobiology? Anyone aware of those? Yeah. Which one? Yes, mad cow. So there are a variety of neurological disorders. A mad cow is one of them. Kreutz, U-T-Z spelled, Jakob. But Alzheimer's disease is another one. PICS disease is another. There are a wide variety of neurological disorders that result from misfolded proteins, both inside the cell and in the extracellular matrix, forming these tangles that are toxic to the neurons, causing them to no longer function, and then resulting in many of these neurological disorders. The ones I've described to you, I've mentioned to you here, I know many of you are familiar with Alzheimer's disease. Mad cow disease is a variant of a particular protein misfolding disease that was first noted in cattle. And they basically just fell down, dropped down, and it was in some cases ascribed to, the contagion with the disease is ascribed not to a virus or to a microorganism, but literally to misfolded proteins causing the formation of more misfolded proteins. So these are all collectively designated as prion diseases. I think you'll have read that term. And it's a particular kind of disease that the infectious agent isn't a living system, not a virus, not a microbe, a fungus, a protozoan, but rather a protein where it's misfolded structure nucleates the formation of more misfolded structure that leads to the disease. So I grew up in England during the years where there was a lot of mad cow disease in England, and even though I'm a vegetarian in the US for 30 years, I can't give blood in the US because I lived in England during the time when there was a lot of mad cow disease and this can be dormant for a long, long time before it suddenly takes over. So there's restrictions on blood donation in certain cases and it's because it's not something you can treat with an antibiotic, you can treat with an antiviral. It's literally traces of badly folded protein that can nucleate the formation of more badly folded protein that can lead to the diseases. There's particular instances of some of these diseases in tribes where there's pretty serious cannibalism and eating your sort of senior relatives' brains was considered to be something, an important act of respect. And there's transfer of some of these prion-type diseases through cannibalism as well. So eating contaminated meat, be it a cow, be it your grandparents, whatever, it's something that actually is a serious transmissible disease. And it's really, in the situations where it can be sort of related back to contaminated meat are one thing. But there are variations in the case of Alzheimer's where the sequence of proteins may dictate that they don't fold well or they're not post-translationally modified properly, so they end up as misfolded proteins. So these are often genetically linked disorders, some of the things like Alzheimer's. And once again, remember, that goes all the way back to the DNA, which might in some cases trigger the misfolded disease. So it's a fascinating area, and there's a tremendous amount to be studied because of the aging population, these diseases are piling up, and that we need to mitigate the causes of the disease and find ways, for example, to slow down. If there are these fibrils of protein that are misfolded, can we maybe inhibit that formation with some kind of small molecule inhibitor to mitigate the symptoms of the disease? So it's a very, very active area because almost every, you know, many, many neurological disorders seem to be coming down to misfolded proteins. So let's move on now to signaling. All right. All right. So we're going to spend two lectures on the remainder of this lecture plus the next lecture. And what I want to do in this lecture is introduce you to some of the paradigms, the nuts and bolts, the mechanics of protein signaling. And then in the next lecture, I'm gonna show you examples of how all the characteristics that we define signaling by get represented in signaling pathways within cells. So I'm gonna give you all the moving parts and then we'll move forward to see how the moving parts might function in a physiological action, such as a response to something particularly scary or as a trigger for the cells to do something different. So let me take you first of all to a cartoon-like image of a cell, and we're going to just take from the very simplest beginning, but then this topic will get quite complex as you see, but that's why I think it's important to reduce the process of protein signaling down to simple aspects of it that we can really recognize even in much more complicated pathways. So in protein cellular signaling, this is a complex system of communication that governs all basic activities of the cell. There are no cells that don't do signaling. Bacteria and eukaryotic cells may do signaling slightly differently, but they still do have an integrated correlated system that's responsible for triggering functions of the cell through a series of discrete steps. So protein signaling can be dissected into three basic steps, where you first of all receive a signal. And we're going to talk about what that signal is. What's the nature of that signal? Is it a small molecule, large molecule? Where is the signal? Where does it act? Then the next step is to transduce the signal. And finally, you have an outcome, which is a response. So we're going to talk about each of these components in order to understand flux through cellular signaling pathways and how they work to give you a rapid response to a necessary signal. All right, so in this cartoon, let's just, for example, think about what if we want to trigger cell division, we might have a signal which is the yellow molecule, small molecule, large molecule, we'll get to that later. There's a cell here where on the surface of the cell is a receptor, and that would be the entity that receives the signal. So in the first step in the process, there's a buildup of a concentration of a signal. And it occupies the receptors on the surface of the cell, and in some cases, inside the cell. We'll talk about a bifurcation there. But really, a lot of cellular signaling is dominated by signals coming from outside the cell. What happens upon this binding event is the transduction. If you bind to something on the outside of the cell, as a consequence, you might have a change on that structure. If it crosses the membrane, you might have a change on the structure. If it crosses the membrane, you might have a change on the same molecule structure that's on the inside of the cell. So that's why it's called transduction. You're transducing a soluble signal from outside, binding that signal to the cell surface receptor, and the cell surface receptor is responding in some way. And there are two principle ways in which we respond to extracellular signals, and we'll cover them both. The next event that might happen is through the change that happens to the intracellular component of the receptor. There might be a change, a binding event, another step occur within the cell. And as a function of that, you get a response, all right? So it's really, thinking it in these three components is a good way to kind of dissect out the beginnings of the complication. And then what we'll be able to do is really start to see what kinds of molecules come in, how are they received, how is the signal transduced? And what's the ultimate outcome with respect to a response? Okay? Everyone okay with that? All right. Now, this is what you have to look forward to. So we give you something with three moving parts, and suddenly we show you something with sort of, you know, 100 moving parts. And cell biologists very, very frequently look at these maps of cells, where what they're looking at with each of these sort of little acronyms or names, all of these are proteins, where they have been mapped out through cell biology and cellular biochemistry to be existing in certain components of the cell. And what has also been mapped out very frequently is who talks to who. So the fact that Jack S might interact with STAT 3.5 and so on, so much of this was worked out through cell biology and biochemistry and also by genetics. So Professor Martin has talked to you about identifying a player in a complex system by genetics that say you have a cell that fails to divide. You might perhaps screen or divides unevenly or has some defect in cell division. You might be able to pick out a particular player. Now, the key thing I want to point out to you with this cell is what's on the outside of the cell across and runs across the membrane and might have the chance, the opportunity, to have both an extracellular receptor and an intracellular function. And those key proteins are things like receptor tyrosine kinases. We're going to talk about all of these in a moment. G protein coupled receptors and various other cell surface receptors. So all of these, anything that spans a membrane, has the opportunity to be an important component of a signaling pathway. Because what you're routinely trying to do is have your signal recognized on the outside of the cell by something that spans the membrane. The signal will bind to that. And then you will have an intracellular response. So that's breaking it down. That's why proteins that are made through the secretory pathway that we talked about in the last lecture that go through that endomembrane system and end up being parked in the plasma membrane are so important. Other proteins that actually get secreted through that pathway are also important. What do you think they may be important for? Let's say you've made a protein within the cell, it goes through all the system, it doesn't stay parked in the cell membrane, it actually gets released from the cell. What might that be doing? Yeah, exactly. So that endomembrane system that I described to you, that pathway is great for making receivers and it's great for making signals. And that's really what can sort of fuel the functions of cells. Okay. So in systems biology, you may have heard this term quite frequently. Systems biology is research that helps underlying, us understand the underlying structure of signaling networks. So lots of people who have common interests in engineering, computational analysis and cell biology might bring in data to allow them to make models of cellular systems to understand flux through signaling pathways. So they may make fundamental measurements about the concentrations of some components within the cell and then try to say, okay, I know based on everything I've measured that this is a dominant pathway for gene regulation and I could control this pathway by sort of different types of interactions. In this cellular system, I also show you another component, which is the nucleus. And when we discuss and describe specific cell signaling networks, in some cases, the signaling network may involve receiving a signal, undergoing a variety of changes in the cytoplasm, but then a change that eventually results in a protein going to the nucleus. And oftentimes those proteins that run into the nucleus are transcription factors that then trigger DNA replication or transcription and then promote activities. So this is how you think about it. When you think of cellular signaling, it's really about what does the signal need to do and what's the pathway that I follow to get there. So all of those are membrane proteins. So now let's look at the canonical aspects of signal transduction. So the first, and I'm going to rely on these little cartoons, but I want both in this lecture and the next to really show you where these recur in so many systems. So for that purpose, I want to talk about the characteristics. So the first critical characteristic is a signal and its specificity. So a signal will be something that comes from outside of the cell. It could be a hormone that's produced in the hypothalamus and sent to another organ. But the most important thing about the signal is that that signal, which binds to a receptor in a cell membrane, is specific for a particular receptor, and the different signal won't bind to the same receptor. You have to have faithful signal specificity to trigger the right function. So if it's a hormone, it's got to be the hormone that you want to trigger the receptor, not a related but different looking structure. If it's a small protein, you want it to be the exact one that binds with high specificity to a receptor. So what that means is, if something is binding, if a small molecule is binding to a protein on the surface of a cell with high specificity and high affinity, it means that even at a low concentration, it will make that binding contact. But all the other small molecules that are around won't crosstalk into triggering that interaction. So we have high specificity, and we gain that specificity through macromolecular interactions just like the ones we talked about within biochemistry. So if we have a small molecule or a protein bind to the receptor, it's making all those hydrogen bonding, electrostatic, non-covalent types of interactions with high specificity so that a low concentration of the signal molecule is efficient for binding to the receptor to trigger the function. The next characteristic is amplification. is amplification. Let's put some lines between these guys. Now with all the signaling pathways that you're going to see, we're going to be looking where in a pathway you get amplification. Very commonly, you might have a response that's just the result of a single molecule binding a single receptor. But at the end of the day, you might want a large response. You might want to make a lot of ATP, or you might want to replicate all of the genome. So you need some kind of amplification where, in a sense, you're turning up the volume on your signal. And you need to do that rapidly. So frequently in signaling pathways, you go through a cascade of reactions where the signal might affect an enzyme. But once you make that enzyme active, it might work on many, many copies of another enzyme. And then each of those may work on even more copies. So that's what I mean by amplification, where at some stage you've generated a molecule that can result in the cascade of a reaction. So we often refer to these as cascades. So if you're Spanish-speaking, cascada, you want to think about a waterfall coming from just a single molecule of water, you're getting a large increase in your signal as a result of amplification. The next feature or characteristic of signaling is feedback. At the end of the day, if you're signaling, I've got to make some ATP, I've got to run out of the woods, I'm getting chased, at a certain stage you need to stop all of the process occurring, right? So feedback is just a negative feedback loop that might slow down some of those steps that are involved in amplification. So for a pathway, you only want the pathway turned on for a prescribed amount of time. And then you want to be able to say, I'm done with that whole pathway. I don't need to keep churning through all those enzymes. It's time to stop that. And that usually occurs through negative feedback. And remember, we talked about negative feedback when we were talking about enzyme-catalyzed pathways. So feedback is very often some kind of negative feedback, which suppresses a series of transformations, perhaps through a product of those transformations acting as an inhibitor on an early step. And then finally, the other component of a signaling network, if you think of signaling networks as electronic structures, you have integration. So that's the last characteristic feature. Let me go back to that big circuit diagram quickly to show you an example of integration. So if you look at this signaling pathway, all these signaling steps are not single. You just have a signal come in and you end up, for example, in the nucleus, but rather other components may have crosstalk within one pathway and start either amplifying or turning down a particular signaling pathway. So these are networks. They're not pathways. They're networks that interact and communicate all to amplify signals or turn down signals. So integration is an important part of signaling because you're often dealing with the integrated function of a number of pathways to get a particular response. And that actually ends up being one of the situations where sometimes a particular enzyme may look like a perfect target for a therapeutic agent, but if you don't take into account the integration steps, you may think you're dealing with a single pathway, but you're rather dealing with cross-talk with a lot of other pathways. And what often happens in a cell is there's compensation from other pathways. Is everybody following? Any questions here about this? So what I want you to think about is that it's just amazing what is orchestrated to have even the simplest functions in the cell. How many interacting components there may be. Okay, specificity, amplification, feedback, and integration. All right, so let's talk briefly about types of signals and how we name them, where they come from, in order to make sure we're all on the same page with respect to the language that's used. Now signals may take different molecular forms. For example, an amino acid or a phospholipid. Just something little. Alternatively, they may be proteins. They may be carbohydrates. They might take different forms in terms of their molecular structure, but we tend to describe signals by where they come from. So what I've shown you here is a picture from the book that just describes how we refer to certain signals. So there are four different terms. Autocrine, juxtacrine, and I'm going to just give you a little hint to how to remember these terms. Parachrite, endocrine. Okay, so these don't tell you anything about the molecule. They tell you about where it's come from, all right? So an autocrine signal is a signal that may come from a cell, but it's signaling to itself. So it may produce a component that's released. So it's producing this through a secretory pathway. It's released and it stays in the vicinity of the cell. So the cell is self signaling. So whenever you see something auto, you just want to say, oh, that means it's coming from the same cell where the signal occurs. Let's move to the next one, which is paracrine. I'm going to talk about drugs. And that's usually from a nearby cell, not a cell that's in contact, definitely a different cell. So paracrine is what we would always call nearby. And endocrine is completely from somewhere else. So perhaps coming through the circulatory system, one cell may release an endocrine signal. It may weave its way through the vascular system and then target a cell. So endocrine is always from a distance. And juxtacrine is the only one that's a little odd. It's really from cells that actually are in contact with each other. So it's not self-signaling within a cell. It's not a cell that's nearby, but pretty close. It's actually physically making a contact. And so that's the last terminology there. So hopefully I can get this calcium wave to show you. This is just a video of juxtacrine signaling. I just want you to sort of keep an eye on things. It's usually a cell. What you're observing here is a dye that lights up in the presence of calcium flux. It's called Fura2. And so when you stare at these for long enough, what you can notice is that when one signal will often come from an adjacent cell right near it. So there are long prostheses, you're not looking at the entire cell, but they're definitely, for example, this little duo down here, they keep signaling to each other and that's a juxtacrine signaling because the cells are in the contact, okay? So that just shows you the difference there. If it was autocrine, you just have a single cell responding. If it's paracrine, they would be at more of a distance to each other. All right? I hope that caught that imagery. This is from a website in the Smith lab at Stanford. OK, and then the last thing, I want to give you an example. There are many, many hormones in the body that undergo endocrine signaling. And so one example I thought I would tell you about, you all know that insulin is made in the pancreas. It's an important hormone for regulating glucose levels. And it actually functions at the muscle level. So insulin is an example of an endocrine signal because it travels a distance from where it's made in the body to where it functions in the body. All right, now, so we've talked about the types of signals. Let's now move to the types of receptors. down the types of receptors. Now we cover both the intracellular and the cell surface receptors. But we really will focus a lot on the cell surface receptors. I just want to give you a clue that not all signaling is cell surface. So what I've shown you here is a cartoon where you see signaling where a signal comes from outside the cell, it goes into the cell and triggers a change. And then the majority of the time, we'll talk about these receptors that are in the plasma membrane, and they have an outside place where the signal binds, and they trigger a response inside. And it's only very specific signals that are able to signal intracellularly, that is to cross the membrane to get inside the cytoplasm to do the triggering. What kinds of molecules can cross the membrane easily? We talked about that before when we talked about getting across that barrier. Yeah, non-polar, okay. So you can look at a, you can stare at a molecule, and if it's very polar or pretty large, it's not gonna be able to sneak through a membrane. So something like a steroid molecule, a large greasy molecule, can definitely make that transition. And so those are the only types of signals that we can really do inside the cell because they can get across the cell. Many, many other signals have to go through this, bind to the outside of a cell and transduce a signal to the inside of the cell. So one very typical signal that can bind to an intracellular receptor is a steroid. So remember when I talked to you about these lipidic molecules, things like testosterone and cortisol, these are very hydrophobic molecules. So they literally can cross from the outside of the cell without a transporter. So for example, the hormone cortisol and when that functions it just there's an amount of it Becomes available for example in the bloodstream it crosses the cell and it binds to an intracellular receptor once it binds to that intracellular receptor this disengages a different kind of chaperone protein that's keeping it stable. Once it's bound, it can then go into the nucleus and trigger transcription. So this is the one example of an intracellular receptor that we'll talk about. I just wanted to show you a little bit about the steroid receptors. These are molecules that are very, these are macromolecules, proteins that are very, have quite a complex structure, but they can literally, and I'll show you the picture at the beginning of the talk next time, they can literally engulf these proteins, so once the steroid is bound to that, it completely changes shape. And that's what enables the change for it to be triggered and sent to the nucleus. Now, the key types of receptors that we'll focus on, though, are the cell surface receptors. And there are three basic classes of molecules that occur in the plasma membrane that are critical for cellular signaling. They are the G protein coupled receptors, the receptor tyrosine kinases, and then you will talk in the lecture 22 about ion channels and how they perform a receptor function. So the membrane proteins, first of all, I want to underscore their importance. They comprise 50% of drug targets, the receptor tyrosine kinases and the G protein coupled receptors. The G protein coupled receptors have this seven transmembrane helix structure, which spans a membrane. This would be the outside of the cell and the inside of the cell, so there's signals going across there. The receptor tyrosine kinases are another important type of receptor. They are dimeric proteins that in the presence of a ligand dimerize and then cause intracellular signaling. Once again, they cross the plasma membrane from the outside to the cytosol. And then lastly, there are the ion channels, which also may cross the plasma membrane. And when you think about these classes of proteins, there's a tremendous amount to be learned with respect to their functions. And they are so important to understand their physical functions in the body because they really represent the place, the nexus where signaling happens in the cell. So I want to briefly show you a picture of a GPCR. It's a seven transmembrane helix structure. You can see it here. There are about 30% of modern drugs actually target the GPCRs. And here I'm just gonna show you the structure of a GPCR. Those are the seven transmembrane helices. If you stretch them out, that's about the width of a membrane. That's typical of a signal that would bind to that kind of receptor. This is a chemokine, it's a small protein receptor. So you can see that structure and how it would go from one side of the membrane to the other. In its bound state. The chemokine binds to the seven-transmembrane helix receptor through kind of a clamping action. The magenta is the chemokine. The blue and the green space-filled parts are actually what holds the chemokine. And if you look at it where the membrane would be, you can see how you can transduce a signal from one side of the membrane to the other by the binding of the magenta molecule to the outside of the cell, to those loops outside the cell. That would have a significant perturbation to the biology and chemistry of what's going on on the inside. So next class we'll talk about pathways that are initiated by these G protein coupled receptors and what that terminology means. you | 20. Cell Signaling 1 – Overview | 2,914 | MIT OpenCourseWare | 20200512 | MIT 7.016 Introductory Biology, Fall 2018
Instructor: Barbara Imperiali
View the complete course: https://ocw.mit.edu/7-016F18
YouTube Playlist: https://www.youtube.com/playlist?list=PLUl4u3cNGP63LmSVIVzy584-ZbjbJ-Y63
After completing the topic of protein trafficking, Professor Imperiali introduces cell signaling. In the first of two lectures on this topic, she covers the paradigms and mechanics of cell signaling.
License: Creative Commons BY-NC-SA
More information at https://ocw.mit.edu/terms
More courses at https://ocw.mit.edu | 2024-09-28T13:04:51.522720 |
https://www.youtube.com/watch?v=mvjXFh4P08I | Okay, here we go. A couple of things. Sorry, I forgot to bring candy, but it'll be on sale next week, so we can probably bring it next week. But walking over, I thought, boy, those of you who are here deserve some candy, but it'll be on sale next week, so we can probably bring it next week. But walking over, I thought, boy, those of you who are here deserve some candy. But I've just tried to sprinkle in a few interesting slides for your benefit. I saw this on the MIT News of the Day, and I thought that was really cool. Who would have thought to turn the giant dome into a Halloween pumpkin? And I was also jealous this morning when my husband got ready to go work in the emergency room and he put on his Star Trek outfit. So I was like, oh, I didn't even have it because I don't usually get to actually have a class on the day of Halloween. So he headed off. I think people are unfortunately going to expect him to be able to really fix things very readily in the emergency room today because he's gonna have all those extra powers that he doesn't usually have. But anyway, anyway, so actually it's kind of a good day for a lecture, Halloween, because we're gonna talk about the fight or flight response which is a great paradigm for cellular signaling. So you're going to see how signaling really works in action, because what one has to think about with respect to cellular signaling is that it's dynamic and transient. and transient. And when we look at the molecular details of the switches that enable dynamics and transient behavior in cells, you're going to see how perfectly adapted they are for these types of responses that have to be carried out in cells or in organs in order to respond to a particular signal rapidly and with a definitive timeframe, and then have that signal then stop once the timeframe has passed. So I really wanna sort of stress to you the characteristics of signaling that can emerge just by knowing about two particular cellular switches, knowing the molecular details of those switches, to really understand then when we look at them in action in a couple of cellular signaling pathways, then we'll see how adapted those signals are. And the great thing about biology is that once you learn a few very specific things, then those often get reused in nature. So the cellular signals that I'm going to describe to you are used again and again in different formats to create different signaling pathways. So it's not at every pathway in every cell in the body has different nuances. It has general paradigms that we can learn about and understand. So the key feature is then to think about the molecular basis of these switches. So last time I was talking to you about cellular signaling. And remember, there's always a signal, a response, and an output. So something happens. It's a signal. It's usually a molecule of some kind outside the cell or able to diffuse into the cell. As a function of that signal, there's a response. So this is molecular. The response, obviously, is biochemical. And the output is biological. So I want you to sort of think about these as we look at pathways. What's really my output at the end of a particular signaling pathway? What was my input? How did it get to the cell? How does it have a dramatic effect on the cell as a whole? How does the timing and action of this effect occur so rapidly? So we talked last time about different types of signals, those that mostly occur in the cytoplasm of the cell with signals that are able to diffuse across the plasma membrane and bind to an intracellular receptor and then cause an action. But really the most important ones for today are going to be the types of receptors that span the membrane. And the reason why these are much more significant, they're more in number, they're more predominant, is if you have a membrane-spanning receptor, you have the opportunity to use signals of very, very different types, small polar molecules, small proteins, lipids, amino acids, carbohydrates. You've got a much larger range of signals than you could possibly have if you restricted yourselves to the type of signals that can cross the cell membrane. Those are very limited to non-polar small molecules that can get across the membrane. The more dominant types of signals are going to be the ones that are outside the cell. They arrive at a cell surface. They bind to a receptor that is transmembrane and transduce a signal from the outside to the inside. So that's an important term here. The process of transducing external information to internal information. So when we started the course, we were really thinking about laying down the cellular membrane as an encased environment where things could occur at high concentrations. You could set up systems that were functional within membranes. But in doing that, within a membrane-encased area, in doing that, you've built a formidable barrier around the cell. So the types of receptors that we'll talk about are those that have adapted to take this external information into the cell and then have a cellular consequence occur. So I'm going to talk to you about two specific types of cellular switches. And they're going to be intracellular. And we're going to be referring back to these because they're going to be important as we dissect a signaling pathway. So the first type, so stand aside a moment, put aside a moment the actual process of a signal binding, the response, both biochemical and biological. Let's look first at the molecular detail of these switches and see how they are adapted to their function. And the first type of cellular switch are what are known as G proteins. The G is because they bind guanidine nucleotides. So that's why they're called G proteins. They're small proteins that bind Gdp or GTP. So this is the guanine nucleotide that has either two phosphates, guanine diphosphate or triphosphate. So there may or may not be a third phosphate here. And the G proteins bind them. And the dynamics of the situation are that when the G proteins are bound to GDP, they are inactive. The switch is off. There's an aspect of the structure and it's very dependent on how many phosphates there are in this structure. But when it's bound to theTP, it's an on switch, and it's active. So this is the molecular basis of one of the switches. They relies on the shape, the conformational dynamics of these small G proteins. and when that shape is quite different if it's bound to guanosine diphosphate or triphosphate. And we'll take a look in a moment at how the structure, the shape changes, the shape shifts upon binding the triphosphate analog. So this is a dynamic interconversion. When the GTP is hydrolyzed, you go back to the GDP-bound state, the off state. And there are a variety of proteins that actually help these processes, which we won't talk about in any detail. The main thing that you want to remember is that when the G proteins are bound to GTP, they're in an on state. GDP, they're in an off state. GDP, they're in an off state. And that's shown in this cartoon. And here I should be able to, if all goes according to plan, show you the structure of a GTP analog bound to a G protein. So let's, this little guy is twirling around. He's settled down a little bit. The key thing you want to look at is where it's magenta and cyan, the structure of the GDP and GTP-bound G protein are very, very similar. But big changes happen in the yellow, which is the GDP-bound form, and the red, which is the GT-bound form. Let me go back again so you can see that one more time. So in the in the GT, GTP bound form a portion of the protein swings around and binds to that third phosphate on the GTP and forms a different shape to the structure. And that's a dynamic change that's responsible for activation. When it's just GDP, that's shorter, there's nothing for that red arm to bind, and so it's much more of a floppy structure. What I want you to notice in that yellow, little yellow portion in the GDP bound form, you actually don't really see where the rest of the protein is. This is because this is a crystal structure. And in the crystal structure, when things are very mobile, you can't even see electron density. It's as if the part of the protein isn't there because it is so dynamic. It's only in the GTP-bound form it forms this tight, compact structure that represents a switch that has been turned on. Does that make sense to everybody? Is everyone good with that? So just that change, that extra phosphate, reaching further to the protein and making an interaction with the protein itself makes a difference in the dynamics of the G protein and the activity here. Now, there are different types of G proteins. And you'll see both types reflected in this lecture. There are small G proteins. And they are monomeric. and they are monomeric. And then there are slightly more complicated G proteins that are trimeric. They have a hetero-trimeric structure, so they have quaternary structure where you have three different proteins as part of the complex. So the other ones are trimeric. as part of the complex. So the other ones are trimeric. And the G protein actually comprises three subunits, where one of them is the important one that binds GDP or GTP. But they're a little bit more complicated. And in the first example, when I talk about a particular response to adrenaline, we're going to see the trimeric G proteins. And because they're trimeric, that means there's three subunits. And the convention is that they get the Greek lettering system. So they are the alpha, beta, and gamma subunits. They've each got their name. They're three independent polypeptide chains, and it's actually the alpha subunit that binds GDP or GTP. So that's one, the formulation of one of the types of switches that we're gonna see when we start to look at a pathway. What do you need to remember here? You need to focus on the fact that in one state, the protein is in an off state. It doesn't kick off a signaling pathway. But in the other state, the protein is a different shape because of binding a loop. Let's just make this a little longer. To that phosphate that's negatively charged to the protein. So that's an on state. And they're very definitive types of structures. Now both of these proteins are intracellular, which means they're part of the response once a signal reaches a cell. They're the things that change, and they're what the signal gets transduced to, the G proteins. Now, there's another type of intracellular switch, which is used very, very frequently in nature. And in fact, it crosses, permeates through all kinds of cellular processes. And this is phosphorylation. So here are the G proteins, which is one. And the other one is phosphorylation. I don't like that short. OK. Now, remember we talked about reactions of proteins that alter their behavior, their properties, their dynamics. So protein phosphorylation, remember, is a post-translational modification, a PTM. It's something that happens to a protein after it has been translated and folded. And the PTMs of the phosphorylation involve whoops, amino acids that have OH groups. So that's the structure of, O minus, to the oxygen on tyrosine on the side chain. So actually, it looks pretty different. It behaves pretty differently. There are two other residues in eukaryotic cells that are commonly phosphorylated. They're the other two that include OH groups. So they are serine and the third one, because I run out of space, but you get the general message, and threonine. So these are the three amino acids that commonly get a phosphate group attached, and they change their properties. They are called kinases. Kinase, the root of the word, is actually to change. So the enzymes that catalyze use GTP, the kinases most commonly use ATP to give up a phosphate to phosphorylate the protein. So there's another substrate in this reaction is ATP. Now when we look at this structure, there's two or three things I really want to call your attention to. Once if we're dealing with this kind of switch, we can go back to the off state by chopping up the GTP and making it GDP again. So that's how to turn the light back off. In the case of the kinases, we've got to do something to go back from this state to the non-modified state to turn the light switch off. So for every transformation in the cell that involves a kinase, there is a corresponding set of enzymes that reverse the reaction called a phosphatase. It takes that group off again. So let me write that down here. the phosphatase, it takes that group off again. So let me write that down here. Now, phosphates used a lot. So this is a phosphoprotein phosphatase. So kinase puts the phosphate on. The phosphoprotein phosphatase takes the phosphate off. There are three types of amino acids that get most commonly modified in our cells, tyrosine, serine, and threonine. And one of the ones that forms an important part of an extracellular signaling mechanism are the tyrosine kinases. And we'll delve into them in a little bit in the next sec, not the section I'm going to cover now, but later, because tyrosine, various kinases come in a lot of different flavors. The common flavors are whether you modify tyrosine or threonine serine, because these are more similar to each other, and this guy's different. But we'll get to that later. So in the cell, we have about 20,000 genes that encode proteins, encoding genes. All right, 515 of those are kinases. That's a pretty big chunk of the genome you've got to accept. So in excess of 500 are kinases, specifically protein kinases, the ones that modify. So there's a big hunk of the genome dedicated to this kind of activity. And it's dynamic because there's also about 100 phosphatases. They're a little bit more promiscuous. You don't need so many of them. But a large component of the genome is responsible for phosphorylating proteins and dephosphorylating phosphoproteins. And that part of the genome has its own special name. Let's see. I don't know. And it is called the kinome. I hope that's not too small in there. So if we're describing all the enzymes in the genome that catalyze phosphorylation, we would call it the kinome as a collective set, because it's the set of kinases. And you'll hear that term quite commonly. And the kinome is really important and represents major, major therapeutic targets, because it's when kinases go wrong that we have physiological defects. So let's just go back to this. You can see we've got the kinase and the phosphatase. The donor for phosphorylation is ATP, and it is the gamma phosphate that's transferred to the protein to switch it from the off state to the on state. It is a post-translational modification, meaning it occurs on a protein after the protein has been fully translated. There are a few co-translational PTMs, that seems like a bit of an oxymoron, co-translational modifications, but phosphorylation is one of them, glycosylation is, and we won't go into those in any detail, even though it breaks my heart not to go into those, those in any detail, even though it breaks my heart not to go into those. But OK. All right, so now I want to first of all introduce you to a paradigm for signaling as opposed to really go into what's happening. And so one of the first paradigms is a situation where you have a cell. In the plasma membrane of that cell is a receptor, and it gets hit with a signal. So a signaling paradigm is that a molecule from outside the cell binds to something that's transmembrane, and then you start getting signal transduction through a pathway. So any extracellular signal could be game in this process. And then there's a sequence of events upon signal binding. There's a sequence of changes that ends you up with a final output. And generally, signaling pathways go through a number of steps where there is the opportunity for the amplification of a signal. So I talked to you last time about some of the hallmarks of signaling. Specificity defines how accurately that extracellular signal binds to the receptor. But amplification really refers to how signals get bigger and bigger through certain steps in a signaling pathway in order to have a big impact in the cell, not just a single event going through a single pathway, one molecule at a time. And we'll see that in the example that I show you. Okay. And then oftentimes when we look at signaling pathways, we care a great deal about what's the first response upon the signal hitting the outside of the cell. So in many signaling pathways, this could be a protein. The receptor could be a protein that is bound to a G protein. And that would be the first responder through the pathway that really triggers off the cellular events. OK, and now, so what I want to talk about now is the, I always get this wrong. I always thought it was flight or fight or fight or whatever, fright. I always thought it was flight or fright, but it's not. The response is actually the fight or flight. The response is actually the fight or flight. So let's set this up to understand why this is such a great manifestation of a cellular signaling response, because it includes a lot of the hallmarks that are really characteristic of the cellular response. So this response involves a cellular receptor that is called a G protein coupled receptor. And we saw a little bit about them last time. They are always called GPCRs for short. And what that term means is that it's a receptor that's linked in some way to a G protein. So it could be coupled to a monomeric or a trimeric G protein. So don't confuse the two. One is the receptor. It's transmembrane. It's responsible for transducing, receiving signals and transducing them. The first responder is the G protein that changes from a GDP-bound state to a GTP-bound state. And in the one that we're going to talk about, we're going to deal with a trimeric G protein in the fight or flight response. And what you see here is a cartoon of the players that are involved. So remember the G proteins? We talked about them briefly last time. They have seven transmembrane helices. They span the membrane. They have the N terminal side. Whoops. One, two, three, four, five, six, seven. N-terminus out, C-terminus in. And each of these is a transmembrane helix. And this would be outside the cell. This would be in the cytoplasm. And you can actually often look at a transmembrane protein and know its behavior, because the width of these transmembrane helices often comes in at approximately 40 angstrom, which is the span of a membrane. You can sort of say, that looks like a transmembrane helix. It's exactly that dimension to cross a membrane. And the GPCRs in this case would bind to a ligand outside the cell and have a response inside the cell. So those seven transmembrane helices are responding to ligand binding. So let's take a look at this picture because it's almost impossible for me to get it onto the screen. So it binds to a trimeric G protein. Remember, I talked to you about the two different types. The trimeric has an alpha, beta, and gamma subunit quaternary structure. And they're shown here in different colors. The green is the alpha subunit. The magenta is the, or the red is the gamma subunit. And the yellow is the beta subunit. So what happens when the ligand binds to the G protein, there is a reorganization of that G pro, excuse me, when it binds to the G protein coupled receptor, there is a reorganization of those seven transmembrane helices. Last time I identified them to you, when you look at a couple of these, they're actually fairly large loops that grab onto your ligand, and that will translate conformational information through the membrane to the other side where the G proteins are sitting. And in this response, what happens is upon binding the ligand, the alpha subunit leaves the team and goes from the GDP bound state to the GT bound state. So it literally changes its state and changes its mode of association within the cell upon that action. So you can see how nicely we have transduced the ligand binding out here to a pretty discrete cellular event, turning on the switch of the G protein alpha subunit. Is everyone following me there? I know it sort of looks complicated to start with, but you'll see it in action. Okay, so here are the cellular components of the response. So basically this is the kind of response where if you get scared or you feel you're in harm's way, you will trigger this response in order to generate a lot of ATP in order that you can respond, run away, hide, do something very active in order to rapidly respond to a threat of some kind. And this response is triggered by a small molecule. In this case, it's epinephrine or adrenaline. Different names on different sides of the Atlantic, but you all know what adrenaline is. And here's the structure of epinephrine or adrenaline. And it is the signal for the flight or fight response, because it's the small molecule that binds to the extracellular surface of the receptor Changes its shape so that things can happen Intracellularly and it's just one small molecule. Normally that would be charged. It doesn't diffuse across the membrane So it's stuck being on the outside of the membrane. Okay, so this is the signal that triggers the response so if you have to respond to a threat of some kind, you can't stop, sort of go to the fridge, get a big snack, eat it, you know, digest all your food and hope you're going to get energy quickly. What you've got to do is have a response where you can generate energy from your glycogen stores that are in the liver. have a response where you can generate energy from your glycogen stores that are in the liver. So there is a signal comes from the adrenal region, which is the release of adrenaline that goes to the cell surface receptors to trigger the response. And what kind of signal would this be? Would it be paracrine, autocrine, exocrine? What kind of signal would that be? Sorry, endocrine, paracine, what kind of signal would that be? Sorry, endocrine, paracrine, juxtacrine. Do you remember last time? Yeah, endocrine. So it's a response that comes from the kidneys and goes to the liver. So it's going, it's traveling. Autocrine is self, paracrine is near, juxtacrine is cell contact, but any of these hormonal responses are pretty commonly endocrine responses. So what happens once the signal binds? So the specificity in this situation is that the G protein coupled receptor, now shown in pink in very stylized form, but you can count those seven transmembrane domains, will bind exclusively to this GPCR with high specificity. Another signal, another small molecule that looks like it won't bind, because we have to have specificity for the signal. Upon that binding event, it will trigger a change within the cell. And that change within the cell is that the alpha subunit, here you see alpha subunit beta gamma, they're all shown in green. The alpha subunit of the G protein, remember I told you it was a trimeric G protein, where the alpha subunit is the key player. The alpha subunit leaves the team and it exchanges its GDP, alpha subunit is the key player. The alpha subunit leaves the team, and it exchanges its GDP. That's its resting state. Nothing's happening for GTP, which turns it on. So that's the first response. The G protein is responding to the signal from the outside of the cell through the auspices of the G protein coupled receptor to give a change within the cell that's a discrete change of the G protein coupled receptor to give a change within the cell that's a discrete change. OK? Following me so far? So now what we need to do is trigger the remaining biochemical events that are going to get us out of this sticky situation where we need to produce a lot of ATP. So it turns out that the GTP bound form of the alpha subunit can then bind to another enzyme. And that enzyme is adenylate cyclase. So we've bound. We've changed the GDP to GTP. There's a response. And we activate the enzyme known as AC, which you look up here. It's called adenylate cyclase. So this is a messenger within the cell that's now being generated as a response to the signal coming from outside through the GPCR to the alpha subunit of the G protein, which then, in its GTP-bound state, binds adenylate cyclase. OK, is everyone with me? And once that is bound, adenylate cyclase, okay? Is everyone with me? And once that is bound, adenylate cyclase can do its biochemistry. And the biochemistry that adenylate cyclase does is shown down here. Here's ATP. Adenylate cyclase cyclizes ATP. You lose two of the phosphates and you get this molecule known as cyclic AMP, which is a messenger molecule that will propagate information through the cell. All right, once that happens, so now the adenylate cyclase is activated because it's bound to the GTP bound form of the alpha subunit. That means we can make a bunch of cyclic AMP and cyclic AMP is what's known as a second messenger. And that often means it's a common messenger in a lot of pathways. It shows up quite frequently within the wiring of a pathway, and it acts locally to where the pathway is being processed. So once cyclic AMP is formed from a by adenylate cyclase, that then activates an enzyme. It activates protein kinase A. So pKa is a kinase. It's actually a serine threonine kinase. And that then results in certain proteins within the cell becoming phosphorylated to continue propagating our effect. So we have specificity by the adrenaline binding. We have amplification somewhere in this pathway. So I told you that many pathways go through steps where you start amplifying the signal. Where do you think is the first stage in this set of transformations that I've described to you where you start amplifying the information? Think about what each of the events comprises. What is it? What's, you know, is this one binding to one and one event? Or is it one binding to one and we get multiple events? Where is the first step of amplification that's essential? Because it wouldn't do us any good if we make one molecule of ATP at the end of the day. We've gotta make dozens and dozens of molecules of ATP. What's the first event that could be an amplification? Over there. Yeah, when it's made, yes. So this, let's go through them. One binds to one, great. Once one binds to one, one of these is released. It gets converted to one of these. Once one of these is made, adenylate cyclase is an enzyme, so it can make a bunch of cyclic AMP, which can then activate a bunch of protein kinase A, which can then phosphorylate a bunch of cellular protein. So we've got an expansion of our response, all right? So everyone, does that make sense to everyone? So amplification is really important. Feedback is also important. If you ended up needing an EpiPen, So amplification is really important. Feedback is also important. If you ended up needing an EpiPen because you have an allergic response, you might remember that you've got the jitters forever because there's too much firing and action going on. But in the fight or flight response, there's feedback at a certain stage that slows down this entire process. And that feedback actually comes from an enzyme that chomps up the cyclic AMP to make it inactive as a second messenger. So there's feedback in this process. Okay, now what happens within the cell to get us that biological response? This is a sort of a shortened version of what's happening. So epinephrine binds, here we are with the alpha subunit. With cyclic AMP, for each one of these, you might make 20 molecules of cyclic AMP. That would activate many, many pKa's. And then you go through a series of biochemical steps where different enzymes are activated with the overall goal of, in the liver, chewing up glycogen. OK, so glycogen is a pretty impenetrable polymer of carbohydrates. And you need several enzymes to start to break glycogen down to make glucose phosphate. And so these enzymes here of phosphorylase, B kinase, glycogen phosphorylase, all end up converting glycogen into glucose 1-phosphate. So you access your liver stores of stored carbohydrate, which is in a polymeric form, to get a lot of glucose phosphate, which is then hydrolyzed to glucose, which then hits the blood system. And then you can deliver glucose to all the cells to go glycolysis and make ATP. And every glucose molecule, as you know, can really churn out ATP. So what we see in this process is going through the entire dynamics of the system where we've seen specificity, amplification, and feedback. Later on, I'll describe integration to you. Everyone following? The series of steps that go from a molecular messenger to biochemical steps to physiological, biological response. Now I want to just emphasize one quick thing here. I've got a couple of slides I popped in of drug targets. About 45% are receptors in cells. 25% of the entire drug targets are GPCRs. They respond to all kinds of signals, amines, amino acids, lipids, little peptides, proteins, nucleosides, all commonly going through the G protein coupled receptor to give you a similar phenomenon to what I've described to you. And what I think is particularly interesting, this is not, I'm going to post all of these as slides, what I want you to see, this was quite a while back, but it just shows you so many of trademark drugs that target different GPCRs. They're shown here. And what diseases they're used for to treat, and what's the generic name of those drugs. So you can see here many, many diseases have at the heart and soul of the problem different receptors. And these are all G protein coupled receptors that are treated with small molecules that bind to the receptor and often glue it in an inactive state. So it can't then bind to an activating signal and have all the rest of the events occur. There are very few structures of the G protein coupled receptors, but there are some of them. So many of the target G protein coupled receptors can be modeled computationally. And then you can do a lot of work where you actually model the receptor in a membrane environment and start searching for drugs through computational approaches. And I thought a lot of you might be interested in this because this is a really strong axis where bioinformatics, confirmation, and advanced physics in molecular dynamics can be brought to bear on drug discovery when you don't have perfect molecular models of your targets. Okay, so now we're going to move to a different kind of signal. We're going to talk about the receptor tyrosine kinases. All right, so in the receptor tyrosine kinase responses, we can often see very similar paradigms to what I've just shown you. But there is an important distinction, receptor tyrosine kinases. We often call these RTKs. So that's their shorthand. So over here, I described to you different kinases, that we have kinases that modify threonine, serine, and tyrosine. The receptor tyrosine kinase is a subset of tyrosine kinases that form part of a receptor. So if you were to think about various kinases, you would have the serine threonine, and you would have the tyrosine ones. But these would be differentiated into the ones that are part of a membrane protein, the RTKs, and then the ones that are soluble in the cytoplasm. And the serine-threonine ones are most commonly soluble in the cytoplasm. I'm going to focus on the receptor tyrosine kinases, because they do slightly different activities when they signal relative to the GPCRs. So let's once again, this situation, another paradigm where you see a series of events, but with a number of the receptor tyrosine kinase pathways, the ultimate action ends up being in the nucleus, where as a result of an extracellular signal, you get a series of events that ends up with a protein being sent into the nucleus, and that protein may be a transcription factor that binds to a promoter region. And as a result of that, you'll get gene transcription occur. You'll transcribe a gene, make a messenger RNA that will leave the nucleus and cause action within the cell. So this is a little bit different than the other response that was mainly cytoplasmic. OK, so let's take a look at the receptor tyrosine kinases. Receptor tyrosine kinases are proteins that span the membrane, but rather differently from the GPCRs. And they have a domain that's extracellular, just a single transmembrane domain. This is out, this is in, and then they have an intracellular domain. This would be where the ligand binds. This would be how there's some kind of signal transducing. And this would be a kinase domain. So how do we get the information in? When we saw the GPCRs, we saw the ability of those seven TMs to reorganize and send information in the cell. With the receptor tyrosine kinases, it's different. There are regions of the membrane where there are a lot of these proteins. They commonly bind small peptides and protein molecules. And when they're in their activated form, once the small protein binds, the receptor tyrosine kinase forms a dimeric structure. That is, two of these get together only upon ligand binding. They move together once there is a ligand bound. And then what happens is that the tyrosine kinase domains phosphorylate each other. And that's activation in the case of receptor tyrosine kinases. So when the small protein ligand is not around, this is a singleton. It doesn't work on itself. Once this ligand binds, interactions change. You get a dimeric structure where one kinase can phosphorylate what's called in trans the other kinase domain. So it's different from the GPCRs. It's got a different kind of feel to it, but it's still a dynamic transient signal. Let's take a look at this within a cell and see what kinds of responses. And this is in a response to EGF, which is epidermal growth factor. It's a cytokine that promotes cell division, so a lot happens with respect to the action of a cell, not to produce ATP, but now to respond by producing all the elements that enable cells to grow and proliferate. So the epidermal growth factor binds, you get dimerization. You get dimerization. Upon that dimerization, the kinase domain in one structure, in the blue one, phosphorylates the other and vise versa. They phosphorylate each other intermolecularly. Once that has happened, through the auspices of another protein I won't bother you with the name of, this phosphorylated intracellular RTK binds to a small G protein. In this case, it's a monomeric G protein, not one of the trimeric ones, a small one known as Ras. Once that binding event occurs, guess what? Ras gets activated. It's now binding GTP instead of GDP. And then it starts going through a sequence of events where there's a ton of controlled cellular phosphorylation events that result in moving a protein into the nucleus that helps form a transcription complex that results in cellular proliferation. Similar but different series of events, there's still amplification, there's still dynamics. And in this case, it's a lot of phosphorylation events. And what I want to sort of define for you is that many of these pathways are in trouble in diseased states, be it inflammation, neurodegeneration, or cancer. There is a barren behavior of proteins within these pathways that cause them to go wrong, cause cells to proliferate out of control, or undergo bad responses. And that is why these proteins end up being therapeutic targets, like the G protein coupled receptors. OK. So we've seen the characteristics of signaling. We've seen a signal. We've seen amplification. We've seen responses. What I just want to quickly show you is an idea about integration. So here's an idea with two signaling pathways that sort of end up with the same signal outside where you integrate actions through two different signaling pathways to achieve a bigger, different kind of response. So that's that last hallmark of signaling pathways. It's not that every pathway is clean and straight. It has crosstalk with other pathways, and you get amplified or different responses. Tremendously complicated. I want to give you one more term, and then I'll show one table. When these pathways go wrong, it's often because switches get stuck on. So for example, a G protein gets stuck in its GTP bound state or doesn't even need GTP to be activated. Or a tyrosine kinase is stuck, activated. And that's what's called constitutively active, basically meaning it's permanently on. So many of the diseases that are caused by mutations in your genome, not genetic diseases, but mutations in your genes in some particular cells, end up with constitutive activation where you don't need a signal to have a response. And so, for example, cells may proliferate out of control. So that is an important term to know and understand because constitutive activation basically means that a receptor may be active in the absence of a ligand. And I believe this is my last slide. I just wanted to leave you with this. When one thinks of GPCRs, they're tremendous therapeutic targets. The world of kinases is no less important. This scale is in billions of dollars spent on developing molecules that may be curative of diseases that involve dysregulated signaling. And what you see on this, I want to point out two things. I'm going to go in, of course, this thing stopped working at the last minute. But what I want to point out is this particular bar. This represents the billions of dollars spent on protein kinase inhibitors over a five-year period. And it's just escalating and escalating. Similarly, monoclonal antibodies are very important, but the small molecule drugs hold a real dominance. What do these drugs do? They enable you to have small molecules that can go into dysregulated signaling pathways and stop the activity somewhere in the pathway to avoid signals going constantly to the nucleus and turning things on all the time. So both of these types of functions in cellular signaling, ones you want to understand both from a biological perspective, but from a medical perspective. OK. Okay. | 21. Cell Signaling 2 – Examples | 3,063 | MIT OpenCourseWare | 20200512 | MIT 7.016 Introductory Biology, Fall 2018
Instructor: Barbara Imperiali
View the complete course: https://ocw.mit.edu/7-016F18
YouTube Playlist: https://www.youtube.com/playlist?list=PLUl4u3cNGP63LmSVIVzy584-ZbjbJ-Y63
Beginning with the fight or flight response, this Halloween lecture looks in more detail at cellular signaling pathways in action.
License: Creative Commons BY-NC-SA
More information at https://ocw.mit.edu/terms
More courses at https://ocw.mit.edu | 2024-09-28T13:13:08.434481 |
https://www.youtube.com/watch?v=41grFaceQM8 | Well, you know, it's 12.02, Barbara. Why don't we get started? And I know there are folks still logging on. So for those of us who are here, thank you for joining Kansas Center Grand Rounds. I hope all of forward to year's end and hopefully celebrating a better 2021. But we're really very fortunate to have two exceptional speakers today. And I'll start by introducing our first speaker who, frankly, needs no introduction. Dr. Barbara Burtness is a professor of medicine, co-leader of the developmental therapeutics research program, and leader of the head and neck cancer program at Yale. Barbara is internationally known for her leadership in clinical development and research and understanding the biology of head and neck cancer. And among her many accolades, we can add now only in the past month is the principal investigator of the Head & Necks Corps, of which we are just so proud of both Barbara and the entire team being awarded this really coveted and elite grant, which I think Barbara, if I'm not mistaken, there are only two Head & Necks Corps now in the United States. You're the leader of one of them, which is an extraordinary distinction for the people who work in this space at Yale. So Barbara was kind enough to share with us the work she's doing on head and neck cancer. And Barbara, thank you for joining us today. Well, thank you for the invitation and for all the support that's gotten us this far. So what I wanted to do was talk about P53 mutated head and neck cancer, which is something I have a longstanding interest in. And obviously, P53 is a very critical tumor suppressor gene. It's meant to be the cell's way of reacting to cellular stress signals, and among these are many that we know are important in head and neck cancer, so hypoxia, DNA damage, replicative stress. Ideally, in response to these, p53 is activated and promotes the transcription of target genes in domains of cell cycle arrest, DNA repair, apoptosis, and others. However, in head and neck cancer, we're aware that p53 functionally disrupted in the majority. In HPV-associated head and neck cancer, p53 is wild type, but its degradation is fostered by viral oncoproteins. And in HPV-negative head and neck cancer, over 85% have genomic disruption of P53, including in-frame mutations, truncating mutations, and missense mutations. And you can see here that many of these are clustered in the DNA binding domain. And we know that this type of mutation is deleterious for the natural history of head and neck cancer. So this figure comes from a large trial that the Legacy ECOG Cooperative Group ran. Over 500 resected head and neck cancers, all resected to margin negativity, and all offered appropriate risk-based angioman therapy as was standard at the time. And then P53 was sequenced. And you can see here that long-term outcome was worse for those patients who had P53 mutation. And if you classify the mutations as disruptive or non-disruptive, it was worse for those with disruptive mutation. And the definition that was used in this paper for disruptive was a mutation that was either truncating or in the DNA binding domain. So on the basis of these outcome data, we were interested in the ECOG Akron Head and Neck Committee, which I now chair, in studying intensification of therapy for these poor prognosis patients with disruptive P53 mutation. But the first thing we wanted to do was examine how we really should be calling the P53 mutation. So we started with what we called the Poeta rule. So those were the rules from the paper I just showed you. And we compared them to 14 other classifiers that are out there, many of which are based on in silico predictions of disruption, some of which are based on experimental evidence, actually, of the decrease in WAF1 activation for every specific mutation. And then we also examined our Poeta rules augmented with information about splice site mutations. And you can see that this very simple definition of truncating or DNA binding domain actually outperformed in terms of clinical prognosis all of the other indicators. And so in our clinical trial, we moved forward with this Poeta rules plus splice site mutations. And the trial that we're now about halfway through is a randomized phase two trial of postoperative therapy for patients who meet the criteria for radiation, but who have negative margins, don't meet the criteria for chemotherapy. And then we want to ask in those patients with disruptive mutation, do we see an advantage for the addition of platinum that we don't see in other patients? This is supported by an enormous BFSCWIT grant that takes care of all of the sequencing. And we have two investigators who are doing the mutation calling in real time. So continue to support this trial and see this as kind of an important resource in terms of all the sequencing information that we're going to have on top of the clinical outcome. We also have support for a clinical trials planning meeting at the NCI, which is gonna happen in January. The goal of this is to write trials, both for locally advanced and recurrent metastatic disease, identifying promising therapies for p53 mutated cancer. We also want to develop a national infrastructure for the sequencing and mutation calling with a consensus approach that all of the groups within the NCTN will accept. The breakout groups for this have been meeting for about five months now. I can tell you that the focus is very strong on immunotherapy and synthetic lethal strategies, and I'll mention both of those in the remaining minutes of this talk. So head and neck cancer is one of the cancers where it appears that increased tumor mutation burden is predictive of response to immunotherapy. And we know that this is a cancer with a higher number of non-synonymous mutations, particularly in the HPV negative cancers. And in the platinum refractory setting, both for pembrolizumab in this early single-arm trial and for drivalumab in a randomized phase three trial, also in the platinum refractory setting. In both cases, we see that as tumor mutation burden rises, the likelihood of benefit from immunotherapy increases. So working with my long-term collaborator at Fox Chase, Erica Gilemas, we wanted to examine whether or not mutations in not only p53, which is the most commonly mutated tumor suppressor in head and neck cancer, but also CDKN2A, which is mutated in slightly over half of HPV-negative cancers as well, see how these related to DNA damage as reflected in tumor mutation burden with the idea of establishing whether or not p53 mutated cancers would be particularly susceptible or appropriate for study with immunotherapy. We had access to a data set of 1,010 HPV negative cancers that had been profiled at Karis Life Sciences. Their gene panel is about a 600 gene panel. They exclude HPV-associated cancers with standard methods. And then the CDK N2A mutations that we saw were almost invariably truncations or deletions, so we included all of those. But for p53, we were interested, once again, in what's the best way of calling meaningful mutations. So we started with the American College of Medical Genetics variants calling. This included essentially all the p53 mutations that we saw. We then looked for consensus between the ACMG and two other variant calling algorithms, Intervare and Clinvare. We used the International Agency for Research on Cancer guidelines for what was dominant negative or loss of function. We then looked at the variances defined by the Poeta rules that I just alluded to. And then we called out those patients who seem to have gain-of-function mutations, most of which are defined experimentally across a range of publications. And TMB was measured just by counting all non-synonymous missense mutations across the about 1.4 megabases that are included in this panel. This shows you the patient characteristics. So predominantly oral cavity and oropharynx cancers, as we see in the clinic, males outnumbering females. And as you see at the bottom, the number of patients who had P53 mutation by the CARIS was higher than if we looked at the consensus calls or the disruptive call. Gain of function was less than 10% of all of the mutations that we saw in p53. And indeed, it turned out that either p53 or CDKN2A mutation was associated with an increase in TMB. Here we look for a threshold of 15 per megabase as being likely predictive of response to immunotherapy. And you can see that across the board, having both genes mutated was associated with higher TMB than having one or the other. And the only exception here was that those patients with gain-of-function mutations in p53 did not have an increase in tumor mutation burden. So, you know, we concluded that mutation of p53 or CDKN2A is associated with increased tumor mutation burden. This is highest when they're damaging mutations in both genes. And so just to kind of segue to the next part of the talk where I'm gonna talk a little bit more about synthetic lethal strategies, P53 mutated head and neck cancer, I think remains a really important subject for study because it's common, it has a poor prognosis. We still don't, after many decades of people examining this have agents which directly target mutated p53. And so the increasing evidence that synthetic lethal strategies might have promise in these patients has kind of attracted our attention in the lab. And so one of the things that we know about disruptive p53 mutation is that the cell loses the ability to perform cell cycle arrest at the G1S transition. And as a result, it becomes much more dependent on transition at G2M. And so we, I mean, obviously many people have been interested in this across many cancers. But we were interested in examining some of the potential targets that regulate G2M. We know that aurora's increased, and I'll show you a little bit about this. We know that aurora expression is increased in head and neck cancer. And aurora content will go up at the end of G2. Its activity is required to localize CDK1 to the centromere to foster mitotic entry. Aurora also, in addition to its roles in centrosome maturation, it also has the property of activating the CdC25 phosphatase, which removes an inhibitory phosphorylation from CDK1. And on the other hand, it's important to know that that inhibitory phosphorylation is placed by the mitotic checkpoint kinase, WE1. And so both WE1 and Aurora are recognized as upregulated in head and neck cancer. of synthetic lethality in p53 mutated cancers, but they appear to have kind of contradictory or opposing roles. And so the data that I'm going to show you now will try to make the case that by co-treating these cancers with an aurora inhibitor, which will lead to abnormal spindle formation, defective cytokinesis, but by inhibiting the Roro will lose the ability to remove the inhibitory phosphorylation from CDK1, and that will result in cell cycle arrest, that we can counter that by inhibition of WE1 so that phosphorylation isn't placed and accelerate these cells into mitosis where, given the spindle disruption that's been caused by the aurora inhibition, they will be unable to complete a normal mitosis and instead will apoptose or undergo mitotic catastrophe. So it's been recognized that aurora content is increased in the face of loss of p53. And there are a host of publications which demonstrate that increased aurora levels are correlated with poor prognosis. I'll show you some of our work. This is a panel of cell lines that we use in the lab, all of which have either mutated or P53 null status. And you can see that all of them increase the expression of Aurora A relative to either fibroblasts or normal epithelial tissue. a aqua assay, an in situ fluorescence assay for aurora that could be applied to tissue microarrays. And so you see here that green is for keratin defines where these head and neck cancer nests are within the tissue core. Blue is for dappy, so that will be your nucleus, and red is for aurora. And in this aurora high cancer, what you can particularly appreciate is the high level of expression of aurora within the nucleus. When we looked at nuclear aurora in the tissue microarray, first for all cases, we saw that high aurora expression was associated with worse survival. This is also true just as a reflection of natural history in those patients who had had no post-operative treatments and never been exposed to any DNA damaging agents. And we were able to show that this was entirely driven by the HPV negative cancers. So on the basis of this, these data we went to millennium and argued that Aurora rate for aurora monotherapy in head and neck cancer turned out to be about 9%. And given the increasing experimental evidence that aurora inhibition may always be intrinsically limited by this kind of compensatory cell cycle arrest. We were then interested in what would be a rational combination with Aurora A inhibition that could optimize the targeting of what we continued to think was likely to be an important target in this disease. And so Eddie Mendez and colleagues at the University of Washington, together with Del Yarbrough, our former colleague here, had undertaken a functional kynomic screen in p53 mutated head and neck cancer. And actually, Aurora came out of that screen. But another thing that came out was this mitotic checkpoint kinase that I just alluded to, WE1. And people have been interested in the idea that inhibitors of G1 will abrogate the G2 checkpoint. The G1 checkpoint is already abrogated by p53 mutation, and that this might accelerate cell death, particularly in the presence of DNA damage, such as you might generate with cisplatin. in the presence of DNA damage such as you might generate with cisplatin. And they showed in animal models that the V1 inhibitor MK1775, which is now known as adavacertib, was synergistic with platinum in p53 mutated head and neck cancer models. Eddie Mendez then took this forward as a window trial in head and neck cancer. So a small number of patients treated with a davisor tip together with low dose weekly chemotherapy. And you can see here that the majority of patients had some diminution in tumor size and a number of them had rather major pathologic responses. And most intriguingly, you can see that there was evidence of target engagement. And so among those patients who had responses, there was a decrease in phosphorylation of CDK. There was a decrease in phosphohistone H3. And potentially, you could see some increase in gamma H2AX. potentially you could see some increase in gamma H2AX. They also were able to correlate both pathologic and clinical response with the presence of p53 mutation in the HPV negative cancers. And across the board, these p53 mutations are disruptive or deletions. are disruptive or deletions. So we've been exploring whether or not you can combine Aurora A and WE1 inhibition and observe synergy in p53 mutated head and neck cancer. And you see here a picture of Zhang Wuli, my colleague in the lab, who has done the majority of these experiments. And so you'll see MLN, that's the Aurora inhibitor, AZD1775, that's the WE1 inhibitor. And we see synergy in terms of cell viability, soft agar, oncosphere formation. And this was present in two separate HPV-negative head and neck cancer cell lines that bear P53 mutations. Trying to figure out whether or not our guess about the mechanism was correct. You can see here that when you give the Aurora inhibitor, there's a dramatic increase in phosphorylation of CDK1. This happens on a slightly different timeline in the two different cell lines, but seems to be a reproducible phenomenon. And that's abrogated by the addition of the WE1 inhibitor and completely abolished when you give the two together. This results in an increase in the number of mitotic figures that's abnormal to the presence of really only single-digit normal mitotic figures in the presence of the combination. So if you just walk through here, these are the normal mitotic figures. When you give the WE1 inhibitor, you get some disaggregation of chromatin reflected here in the phosphohistone H3 stain. When you give the aurora inhibitor, you get the formation of these multipolar spindles, three to four spindles per cell. And when you give the two together, you get an abnormal catastrophic mitotic figure. We also showed in using an Exin-5 flow and looking for cleaved PARP that there's an increase in apoptosis. And we wanted to compare this to Aurora B inhibition which completely cuts off mitotic entry by abrogating the phosphorylation of histone H3. And there was no synergy between these two agents. And you can see there the lack of phosphohistone H3 and increase in DNA damage. DNA damage. Taking this into xenograft models here at either of two doses of the WE1 inhibitor, the standard dose of the Aurora inhibitor, tumors continued to grow, not too differently from vehicle, but when we gave the two together, there was control of tumor growth and actually statistically significant improvement in survival for the animals. Looking at the tumors under the microscope, when we gave the two agents together, there was a decrease in proliferation reflected in decreased Ki67. There was increased cleaved caspase, and there was a decrease in phosphocdk1 within tissue. And if we did AQUA for phospho-CDK1 and counted the amount of phospho-CDK1 signal in the tumor leading edge, you can see this was dramatically decreased. Alacertib has been a difficult drug to work with in the clinic. It's associated with myelosuppression, and there's been a negative phase three monotherapy trial in lymphoma, and so we were concerned that the development of that agent might not go forward. However, there have been a number of second-generation aurora inhibitors that have come forward, and we've had access to a compound from Tahoe called TAS-1119 recently been acquired by Vitrac and it's going to be called VIC1911 moving forward and once again across a range of p53 mutated cell lines we see dramatic synergy for the two agents once again we see synergy in xenograft models this is confocal microscopy that again shows you the multipolar spindle formation with the use of TAS119 as the aurora inhibitor. But with the cells really arresting in that or becoming quiescent in that multipolar spindle state and as they then attempt to enter mitosis in the presence of both the WE1 inhibitor and the Aurora inhibitor, developing these very catastrophic mitotic phenotypes. And I notice that I'm sort of running out of time here, so I won't march you through this, but the mechanism looks to be identical here as what we saw with L-acertib. Working with Erica Gilemes' lab at Fox Chase, we undertook a high-throughput screen to see if we could find additional partners that would be both hindering and fostering mitotic entry, again, with the attempt to exploit these multiple regulators of G2M. And another hit that appeared very strong was the CHECK1 inhibitor prexacertib. This is an agent that's not really moving forward in the clinic because of its toxicity. But I wanted to show this just because with very low dosing and a single dose, we saw a profound clonogenic survival effects that make us hopeful that with a number of these pairs, one might be able to go to very low doses in the clinic. So TAS 119 has completed two clinical trials. There's a recommended phase two dose. The toxicity seems to be very manageable with diarrhea and eye disorders, probably the most prominent side effects. And so we are moving forward with a window trial in HPV-negative head and neck cancer that will have both an initial dose escalation, looking at the combination of VIC and a DAVA-CERTIB, and then followed by a dose expansion. And that will be part of project two of our head and neck spore. So I wanted to leave a couple of minutes for questions, but I didn't want to end without, first of all, calling out all of the fabulous colleagues who were part of the team that put the head and neck spore in, and then acknowledging all the people whose work I've just talked about, particularly Jongwoo Lee, Janaki Parameswaran, and Teresa Sandoval-Shafer in the lab. So thank you very much. Well Barbara, thank you. That's fabulous work. Congratulations on all of it. And folks can submit questions on the chat box of Zoom. But I wanted to ask, you know, with regard, as you look through the combination of an aurora kinase and WE1 inhibitors, do you have a sense of, first, what might emerge? I mean, even when you're getting response, because of the complexity of those pathways, what might emerge as mechanism resistance that will occur when you have dual inhibition, if any. And then the second question I have is, what do you anticipate will be the toxicity profile or the therapeutic window for the combination clinically? So the WE1 inhibitor has been quite tolerable, because I'm going to take the second question first, because I've already wrestled with that a lot. The WE1 inhibitor has been quite tolerable in the clinic, but when it was combined with PARP inhibition, diarrhea really became the dose-limiting side effect. And so this second-generation Aurora inhibitor did have about a 25% rate of high-grade diarrhea at the recommended phase two dose. So the two things that we're sort of hoping is one, that we'll get away with lower doses as we have in the animal models. And second of all, the diarrhea as a dose limiting toxicity is one of the easier ones to manage. And so that if we're on top of this with an imodium regimen early on, hopefully that will be helpful. In terms of resistance mechanisms, this is not something that we've really gone into with the combination yet, but is well studied for both of the agents independently. And one of the resistance mechanisms to the Aurora agents has been a kind of conformational dependence on the inhibitors. So the inhibitor binds to the activated form of Aurora A. And if you get an adaptive process where the cell just generates more inactive Aurora, the current generation of inhibitors may not work as well. And there is a group, Kevin Shokat's lab, has been developing novel aurora inhibitors that may be more able to bind the inactive confirmation as well. And in terms of we want inhibitors, there is a suggestion that the DNA damage effects of the we want inhibitors may have an S phase could actually up regulate some checkpoints that are earlier in the cell cycle but it's a good question probably something we should devote more effort to yeah well I'm sure it'll it'll definitely emerge as you get samples from your trial. So it's really exciting and congratulations. So I know we're at 1231, Barbara, so why don't we turn now to people can submit questions to Barbara online, but we'll turn now to our second speaker and very fortunate to have another valued member of our faculty speaking. Dr. Elizabeth Klaus is a professor of biostatistics and neurosurgery, focused not only on brain tumors, but also the epidemiology, most notably the genetic epidemiology of these malignancies. She received her MD and PhD from Yale and completed her surgery here in nurse surgery. And through her work, she really has been an international leader in the investigation of the epidemiology of CNS malignancies, most notably serving as the leader of the Meningioma Consortium, the Meningioma Genome Wide Association Study, and also a leader of the Yale Acoustic Neuroma Study. And again, we're so pleased to have talented people who bridge the gap of epidemiology and biology of cancer. And Elizabeth, thank you so much for sharing your work with us today. Thanks very much. Can you see my slides okay? Yes. Great. So I'm gonna talk a little bit about something we've been working on. And I do want to note that this is work done in collaboration with Jeff Townsend's group, who I think you all know very well, and Vincent Canotero, as well as Stephen Gaffney. So despite all the things that we've attempted to do, we still don't know much about risk factors for glioma. And we wanted to take a look and see if there were different methods that we could use to see if we could tease out both environmental and then also another hot topic is sex-specific signatures of glioma causation. So you all know that gliomas are the most common type of malignant brain tumor, accounting for about a third of all brain tumors and the majority of malignant tumors, but they've about a third of all brain tumors and the majority of malignant tumors, but they've proved to be very heterogeneous. And we have not done a great job at identifying risk factors, be they genetic or environmental for glioma. And so we were interested in doing that, particularly in light of the poor outcomes that we see with this group of patients. in light of the poor outcomes that we see with this group of patients. So we do know that there are sex-specific differences in glioma risk and outcome. And the plots I have here are for all gliomas and then glioblastoma or sort of an IDH positive, excuse me, IDH negative tumor and then lower grade gliomas, the males being the blue, tumor and then lower grade gliomas, the males being the blue, the females being the red. And it's interesting in that we see this sex specific difference across the entire age range. So it's a little bit different than we see with, for example, meningioma, where we see the women having greater risk, but the risk difference decreasing once women pass through the menopause. but the risk difference decreasing once women pass through the menopause. Whereas here we see the sex differences for glioma across the age spectrum and across all subtypes. And so that obviously suggests that other mechanisms, in addition to sex hormones, must be behind the difference. Men are at greater risk of being diagnosed with the disease. And again, that's across pretty much all the subtypes. And they also have lower survival in general than for females across all subtypes. So we've looked at this a little bit, and I've been lucky enough to collaborate with a group of individuals called the Glioma International Case Controls Consortium, and that's led by Melissa Bondy, initially at MD Anderson, then at Baylor, now she heads up the epidemiology section at Stanford. But we were able to gather over 10,000 cases and 10,000 controls. And so these are essentially looking at constitutional or germline risk alleles by sex. So if I can draw your attention to the table over here, these are the variants that we found to be significantly different, males versus females, males being the blue and females being the red. So we did certainly find differences at the germline level, but we were also interested in looking at things at the tumor or the somatic level. So sex is a biologic variable. As I mentioned, this is a very hot topic now. We obviously know there are biologic differences between males and females. There's also some thought as to whether there's variation in the prevalence of risk factors, and then also whether there's a difference in sort of a gene by environmental interaction. So for example, and this has long been postulated, but it's really been pretty difficult to prove that males in particular are more likely to be exposed to work like toxins that might be associated with risk. And so that was one of the things we wanted to look at as well and in part why we divided our analysis by sex. So there's two goals and what I'm going to talk about today. We wanted to look at the relative contribution, and this is based on some of the work that I know you've already appreciated with Jeff Townsend, but we're applying it specifically now to glioma. But looking at the relative contribution of cancer cell lineage proliferation and survival of single nucleotide mutations. And we divided our study subjects up by IDH mutation. And as most of you know, IDH mutation is one of the key dividers into the higher and the lower grade tumors and certainly a prognostic factor, as well as a factor in response to treatment. We also wanted to quantify, and this is something that is a little bit new to epidemiology in terms of how we've tried to identify risk exposures. Typically, we've done things like large case control studies where we look at large numbers of people that have the disease, compare them to large numbers of people without the disease, and look at things like questionnaire or work pick exposure, and see if we can figure out differences between the cases and controls. So what we're doing now, and this is sort of an emerging field in cancer epidemiology, is to look at the cosmic cancer mutational signatures in tumors and see if we can then backtrack, match it to possible risk exposures. And one of the things we're hoping to do in the future is to go back to our cohorts and studies for which we collected good occupational data and see if we can match it up to mutational signatures. So the methods I'll talk a little bit about, I am highlighting here Jeff's paper that he had in JNCI two years ago, and I think you've seen some of these sorts of methods applied in particular to actually head and neck cancer. including the Glioma Longitudinal Analysis Consortium, or GLAS, which is led by Roel Verhoek at Jackson Labs, and which I'm also a member of. So these groups have identified the most common genetic changes in primary glioma tumors, including TP53, IDH, EGFR, but the relative importance of these mutations and how they relate to tumor genesis is not well known. So one of the things that we've been working on, and Jeff has been a leader in, is defining this cancer effect size. So this metric of the relative overabundance of variants due to their contributions to survival and division versus what you're actually seeing in the tumor. So we're quantifying the cancer effect size. We're using single nucleotide mutations. And then we basically do a scaled selection coefficient for the different variants. We look at it by sex and by IDH subtype. And so we're trying to get a feel for whether this would help us explain any differences in the glioma risk and outcome that we see by sex. And then we're going to move on to the cosmic mutations. So I won't go into the gory statistical detail. This is drawn from Jeff's paper, but basically you're comparing expected to observed. So expected number of synonymous mutations, and then we're looking at the rate at which the mutations actually occur. The data that we're using here, our whole exome sequencing data from a pretty good size data set in terms of glioma, so about 1,100, and these are all adult patients. There's no pediatric patients in here. And we drew it from the Cancer Genome Atlas Study. And as I mentioned, I know some of you may be aware of what GLASS is. So it's an effort, as I mentioned, led by Roel Verhoek, but which Yale is also a member of, looking at not only the initial tumors, but the tumors over time. So how do they change in terms of their genetic makeup when we do nothing to them, when we do chemotherapy, or when we do radiation or a combination of all of the above? And what changes do we see and what do we learn from that in terms of what we should or should not be doing? And then we also used a lot of data. All of this is readily available off the internet, but we use tissue-specific mutational covariates, and this helped us figure out what sort of mutation rate calculations we should use, gave us a little bit of information about replication timing, and some of the other data sets that are listed here. So here's some of the results, just to take you through it a little bit. So I have it divided by tumor type, and it's by sex and by mutation. So the wild type tumors who would be considered the higher grader, primarily the glioblastoma tumors are in the first two rows and the IDH mutant, which would more typically be the lower grade tumors. And then I have males versus females, males versus females. And then there's sort of a cancer effect size here. The blue is non-coding region and the red is coding. So you can see the patterns are quite different for what might be called the low and the high grade. The IDH mutant tumors had few unique recurrent substitutions. All of them were in coding regions, whereas the wild type tumors, and obviously this is in part what makes them so hard to manage, is they exhibited many substitutions, but they were primarily in non-coding regions. So here's another picture, a little busy, but divided once again. The IDH mutant or the lower grade tumors are presented first. The wild types are second, and there's female, male, female, male. And so what we're looking at here is that items that top the list are the most important. The size of the circle that is attached to them measures the prevalence. So there can be kind of this disconnect as to what is important and how frequently it occurs. So we can see that in the low grades, it's pretty much as expected. Previously reported mutations in IDH1 and 2, TP53, some of the other classics were confirmed. But what's interesting is if we go here to the IDH wild type tumors, the most important with respect to cancer effect gene is this low prevalence, right? You can see that the circle that matches up to it is small, not large, like we see for IDH, is this BRAF V600E. So we know that it's important. It turns out that it looks like it's the most important, but obviously it doesn't occur that frequently, but interesting what drives some of these gliomas here. The other thing we looked at is do males and females show the same pattern of what's significantly overburdened? And there were a lot of similarities. The way that we have this broken up here is each panel is a gene. The mutants come first in each panel. And then within each panel, we've got the females and the males. So we did see some differences, although overall, most of the things the males and females showed were similar. But we did see differences in the PI3K pathway. were similar, but we did see differences in the PI3K pathway. So in IDH mutant tumors, the PIK3CA mutations were located in the helical domain for females and the kinase domain for the males. And so that's up here, this panel here. Okay, so it's the mutant and the non-mutant. And then the variants of import also differed by sex for PIK3R1. And so that's interesting in part because we know that the way in which these areas are targeted by various chemotherapies does differ. We looked in the literature and we don't see too much reported on this. We did find a paper by Dan Cahill et al. at Mass General, and although they didn't report it as such, they found something similar, where the females tended to have variations in the helical domain and the males had them in the kinase domain. And so, as I said, although both domains are involved with gliomagenesis. There's differential amounts potentiated by these two regions, and obviously there's different sensitivity to various treatment types depending upon domain. So back to environmental exposure. We have searched, and not just our group, but many groups have searched long and hard for environmental and genetic risk factors for glioma. In terms of genetic risk factors, we have found small numbers of families with high risk, but typically that does not relate to the general population. And so no genetic risk factors really explain a large proportion of inherited risk. And other than high dose radiation, to which not many people thankfully are exposed, we really haven't found much of an association between environmental risk factors and glioma risk. There has been reported a fairly consistent but low effect in inverse association with history of allergy. So the question comes, why haven't we found anything? Is it that there is no association or is it basically statistical power that there's so few cases of glioma relative to other things we've looked at? For example, I started my work with breast cancer and even just using the state of Connecticut as a base, you would have enough cases for a large study. For glioma, that is not true. And also likely a lot of the exposures that we think are causing risk are themselves rare. So one of the things that people have been thinking about doing is there another way to do this now? So now that we have these mutational signatures that are listed in the catalog of somatic mutations in cancer or cosmic, can you use that as a way to match up to exposure, particularly if you have previously obtained environmental or other exposure history in the patients? So we did that here with the 1,100 cases that we mentioned. We grouped exonic SNVs and tried to match them up to what is in COSMIC. And so you know that the COSMIC catalog is rapidly changing. New things are always being added, but we looked at what existed at this point. And obviously, they have previously found a matchup of environmental exposure to signatures, not only in head and neck cancer, but smoking and lung cancer, UV exposure. And so we looked at that for glioma. And so again, here's our slide here, again, broken into IDH mutant, which is the top row, IDH wild type, bottom row, and then females are first column, males are second column. What each of these bar charts relates to is the proportion of our cases for whom the majority seem to be associated with a certain signature. And the overall news is a little bit depressing in the sense that the primary molecular signature identified was age-related mutagenesis. Basically, the older you get, the more at risk you are. But we did find one thing that was quite interesting, particularly in the light of there's such a paucity of risk factors identified for glioma, and that was occupational exposure to something called haloalkanes. Pretty much true across whether you were male or female and whether you were IDH mutant or not. We did find a little greater rate of the signature showing up in the males versus the females, but we certainly saw them in both. And then we also saw, which we haven't quite figured out how to explain yet, these UV light signatures. And it's interesting because glioma has been associated in a number of instances with melanoma and also with the BRAF. So we're trying to sort out whether that has anything to do with why we're just seeing some of those signatures. So haloalkanes are basically used for many industrial and day-to-day purposes. Of interest, they're seen in refrigerants, fire extinguishers, flame retardants. And we thought this was very interesting because there's always sort of been this theory that in some of these occupations, including for firemen, and that has been reported, that there's an increased risk of glioma. And so whether or not this ties things together or not is unclear. So the signature was basically developed by looking at cholangic carcinoma in a group of workers that were exposed, known exposed to haloalkans in Japan. And so essentially they had 111 workers that were exposed, 17 developed what you all know to be a pretty rare cancer. So it was quite unusual that this number of individuals was diagnosed with it. They all were working in printing companies and they all were known to have occupational exposure. And so essentially what they did was they took the tumors from these individuals, looked at the molecular pattern and developed the signature. So that's essentially how the signature was initially identified. And so that's what we're seeing basically in our data. So conclusions here, that the majority of cancer-causing mutations in these gliomas we're seeing primarily as a consequence of endogenous rather than exogenous exposures. We did think it was interesting that different domains of genes in the PI3K pathway were different for males and females. For those of us that have searched long and hard for some of these risk factors for glioma, we are excited that at least potentially there's a new means to try and identify, even if rare, these environmental risk factors. And it's sort of a whole new aspect of glioma that we're looking at. So some of our future directions, we're looking now to partner with colleagues who have worked with us both in the meningioma consortium and the glioma registry, looking at cohorts in which we have a good occupational history. So the San Francisco Bay Area Adult Glioma Study, which is led by Margaret Wrench and John Wienke, they collected extremely detailed occupational history for their cohort, and they have all the tumors. So we're going to try and go back and genotype those tumors and see if we can confirm these associations, which they found with firefighters and glioma. And also they found it with painters as well. And so we're also collecting glioma patients with occupational histories and just sort of throwing it out to people. If you're aware of any firefighters or similarly occupied individuals with glioma, I would love to try and get a cohort together. The other thing that was just sort of luck this past semester, so I teach over at the School of Public Health and everything has been remote. And so as I was meeting via Zoom with one of my students for her final project, And so as I was meeting via Zoom with one of my students for her final project, she revealed that she was actually the principal project director for the Firefighters Cancer Cohort Study. So we're also hoping to pair. I know NIH has a big directive to try and look further at environmental exposures in cancer. So we're hoping that we can partner with some of these folks to look at individuals either living or dead that may have been diagnosed with glioma that we know have this exposure. So thank you all for your time. I wanted to also thank Jeff Townsend, Vincent Canotero, who was a postdoc in Jeff's lab, but now is an assistant professor in biology up the road a little bit at Emanuel College. And I have to thank him. He made all the beautiful pictures. And Steven Gaffney, who also works in Jeff's lab. Thank the various brain tumor associations, including the ABTA and the NBTS, as well as Loglio and a Dutch group called Stop Brain Tumor for their support. And then also thank you for Dr. Roel Verhoek and the Glass Consortium who allowed us access to the data. So happy to take any questions. Elizabeth, thank you. That was a terrific summary of your work. And obviously we'll open it up to questions on the chat line. But let me ask i found it interesting the the observation i guess from asia about the association of halal canes with cholangiocarcinoma you may know there's there's a biologic difference between intrapatic and extrapatic cholangio where extrapatic actually get IDH mutations, but intra don't. Were the cases that they found in Asia, were they extra hepatic? You know, I don't know the answer to that. I gave a similar talk at UCSF, and they mentioned this as well. So we're trying to gain access to some of that information, but I don't know at present um and then um with regard to the finding of the potential differential in mutations within pick 3ca by gender by sex is there an understanding of why those two domains would be different between men and women no and you know we've started to look at that a little bit and we collaborate a bit with Dan Cahill, as I mentioned up at Mass General. So we don't know yet, but he's going to try to take a look into that. He presented the data but didn't note the differences. So he's going to try to take a look and see what that might entail. And then my last question, and this is going to show my naivete and understanding brain tumors, but in sort of the natural history of the low grades, is there an evolution of the somatic events such that they look more like high grades? So it depends. They generally remain quite different. The IDH mutation stays constant throughout. And so that's sort of been one of the issues is what you show up to the party with tends to be what you stay with and that makes it a little bit different to manage them. We didn't find in some of the glass consortium work that we've looked at that really things changed that much whether you gave them treatment or whether you didn't give them treatment. So I was a little bit disheartening, but we're going to try to look a little bit further at that. Yeah. Yeah. You know, judging by the, what you described for the presence of hella alkanes, you could imagine they may be more ubiquitous in our environment than we might otherwise appreciate given all the things they're in. Absolutely. And it doesn't have to just relate to glioma, you know, could relate to lots of different things. Right, right. Well, very interesting. You know, I know we're just about out of time and really appreciate, oh, actually, no, Joe Contessa has a question, forgive me. So Joe's question is high dose radiation therapy delivered to pediatric patients can lead to glioma. Have you found evidence that medical imaging and radiation exposure in this setting is associated? So there's actually and you probably even know of these two studies. There's a big cohort from Australia as well as a second cohort from England. as well as a second cohort from England. And they did find that even exposure to head CTs at an early age in children was associated with a, I mean, it's a very small increase in risk, but a definite increase in risk of both glioma and meningioma. And then anything we looked at, we did find it was a fairly hotly contested topic. We did find exposure to bite wings was associated with an increased risk of meningioma, but that sort of exposure level in terms of dental x-rays generally doesn't exist now. But yeah, in terms of head CTs, the two big cohorts from Australia and England do suggest that. Although, even though the risk is increased, the absolute numbers are small. Of course. And then Antonio Mura asks, great talk. Have you looked at thyroid hormones, thyroid disease and differences between men and women? So we haven't, but one thing that is very interesting and it relates a little bit more to meningioma is a gene that we found and this is a constitutional gene on chromosome 10 we've found to be associated with meningioma, breast, ovarian and also now thyroid tumors. Interesting. Well Elizabeth thank you we are at the top of the hour appreciate both your talk and Barbara's really outstanding work. Thank you for sharing all of it with us. And to everyone who joins us today, thank you for taking the time to join Grand Rounds and we'll see you all again next week. Have a good day. | Yale Cancer Center Grand Rounds | December 1, 2020 | 3,340 | Yale Cancer Center | 20201202 | "Synthetic Lethal Therapy of TP53-mutated Head and Neck Cancer" and "Environmental and Sex- Specific Molecular Signatures of Glioma Causation"
Barbara Burtness, MD and Elizabeth Claus, MD, PhD | 2024-09-29T13:14:36.151359 |
https://www.youtube.com/watch?v=HEclY4gNRp0 | All right. Thanks, Jay. So cool. You can just write whatever you want and Jay will read it. Hopefully someone believes it. Anyway, so I'm just going to get this going. We don't have a lot of time for the intro, but I'll share my screen so we can see that. Right genetic so just go through this quickly so genetic interactions really just mean when you combine two mutations and you get something that's unexpected not additive so it's off the sort of the additive side of the equation. And synthetic lethality is a great example of that, where if you can delete a gene and cells might be perfectly viable because most genes are non-essential. And then you could delete another one in the same phenotype. But when you combine them, you might lead to a lethal phenotype. And that's sort of like a perfect example of synthetic lethality, although each mutation can have some kind of fitness defect. As long as the double mutant has a more extreme phenotype than either of the signals, that's called a genetic interaction. And synthetic lethality is descriptive. So we can explore genetics. We can explore this systematically using genetics and map things on a global scale. And in yeast, you can delete each gene in a diploid format, convert the diploid to a haploid, and ask if the genes are essential. And 80, you know, of the 6,000 yeast genes, 5,000 of them are non-essential and could contribute to synthetic lethal combinations. The essential genes, if you have a partially functional allele, hypomorphic allele, then those alleles or variations like that in those genes can lead to synthetic lethal phenotypes when they combine with other mutations. And the essential genes actually drive a lot of the synthetic lethal interactions on our, our, our global map. So we, Brenda and I got together and generally, and came up with an automated form of yeast genetics so that we could map these things systematically. And then we generated a global map for yeast. And in the end, what happens is you get this wiring diagram where you could see something like mutations. The 19S proteasome genes are essential, but we can have a partial loss of function mutations in any one of these. And they're all synthetic lethal with the APC complex. So we end up connecting these two modules. And this is the sort of the way we think genetics is wired underneath the hood. We also score positive interactions. So you can see this mutations in this cog secretion complex really have no phenotype when we compromise the function of the proteasome. Lee Hartwell articulated two major concepts about synthetic lethality. Lee Hartwell, and this was in the late 80s, early 90s, Lee Hartwell won the Nobel Prize for his description of genes driving the cell cycle that are conserved from yeast to humans. But he came up with this idea that you could take what was a very esoteric concept of a synthetic lethal genetic interaction and translate that into a cancer therapy because tumor cells might be loaded, have a mutational load, and we can take advantage of that and target what is normally a non-essential gene in a wild type cell and turn that into an essential genes through a synthetically lethal interaction, you know, in a tumor. He also articulated that genetic interactions may underlie much of our inherited phenotypes or a significant component. And we think that's true and we're trying to prove that, but we won't talk about that today. So here's the original paper. Actually, it was in the late 90s, 1997, that it was published, and Hartwell articulated that if you could take, you know, what are mutations that are often found in cancer cells, map synthetic lethal genetic interactions that are conserved from yeast up to flies, where you could do those types of experiments at the time, you might be able to then identify synthetic lethal interactions that could be used for cancer therapy. And there is an example of this. This is this PARP inhibitor that will kill BRCA1 cells or BRCA2 cells in a quite sophisticated manner through a synthetic lethal type of interaction. It's a little more complicated because the drug actually traps creates a toxic component, you know, drug-protein combination. And so that might be why this one's particularly effective. Several groups, including Jason Moffitt here, showed that, you know, now with CRISPR, we can actually map essential genes in cancer cells and look for cancer cell line specific vulnerabilities using CRISPR. And of course, the depth map is doing this across hundreds of different cancer cell lines. And you can see that YKT6 is an essential gene. It's past this sort of threshold here, but there's variation even in the fitness associated with disruption of this essential gene in various cancer cell lines. PTAR1 is a non-essential gene. It has a slight fitness defect. And again, you can see this variable fitness. And there's some cell lines where it's absolutely essential. And presumably this is through some kind of synthetic lethal genetic mechanism. With Jason, we've been mapping synthetic lethal interactions in a cancer cell line. We delete a gene and then we do a CRISPR screen and we look for genetic interactions that are specific to that query versus another one. And we're creating a map that looks very much like what we've made in yeast. And this is quite powerful. The difference between this map and the DEP map is that we know the synthetic lethal pairs. In the DEP map data, all you know is that if you delete a gene and it's non-essential in many different cell lines, but it's essential in this cancer cell lines, but it's essential in this cancer cell line that there's some underlying genetic mechanism, but quite likely it's not as simple as a digenic synthetic lethality situation. So we delete B-tar when in here, we don't know what the genetic mechanism is, but I suspect given what we've done in yeast, because we can do the same thing in yeast, delete genes in different cell, different strains that are of yeast that are found in different parts of the world. And we can find genes that are essential in one strain and not another. And when we look at the genetics underlying that, usually it's quite complicated. There might be two, three, and often four modifiers that lead to the synthetic lethal phenotype. So with that, that's just sort of the background I thought might be interesting to this group, since the DEPMAP is sort of cancer-related and synthetic lethality. But with that, I think we've got three incredible speakers here. Jay's going to introduce them, and away we go. Yeah, thank you, Charlie, for that excellent introduction. I'd like to introduce our fourth speaker, Dr. Stephanie Leroux. She's going to represent the clinical translation theme, research theme. She's a medical oncologist, clinical investigator, site lead of gynecology, oncology, and co-director of Beyond Chemotherapy Program at the Bernts' Margaret Cancer Center. As of July 1st, she's a talented associate professor, faculty of medicine at the University of Toronto. Her clinical and academic interests are focused in gynecology, cancer, and early drug development with an emphasis on translational research and have garnered two ASCO merit awards. She received the 2019 Career Development Award from the American Society of Clinical Oncology and the 2021 Young Investigator Award from the University of Toronto Division of Medical Oncology. Internationally, she's a co-chair of the Translational Research Committee and director representing Princess Margaret Consortium at the International Gynecological Cancer Intergroup. She's also a member of the Gynecological Cancer Steering Committees at the U.S. National Institute of Cancer. steering committees at the U.S. National Institute of Cancer. In the last five years, she has over 60 peer-reviewed publications and widely cited journals. And she's a principal investigator, a co-principal investigator in different clinical trials, and has several large-scale peer-reviewed funding opportunities from Canadian Cancer Research Institute, Concord Cancer Foundation, American Society for Clinical Oncology, OICR, Ovarian Cancer Canada, and the Terry Fox Research Institute that cover preclinical and clinical areas of interest. Dr. Leroux, thank you so much for joining us today. And the floor is all yours. Thank you so much for joining us today. The floor is all yours. Thank you so much. So thank you so much for inviting me, and thank you so much for this nice introduction. It's a real pleasure to be there with you today and to focus on synthetic lethality and clinical perspective with a focus here today on the high-grade ovarian cancer. So here my disclosure. So in terms of the high-grade serous ovarian cancer, so every treatment that we develop is usually based on biology. So we need to know about the disease biology to develop new drug. And about high-grade serosuvae in cancer is currently still a lethal disease with an unmet therapeutic need, specifically in the platinum-resistant setting. And what do we know about high-grade serosuvae in cancer is usually characterized by genomic instability and the copy number change. We know that there's some precursor lesion that has been identified as tick lesion that can be there for a few years before having this advanced cancer, which unfortunately is usually diagnosed at late stage with metastatic disease. We also know that there's some metastases that have distinct niche with different chemosensitivity profile. And it's also known to have some chronic chronal heterogeneity and some immune invasion. Unfortunately, in high-grade serosuvae in cancer, we do not know yet how to target the immune system to have the response that was seen with other disease with some immune therapy. So in high-grade serosuvae in cancer, there's a posse of driver mutation that we can target therapeutically. The main mutation that is known for high-grade serosuvaein cancer is the TP-563 mutation, and it's also known for having an early emergence of drug resistance. But recently, with the TGCA, we know that what we call the gene that is involved in the DHR deficiency, that can be mutated or altered in the high-grade serosuavian cancer. As you can see here in the TGCA graph, that there is nearly half of the portion of high-grade serosuavian cancer that can be defined as a monologous repair deficient by BRCA1 or 2 mutation in the germline context. We also have some tumors that are BRCA1 or 2 somatic mutation. There is also some BRCA1 methylation that has been defined, and other mechanisms that can lead to the homologous repair deficiency. So here, the high-grade serosuavian cancer, the new therapeutic, has been trying to target the DNA repair. So the main chemo that we use is a platinum. So the platinum induces some double-st strike break in the DNA. And if you have this homologous repair pathway, you can actually try to repair this double strike break. And you use all the machinery from the homologous repair deficiency. And we can be qualified homologous repair deficient by BRCA mutation, as you can see below. There's also the gene RAF51, which is also important. But here in high-grade serosuvaein cancer, we know that the HR is deficient for nearly half of the high-grade serosuvaein cancer. So here the concept of synthetic lethality, if you are deficient in this specific pathway to repair the DNA because of the cytotoxic agent, you will rely a lot on the alternative DNA repair, which is the single break. And actually here you can target the PARP inhibitor treatment. And it's what have led to the concept of synthetic lethality in high-grade serosuvae in cancer. If you have a hemoglobin repair deficient, and if you target BRCA, and if you target the PARP, you can have this concept of synthetic lethality. And that has been the success of PARP inhibitors. So you have the patient who have a pneumologous repair deficiency, and you target the alternative DNA repair pathway by targeting PARP. And this early stage clinical trial showed benefit of the PARP inhibitor treatment, such as Olaparib. But there is other PARP inhibitor that has been developed. And they initially develop in the early phase one and two with some signal of activity, and then move as maintenance post-platinum-based chemotherapy in the platinum-sensitive recurrence. And now, given the benefit that happens in observing our patient, we have the PARP inhibitor treatment approved in first-line setting. And I will take a few minutes here today to go with you about this potential treatment that now is being integrated in our practice. So in clinic, what do we do now? We have now access to the first-line maintenance for all our patients. Here, I target most the maintenance with a PARP inhibitor treatment, knowing that there's another maintenance that can be approved with bevacizumab, but it's not the topic of the session today. So, what we have been seeing in terms of clinical trial. So, here's a trial that was done in the first line for patients with a high-grade serous or high-grade endometriosis advanced stage, stage three or four. And they were patients with a deleterious BRCA mutation identified either in the tumor or in the germline setting. All patients were treated with a platinum-based chemotherapy and were in response. After response to platinum-based chemotherapy, the patient was randomized to the PARP inhibitor Olaparib or placebo. And the treatment was given for two years. And then they look at the primary endpoint, which was a PFS in this trial. When they look at the PFS, definitely the patient who got the Olaparib have a median PFS that was not reached at the time of this initial presentation versus 13.8 months with the placebo. So definitely here, we have a clinical validation of the concept of synthetic lethality. You have patients with a BRCA mutation who have then the homologous repair deficient. They're responding to the platinum-based chemotherapy, and you target the alternative DNA pathway with a PARP inhibitor treatment. So here, definitely a positive trial. What is interesting is to have a follow-up. Here, the updated PFS analysis was a five-year follow-up. So when your patient was followed for five years, despite they stopped the treatment at two years, you see that the median PFS was significantly in favor of the Olaparib for 56 months versus the placebo at 13.8 months. So definitely a benefit of the PARP inhibitor maintenance for patients who have a BRCA mutation after response to platinum-based chemotherapy. What is interesting is to see that actually this benefit was also seen for patients who were considered with a lower risk of recurrence. So you take the patient who actually have a complete surgery at the complete response post-platinum-based chemotherapy, and still, this patient who have a lower risk of recurrence still benefit definitely of a PARP inhibitor maintenance with a placebo, despite having complete response, have a median PFS of 15 months with a placebo, and it's not yet reached with the olaparib. So definitely, that is now part of a standard of care. But homologous repair deficiency is definitely characterized by BRCA1 and 2 mutation, as expected, but we have seen that the HR deficient can be implicated with other gene or other mechanism. So this trial is a PREMA trial, which assess the same population, but beyond BRCA mutation. So it was patient with a high-grade serous, high-grade endometrioid advanced stage of cancer, which gave a complete response to partial response to platinum-based chemotherapy, and were randomized between the other PARP inhibitor, niraparib, versus placebo. and were randomized between the other PARP inhibitor, niraparib, versus placebo. As part of this trial, the patient was stratified according to the specific HRD status, the homologous repair deficient status, that has been tested with a specific test. The primary endpoint was also the PFS. As part of this trial, in terms of the PFS in the overall population, having nirapib is beneficial with a median PFS of 13.8 months compared to the placebo at 8.2 months. So in all population, there is a benefit of the PARP maintenance. But that differs according to the HRD status, as you can see here. The patient who has an HRD deficient by a BRCA mutation, the hazard ratio was 0.4. So definitely the patient who have a BRCA mutation benefit from a PARP inhibitor maintenance first line. When you have patients who are this HRD deficient by the test, but they're not characterized by the BRCA mutation and actually are BRCA wild type, there is a benefit of hazard ratio of 0.5. So definitely a benefit. And this homologous repair deficient is due to other mechanism than the BRCA mutation. And then you have this group of HR proficient. So it means that the specific HRD score is negative and the patient are characterized as proficient. And here the hazard ratio was 0.68. So still a benefit, but definitely not at the same amount as the other patient characterized by homologous repair deficiency. Another trial that was presented at ASCO last week is the ATENAT trial. The same population again, not focusing on BRCA. It was all patients with advanced ovarian cancer, high-grade, responding to platinum-based chemotherapy. As part of this trial, they look at different maintenance treatment, but here at ASCO, they reported the maintenance with Recaparib, which is another PARP inhibitor, versus placebo. So here, we present that result, and again, the same result. In the overall population, the PFS was in favor of the maintenance recuperative, which is the PARP inhibitor treatment versus placebo. So definitely, again, in a benefit of PARP maintenance. And again, when you look at the population of HRD, which was the same test that was used before, the recuperative median PFS was 28.7 months versus 11 months in the placebo. So definitely now part maintenance is in practice for the first-line treatment in high-grade cell-surveying cancer. But we still have some limitation. As you can see, this trial in the primary endpoint was in all population, but we know that the one who benefits the most are the patients defined as homologous repair deficient, as the biology shows us, because of the concept of synthetic lethality. So the tests that were used as part of this trial are the HRD score, which is based and performed by Myriad, which is a composite score based on the loss of heterozygosity, the telomeric allelic imbalance, and the large-scale state transition. So it's a combined score. And from retrospective study, the score serol has been set at 42. So if you are based on this score, you are proficient or deficient. There's a lot of effort to see if we can improve this test because usually this test is performed on the archival tissue on the initial diagnosis, and that may not be reflecting the current status of the disease. And here's an example of another HRD test that was developed by the Leuven Group, which is an academic laboratory-developed HRD test, which also involved three components. And here they tried to test this new HRD score versus the Myriad score as part of another trial and showed that actually there is overall agreement about 91%. And the Leuven HRD test was slightly more HRD positive patient compared to the Myriad test. But overall, an agreement of 91%, but was still able to actually separate the one who had the best benefit of PARP versus the other one. So a lot of effort currently ongoing to define the HRD. But now we need to think as a clinician of what we will do, because as you can see in the first line, most of our patients now will have a PARP inhibitor treatment as maintenance post-platinum-based chemotherapy. So two concepts that is important. First of all, as you can see, all the trials in the first line setting have a specific duration of the PARP maintenance, either two years or three years. So PARP exposure will not mean necessarily PARP resistance. And that was already proven by a recent study showing that the PARP exposure is not equivalent to the PARP resistance. What we know is that PARP resistance is usually platinum resistance. But we know also that when we give a PAP inhibitor treatment to a patient, you may have a different response to the subsequent chemotherapy. And here's an example. The olaparib maintenance treatment was assessed at the time of recurrence. And at the time of recurrence, patients are usually re-challenged with a platinum-based chemotherapy. And if they didn't have a PAP before, they can have a PAP. And the PAP maintenance and at the time of recurrence, patients are usually re-challenged with a platinum-based chemotherapy and if they didn't have a PAP before, they can have a PAP and the PAP maintenance is until benefit. And actually as part of this trial, it was Olaparib versus placebo. And when patient progress and was re-challenged with a platinum-based chemotherapy, the patient who got placebo got a better response to chemotherapy compared to the one who have Olaparib. Despite the overall survival was in benefit of volaparib, but it showed that when we treat a patient with PARP, the response to subsequent therapy can be different. So that we need to learn more about that. And that is just an example that we have seen in our patient. As you can see here, with our patient, we have the archival tissue. Then the patient unfortunately relapsed several times and then got the prolonged benefit to the PARP inhibitor treatment. And we were able to have the tissue at the time of PARP progression. What we have learned by whole exam sequencing and NRA sequencing, we have seen that on the initial tumor, which was PARP-naive, the patient did not express BRCA, which was not related to a BRCA mutation. And at the time of PARP-resistant, she restored the BRCA function and the expression. In addition, at the time of initial diagnosis, she didn't have a cyclin E overexpression, but at the time of progression, when she began to be pap resistant, she actually has a cyclin E amplification. So definitely with treatment, the tumor evolves, and we will need to target the tumor differently. So if the patient begin to have resistance to treatment, what we will do, and that is currently a lot of investigation and a lot of effort currently. That is just a small example of patients who progress in clinic on PAP inhibitor treatment. So patients got treatment and progress on PAP inhibitor treatment. And before, for all the treatment for ovarian cancer, we distinguish platinum sensitive and platinum resistant. But when patients are treated with PAP, does it mean anything still? So here we enroll patients with platinum sensitive and platinum resistant. All patients got the biopsy at the time of PAP progression. And when they progress on PAP, we re-challenge with a PAP, but combining to an anti-angiogeny to see if we can overcome the resistance. What did we find? We find that actually some patients, despite progressing on PARP, can benefit of a rechallenge with PARP in combination with anti-angiogenic. Some patients benefit, but some do not. And that was interesting as well. You can see here in blue, it was a duration of the prior PARP and then the duration of the PARP rechallenge with an anti-angiogenic. What we have learned is that by sequencing the tissue at the time of PARP progression and sequencing the tissue at the time of the initial diagnosis, chemo-naive and PARP-naive, we see, again, some evolution of the cancer. So here we have seen that at the time of PARP progression, there's some reversion mutation in this HRD gene, including BRCA1 and 2 and RAD51. We have seen some amplification of the cyclini, and we have also seen at the time of PARP progression, a progression of the multidrug efflux pump ABC1B. Interestingly, the patients who have this reversion in HRG have really worse outcomes and do not benefit of a recharge of PARP. And what we have seen as well is that one patient can actually have multiple mechanisms of resistance to the PARP inhibitor treatment. So different roads for the PARP inhibitor resistance, but the same destination for people. So we know that the PARP inhibitor resistance can be related to the restoration of the homologous repair pathway. It can be also an up-regulation of the drug efflux. It can be also a senior transduction pathway, but also now, and it's why it's a huge focus for high-grade psoriasis in cancer, is a replication for protection. So in practice, what we will do, we need to identify this mechanism of resistance in real time, because the tumor, the initial diagnosis may not be reflecting currently what's going on with the patient. We will need to do tumor biopsy, but sometimes can be challenging. Can we develop new tools, for example, secreting tumor DNA? And we will need to redefine our treatment strategy. Can we prevent the resistance when we actually give the PARP? Or if the patient progresses on PARP, can we overcome resistance? Or actually, can we pass by the resistance with all this ADC new development? So that is one of the hypotheses that we will test. And this trial will be open to Princess Margaret hopefully in one or two months' time, is a phase two trial with ovarian cancer where we will take patients to progress on PARP inhibitor treatment or receive prior a PARP inhibitor treatment and progress. And we'll try to see if we can adapt the treatment in a real-time assessment, knowing that the genomic can evolve. So this trial is supported by OICI OVN Cancer Canada, and we will take patients who have high-grade ovarian cancer who previously got a PARP inhibitor treatment, regardless of the platinum status. All patients will undergo tumor biopsy and ctDNA, and they will be rechallenged with a PARP inhibitor treatment with an anti-angiogeny because we know that some can benefit. And then we will adapt based on the genomic sequencing on the tumor biopsy and the CT DNA. If patients have benefit, they will continue the treatment. If they do not benefit, then we need to adapt according to the alteration observed in the tumor or the blood. If there is no mechanism of resistance observed, they do not have sufficient benefit, so we'll add an immune therapy to try to target the microenvironment. If they have a nomologous repair gene reversion or a second overexpression, we need to change our treatment, and we know that the rechangery spark will not be benefit. And this patient will undergo chemotherapy. So this kind of trial is a pilot trial to see if we can adapt the treatment based on the timely assessment. But definitely, we need to overcome resistance. And there's a lot of new DDR target in the DNA repair network. So we will discuss just a few slides after the V1 inhibitor treatment, because we know the DNA repair is also dependent on the cell cycle. And we know also that the high-grade cell-surveying cancer is always B53 abnormal. So there is a lot of target in the DDR that we can look at. The V1 is one. There's also the ATR inhibitor that is currently on development in a selected population or in combination. So just to finish, definitely targeting the cell cycle is important in high-grade serosurveying cancer. And that was a randomized phase 2 trial that our group conducted based on the rationale that it's not synthetic lethality, but it's a methotic catastrophe. When you target all the point of the cell cycle, you can lead to a kind of a synthetic lethality. So high-grade serosurveillance cancer, if P50 is remutated, so you increase your reliance of the G2M checkpoint here. So you can block it by a WE1 inhibitor, for example, the AZD1775. And then you can target the X phase with a gemcitabine, which is an anti-metabolite. So you target all the checkpoint of the cell cycle to go to the methotic catastrophe. So here we did the trial based on this hypothesis in platinum-resistant high-grade psoriasis cancer. We randomized the patient between gemcitabine plus the AZD1775 versus a gemcitabine plus placebo. Here are the clinical results. So with this combination of the W1 inhibitor plus gemcitabine, we have increased the PFS and we have an overall survival benefit with a combination of 11 months versus 7 months with gemcitabine alone. So clinical benefit, but also some potential synergy is term of objective response rate with a combination led to a 23% response rate versus 6% with a gemcitabine. What is also interesting as well, you can see that some patients have prolonged benefit to this combination. So we were able to actually identify potential biomarker of response because all patients that undergo this trial have a baseline biopsy. And you can see here the patient in the group of the WE1 inhibitor plus gemcitabine, the one who had the best response actually were the one who had the sacclinic amplification. So definitely, this hypothesis is that the patient who has a replication stress and has a sacclinic amplification may benefit from this kind of treatment strategy. So in terms of the TECOM message, so definitely the DDR defect causes potential vulnerability for high-grade serosuvae in cancer, but also offers actually a therapeutic opportunity. If there's alteration, you can actually leverage this vulnerability by a new treatment approach. But also, I wanted to show that the synthetic lethality has been validated by the PARP inhibitor treatment, specifically in the HRD population, but it's definitely broader than that. And there's different way to have this synthetic lethality concept. Definitely with the cell cycle and the replication stress is an opportunity. In challenge for clinic, we definitely still have challenge in defining the HR deficiency patient. Who are there? Because we know that the HRD test is not completely accurate. So there's a lot of effort to try to define this homologous repair deficiency in clinic. We know also that we need new treatment option for the patient who are HRP. You can be HRP at the initial diagnosis, but you can also become HRP by different mechanism of resistance. So we need to develop new treatment for overcoming this resistance. And definitely something that we'll follow in the discussion is how we can target this replication stress in ovarian cancer. So I want to conclude by thanking all our patients, as you can see some of them here that participate in clinical trials that allow us to answer some specific questions. And thank you, our research team and our research friends. So thank you. a specific question and thank you, our research team and our research friends. So thank you. Thank you, Dr. Leroux. Our next speaker is Dr. David Gallo, who's going to represent the adaptive oncology team, research team. David completed his PhD at the University of Toronto in the Department of Biochemistry, studying the DNA replication stress response using yeast as a model organism. From there, he moved to Dr. Dan Turoce's lab to complete a postdoc where he uncovered the mechanism of how PKMYT1 loss is synthetically built with CCNE1 overexpression. Currently, he's a senior scientist with Rare Therapeutics, helping guide the PKMYT1 inhibitor RP6306 through clinical trials. Dave, thank you for joining us and the floor is all yours. Hi Jay, yeah, thank you for the kind introduction and the opportunity to present some of the work and how we identify new drug targets from synthetic lethal interactions and how we attempt to translate that into the clinic. So today I'll tell you a little bit about, first I'll start by talking about utilizing and understanding synthetic lethal screens to identify new drug targets. I'll then move into a little bit about how we can expand patient populations of synthetic lethal drug targets using chemogenomics and other screening techniques. And finally, I'll finish off with trying to understand how we can use biom but CRISPR-Cas9 genome editing technology has really revolutionized the way that we do genomics in human cells. So briefly, basically what you need to understand here is that Cas9 is an enzyme that cuts DNA and it binds to synthetic guide RNA. And the synthetic guide RNA will guide the Cas9 enzyme to specific regions of the genome where it introduces the double-strand break. If that double-strand break is repaired in a mutagenic fashion by the non-homologous end-joining pathway, you create indel mutations, which then result in a non-functional RNA, which is decayed. So essentially, what happens is if you target the targeting region of the genome where the SGRNA goes, ends up in a gene deletion, the equivalent of a gene deletion. So what has happened is there's been many synthetic guide RNA libraries that have been created to systematically knock out every gene in the human genome. And so we use this technology to screen for synthetic lethalating cancers. And so basically, this takes the form of a pooled lentiviral dropout screen. So basically, what you'd start with is a synthetic guide RNA library. You introduce this into your cells at a low titer. So you get one guide into the cell. So this library will have three or four guides for each gene in the human genome. So yeah, you transduce at a low titer so that you can make sure you get one guide into each cell. So each cell then harbors a single gene deletion. You grow that out, and then these are all barcoded. So then you can do a barcode next genome sequencing and understand which genes when knocked out are essential in that background okay so that's the idea of a pooled dentivial dropout screen but how do we use this to find synthetic lethality well there's two methods I think think that, you know, two main methods to do this. One would be to take a cancer cell line. So here, shown here is a bunch of different cancer cell lines that are well characterized, and then do your screen in that cancer cell line. An alternative approach is to create an isogenic cell line. So in this case, you take a wild type cell and you introduce a cancer mutation into it, and then you screen both isogenic pairs with the genome wide library to figure out which genes are essential in the cancer cell line, but not in the wild type cell line. And so these two approaches have both been used. However, cancer cell lines are often, have multiple mutations, multiple different copy numbers of genes, which makes it very difficult to do CRISPR sometimes in them. And also it complicates identification of synthetic lethal pairs due to this kind of messy mutations in the background. So the preferred method that we like to use at Repair and many others is to use this isogenic pair method where we use an immortalized non-transformed cell lines so like a human like an h-transform cell line or a mortalized cell line sorry and these cell lines have a lower mutational burden relative diploid ploidy which results in a better gene editing efficiency and it's easier to track back your genetic interactions because ideally you're only looking at one mutation that you've introduced into the isogenic pair. Okay, so to do this, you have to first create a nice isogenic pair, a nice isogenic cell line with your mutation of interest. So keeping in the concept of what Stephanie was explaining, we chose to use cyclin E overexpression as an isogenic model for our initial screens here in this paper. So this has all been published in Nature recently. And so basically what we did is we created a cyclin E overexpressing cells. So we picked two different clones here. And you can see then the first step to this is characterization of the isogenic model. So we can see that indeed they do overexpress cyclin E. They also express markers of replication stress, which is what we'd expect when we overexpress cyclin E. And they have this S phase accumulation in the cell cycle, which is another phenotype of cyclin E overexpression. So we've now got a very nicely characterized isogenic model system to screen and so we carry forward the screen and basically so the screen's done the same way I just explained so basically you take a cancer cell line your wild type cell line you do your screen and at the end of the day after your NGS sequencing of the barcodes, you can then score which genes or which synthetic guides drop out of the mutant cancer cell line, but remain in the wild type cell. So these would be your synthetic lethal interactions that are specific to the cancer cell line. And in this case, it's cyclin E overexpression. Okay, so what does that look like? So basically what we've done here so we use our favorite workhorse cell line which is the RPE1H church. These are all p53 knocked out so we always work in a p53 mutant background and so in this case we've done the screen here with the TKOV3 library which was developed in Jason Moffitt's lab at the Donnelly Center and so what you can see results from the screen here is that we get so what here I'm plotting is two different scoring algorithms to score essentiality and we get this kind of plot here where you can see the synthetic lethal interactions up here in the top right corner. So this is a lot of genes here so we want to think about different ways that we can narrow down this list to a more manageable set of interactions to follow up on. And so to do that, we actually just do more screens because that's what we do. And so in this case, what we've done is we've taken multiple different clones. So to get a nice robust screen, it's good to screen a couple of different clones of your overexpression. And we've screened it with a few different libraries. So a few different iterations of the TKO library. And in this case, now we've narrowed it down to about five genes that look interesting up here. So we can see that these now become more interesting hits in the screens. And from there, you can actually go one step further and start data mining. So we can use DETMAP, which Charlie introduced from the Broad Institute. And so we can do an in silico screen where we actually ask what genes are essential in these cancer cell lines that overexpress cyclin E. So when we look at cyclin E overexpressing genes, PKMET1, cyclin E, and CDK2 come out at top. So we can see that PK-Met1 is a common theme among these genes, or among these screens that we've done. And so this now becomes a very interesting hit, which we ended up following up on. So if you're a biochemist, you look at this and you say, oh, PK-Met1 is a kinase. So that's actually a perfect druggable target. And so I think Richard will explain a little bit more about this part of the drug development process later. So suffice to say, we ended up targeting going after PK-MET1 in this for follow-up. And indeed, we were able to create a potent and selective orally available inhibitor of PK-MET1 called RP6306. So RP6306, yeah, it's a PK-Met1 inhibitor, and we are actually now currently in phase one clinical trials with this drug for cyclin-E amplified tumors. But so now we've got an inhibitor, and we've got a nice inhibitor, and we've got a synthetic lethal interaction, but how can we use screening to expand our patient population? So we're always thinking, what else can we use this drug in? We always want to look for new patients that we think the drug will work well in. So to do this, we actually turn back to more screening, of course. So from here, what we can then do is what's a chemogenomic CRISPR screen where you then use the PKMet1 inhibitor in combination with the screening campaign to look for additional patient populations where loss of function of these genes will then cause sensitivity to the PKMet1 inhibitor. So we can use chemogenomics to do this to expand our patient population from cyclin E to other loss of function mutation. And so by doing this with the PKMET1 inhibitors, so we did an RP6-0-6 chemogenomics screen, we actually uncovered a loss of function of FBXW7 to be another synthetic lethal interaction with PKMET1 inhibition. And when you look at this, it actually makes sense because FBXW7 is the adapter that guides cyclin E for degradation. So when you lose FBXW7, you get accumulation of cyclin E protein. So this makes sense at the mechanistic level as well. So that's one method to do a chemogenomics screen. So, however, there's lots of genetic space available for chemogenomic screens. And so when we're doing CRISPR and activating screens, we're really looking at loss of function and activation of tumor suppressors. And the genetic alterations in cancers here might be biallelic truncations or epigenetic silencing. And that's what we did as a CRISPR and activation genome-wide screen. However, there are other genetic alterations in cancers that are not loss of function. So for instance, we can have gain of function or loss of function, small nucleotide variants. And this looks like missense mutations, hotspot mutations, or splicing mutations. And so we can't use a loss of function screen to identify these, but we can use other technological CRISPR approaches to screen for these, which might be base editors. So we can use CRISPR to introduce small nucleotide variants and screen that way. Alternatively, we also have overexpression, so driver or passengers in cancers. So what we see is this can be seen as amplification of genes, tandem duplications, or epigenetic upregulation. And to screen this, we can use a CRISPR activation screening library. So I'm not going to get into the details of these, but suffice to say there are other methods available to use in the chemogenomics space to identify new patient populations. Okay. So finally, an additional way that we can screen is not by CRISPR screening, but there's this new trend of using pooled cell line screens in the chemogenomics approach. So this has been kind of championed by the Broad, of course, by, and what they do is this screen called the PRISM. And so PRISism is they basically taking a whole bunch of cell lines, so 578 cell lines in this case, and they barcode each cell line. So then, then they do a pooled cell line screen. So they pool the cell lines based on the growth rate. And then you basically can do a pooled cell line screen with your drug on board. And from there, you can get an idea of which cancer cell lines in these panels drop out when you have your when you have your drug on board. So basically this offers a nice fast message to screen for drug sensitivity in multiple well-characterized cell lines so these are these are well-characterized cell lines, subtype mutations, mRNA protein expression levels, copy number, methylation status. So they're well characterized. You can also use these, you can use this method to find mechanistic analysis of your drugs. So you can use whatever your outputs here are and do a correlation to match genetic screens or other drug screens. And so if your drug behaves like another drug or like another gene deletion, then you can use this guilty by association kind of idea to gain some mechanistic insights. Furthermore, this also allows data mining to find new patient populations. So if you see a bunch of cell lines drop out of your screen, you can go in and mine those for genetics and various other characteristics that they have to do data mining to find new patient populations in this way. So that's great. So now we have a drug and we found a whole bunch of patient populations that we think are going to be, that we might have efficacy in the clinic. And so that's the first part of the CRISPR screening platform to find synthetic lethality in cancer. However, the next part is how do we find these patients in the clinic, right? So we can have a great synthetic lethal interaction. We can have a great pair. But if we can't find the patients in the clinic to treat the drug, we can't do anything with it, right? So really, this is where the challenge in synthetic lethality is translating into the clinic. So to do this, we really need robust biomarker development to find the right patient in the clinic. And then we also have to think about figuring out effective strategies, either in the mono or combination therapy space, using these biomarkers that we've developed. So how do we go about that? Okay, so there's many different assays to measure biomarkers. So basically, I'm going to use cyclin E as an example, because that's what we did in our manuscripts, and that's what we're going after in the clinic. So to score cyclin E, manuscripts, and that's what we're going after in the clinic. So to score cyclin E, first of all, you might think about looking at protein levels of cyclin E, which is an IHC assay. Advantages to this is you have low tissue requirement, it's three to five days to get a result. However, you can only get a single biomarker out of an IHC, and it's got a relatively small dynamic range. So you can see here, these would be graded from zero to four, and these are basically just scored on this scale. So this is great for when you're looking at amplified protein amplification, but if you already have a low level of expression and you're looking for something that's going down, this can be a bit challenging through the small dynamic range. Alternatively, we can use FISH to find cyclin E amplification, so fluorescent in situ hybridization. So we can use a probe for cyclin E and look in the nuclei of these cells. And we can see that basically the more dots they have is the more amplification, gain or amplification of cyclin E they have. And so again, this is a low tissue requirement, three to five days to results. But again, we have the same problem with only being able to run a single biomarker at a time. This has a bit of a higher dynamic range. So really what the field's kind of moved towards is an NGS, so sequencing techniques. So this is looking at the genomic level for amplification or deletion of genes. And so this, however, it takes a higher tissue requirement, a longer time to get the results. However, we do get the whole genome in this case. So we'll get, we can test multiple biomarkers for amplification or deep deletion if you're looking for BRCA, for instance, and we have a really high dynamic range here. So the problem with this though becomes is what, how do we proceed if the expression and genomic level data are not aligned? So in the case of cyclin E, you can think of a scenario where, and this is a scenario actually in the clinic, where you see high amplification based on an NGS, but then you look by IHC and you have low protein. So we might be actually picking patients if we're picking on NGS that don't actually end up with high protein level. So we might be treating patients that might not respond. actually end up with high protein levels. So we might be treating patients that might not respond. Alternatively, a patient could have high cyclin E levels but not be amplified at the NGS level. So we might be missing these patients if we're only looking at NGS. And so I don't have a good explanation as to how to go about this. We're still working through this in the clinic, but these are the kinds of things we have to consider when we're looking at biomarkers and how to select patients. Okay, so finally, I'll just end with thinking about some effective combination strategies to consider. And so thinking about moving into the clinic, we want to really consider what's an effective combination strategy. So combinations, you know, are very important in the clinic. So how do we go about doing that and considering the biomarker? So basically, the way we think about this is it's kind of three factors that come into play. So first is the mutation or the mechanism. So the mutation being cyclin E and the mechanism of action. So, the mutation being cyclin E and the mechanism of action. So, well, okay, let's take high-grade serious ovarian carcinoma as an example. So, as Stephanie said, you can be HR defective, or you could have another gene such as cyclin E that's an oncogene-induced replication stress. So, based on that mutation or that mechanism of action, you know, we could think about how to make a combination partner based off that. So does the MOA of the synthetic lethal interaction align with the MOA of the combination? So another important thing is the indication that we're going into. So this could be, you know, if we find this mutation in these patients, what's the indication that we find them in? And so which indications have the high prevalence of the patient selection biomarkers. And really what this has to do with combinations is that each indication could have a different standard of care. And so depending on the standard of care in each indication and which partners are already approved, that might influence the way that we pick a combination partner as well. So we will use high-grade serious ovarian carcinoma as an example here. So as Stephanie said, many of the high-grade serious ovarian carcinomas are homologous recombination defective. However, some are cyclin E-amplified and others are in the other pool. So if you come and so you turn up in the clinic and you're homologous recombination defective by NGS, as Stephanie mentioned, this is right to treat with PARP inhibition. And we might consider a combination with an ATR inhibitor in this case based on the mechanism of action. So this would be a targeted plus targeted combination therapy. So, however, if you do turn up with cyclin E amplification, you might just, the gemcitabine would be a better combination partner, as Stephanie said, because you're combining the replication stress phenotypes together, which creates problems for cyclin E amplified cells. And in this case, we would come in with our PKMET1 inhibitor because based on our mechanism of action, we think that PKMET1 inhibition would be more synthetic lethal when you combine with gemcitabine. So this would be a chemo plus targeted. I'll just say that PKMET1, I didn't get into the mechanism of PKMET1, but PKMET1 is the sister of WE1. So you can think of PKMET1 inhibitors similarly to WE1 inhibitors. So alternatively, if you move away from high-grade serosylvarian carcinoma, cyclin E, and from cyclin E, we know, like I said, from our chemogenomics screens, we're able to identify FBXW7 mutations as being sensitive to PKMET1 inhibition, and these are more prevalent in colorectal cancers. And so, and colorectal cancer, full flurry is commonly used as a chemotherapeutic. So in this case, we might consider using a full flurry PK-Met1 inhibition, PK-Met1 inhibitor combo to address this in the clinic. So taking together, I think, you know, synthetic lethality, I think we start with a screen and we're able to identify, you know, nice synthetic lethal interactions with screen. However, the real challenge becomes translating this into the clinic. So how do we then, you know, find these patients in the clinic and how do we combine these and translate this into effective combination strategies to help treat these patients? So I'll stop there. Thank you very much, Dave. Our next speaker represents a therapeutic innovation team. It's Dr. Richard Marcellas from OICR. He's currently a scientific advisor in the drug discovery program. And prior to OICR, he worked in the pharmaceutical industry for over a decade at Geminex Pharmaceuticals in Montreal. Richard has broad experience in therapeutic development, spanning the early drug discovery and development process from target identification and validation to IND support. His expertise includes protein production and characterization, acid development and compound screening, drug target confirmation, and project management. Richard, thank you for joining us today, and go ahead. Thank you for the introduction, Jay. Glad to be here. I'll share my screen. Okay, can you see those slides? Let me zoom up. How's that? Looks good. Okay, I'll try and go fast so we don't run out of time here. So I'm coming at this very much from a drug discovery perspective. So we can see lots of interesting synthetic lethal combinations, but ultimately we have to find things that are druggable. So in our definition, a target for me is some component of the cell or some biological component that we can modulate with an external agent. In our group, it's always a small molecule agent that can be a direct modulation or sometimes an indirect modulation. But the idea being we can alter the course of the disease or the condition with this small molecule treatment. When we talk about validation in our terms, we really would like to start with some evidence that modulation of our target is first of all possible, it's a druggable target, and will give us this desired phenotype to help alleviate the disease. So for us, what we're looking for for for a small molecule binding site, is really a protein needs to have a deep hydrophobic pocket. Drugs are almost always hydrophobic. They're not too charged or they won't get into a cell at all. So the active sites of enzymes are good examples of those kind of pockets. Protein-protein interactions are historically considered undruggable, but there are more and more examples now where if a protein-protein interaction does involve deeper pockets, they can be druggable, and we've done work in that area quite a bit. Generally speaking, small molecules are going to inhibit your target. So there's only a few examples where you can actually activate a target. So generally, we think of them as inhibitors. Now, what we can't do is go after highly polar surfaces like DNA binding domains, microRNAs or long non-coding RNAs. That would be much better suited to nucleic acid type therapies. Small molecule drugs that are polar enough to interact with those things are not going to have the properties we need to dose patients. And as I sort of alluded, we can't replace missing functions with a small molecule. We can shut something off. Maybe we can activate something in some cases, but we're not going to replace a protein's function. So we're not going to fix a tumor suppressor loss. When we look at the new targets, we tend to throw them on a kind of a grid like this, where we're thinking, okay, how likely is it that it's druggable? So that's our x-axis, our druggability score. And how good is our target validation? So if we're sitting up in the top right corner there, we know something's druggable, such as a kinase. It's got really good validation from the clinical genetics. That's going to be a very competitive space and the pharmaceutical companies are going to sit up there. Biotechs will tend to want to have good validation. They want to know who their patients are going to be, but they'll tolerate more of a druggability risk. So they're in the upper left there. A lot of work we do actually with collaborators is in the bottom right. So we go after things we know we can drug in a reasonable time frame, but we don't really know its full biological role. That hasn't been worked out. So we'll do things like probe development to help develop the tools to allow us to assess the validity of the target. Genetics is great, and we use genetics a lot, but genetic ablation of a target, especially a CRISPR, even knockdowns, is not the same as binding a compound. They hopefully will correlate, but the correlation is not sometimes as high as you would think it should be. The bottom right there is the very risky stuff. We do a bit of that, but we try and get away from the things that are going to be too risky. It's a long road to develop a high affinity and specific inhibitor. So I'm going to start from a clinical genetics perspective, mining big data, something we have people doing quite a bit, taking all this RNA sequencing and DNA sequencing from tumors and mining that data, looking for the overexpressions, translocations, etc. that are seen in cancer cells versus a normal, looking for the activations, which are presumably the oncogenic drivers or the things supporting the tumor phenotype, and also the inactivations, your tumor suppressors and things like that. There's the primary target itself which is showing up in these screens with the activating mutations for example but we also pay attention to their interacting partners because we do have quite an interest in the protein protein interaction space. Sometimes if the the key protein you wanna go after is not particularly druggable, there may be a partner protein which would give you a similar outcome. It'd be more druggable. So we always pay attention to that. And from all of this, we're generating some kind of hypothesis and then we'll get to work setting up our assays, doing our screens, et cetera. So I'm gonna start with an example, just using C-BioPortal, which is a pretty accessible way to get into the clinical data sets from all these sequencing projects. So here I've just selected the top, the MSK impact sequencing set done at Sloan Kettering. That's almost 11,000 patients that have had a lot of targeted tumor sequencing done. So we can look and see what these patients have. So, oops, one more. And you can see that it's a lot of different solid tumors in this set. There's no heme. So you can see the breakdown up here in the left, what kind of tumor types are there. And then down below on the lower level, you can see which mutations they have. You see P53 is there at the top. 41% of these patients have a P53 mutation, pretty standard. KRAS mutated, but inactivating mutation in this case. So P53, of course, generally inactivating mutation. KRAS is activating. Telomerase, your TERT gene, activating. So you can see a lot of your drivers are going to be there. And of course, some of the tumor suppressors getting shut off with point mutations. On the middle, you've got your fusion genes, which is generally a type of amplification in that two genes fuse together and express one of the components at a much higher level than it should be expressed, sometimes change its properties. Tempris-ERG is a pretty well-known fusion between those two partners that activates a transcription factor. Very nice validation, very nice biomarker, but very difficult to drug Tempris-ERG. And the last column is the copy number aberrations. And I'll go and focus in on that. And you've heard a bit about some of these. So in copy number aberrations, this is your deletions, generally of things that are stopping a tumor from forming, your tumor suppressors and things, and the amplifications of things that are supporting tumor growth with your driving oncogenes, MYC, a lot of the growth factor receptors you'll see amplified. Now, the most common homologous deletion you see in cancer across most types actually is CDKN2A. So that locus, 9P21, harbors, that gene actually codes for two proteins. One of them is a CDK4 inhibitor. So P16-INC is a CD4K inhibitor and P19-ARF inhibits MDM2. So it is both functions to restrain the cell cycle at the G1S interface. That's the CDK4 is pushing you through that. And of course, MDM2 shuts down P53. So you want to, in a tumor cell, does not want P53, wants P53 to be shut down. So you don't want the inhibitor of MDM2. So we are focused in the little example I'm going to give you here, we're focusing on the CDKN2A. So I, instead of doing an actual isogenic screen here, I'm going to show you a bioinformatic way that we will use. So if I go into, let me see, next slide. If I go into what you've heard many times now about the DEPMAP, an extremely useful tool, is ask, split all the cell lines essentially into those that harbor the CDKN2A loss, and these are the ones on the right, and those that do not harbor a CDKN2A loss. So we've taken the 700 or so cell lines, split them into two groups, and then ask which vulnerabilities we can find. And what showed up with a decent correlation score at 0.37 was PRMT5, protein arginine methyl transferase 5. So that is an enzyme that's showing up. Here's just a scatter plot showing the sensitivity to PRMT5 on the y-axis and the expression of the CDK N2A on the x. So you can see as it gets lost, you're more sensitive, you're going more negative in your score. So your dependency score is getting better with that gene. So this combination of going in with some kind of genetic idea, and then we don't have dependency data, of course, from our genetic data sets. We just see what the tumors have. We don't know for sure they've created a dependency. This is a way of sort of bridging that. Of course, these are just cancer cell lines. They're not primary cancers, but many things we'll track through to them as well. Okay. So if we then go back into the debt map, but now take PRMT5 as our query and say, okay, what's happening when we knock down PRMT5 across 700 cell lines? So here it's just a summary. You can see an RNAi or the shRNA. You can see it's moderately left shifted from the zero, but not strongly. But you can see there's quite a big tail coming off the left side, really pushing into the area of strong dependency. So it gets scored as a strongly selective in RNAi. In CRISPR, however, it is a common essential. This is a very important enzyme. You CRISPR it out, you're going to kill almost all cells. And what you see, one way I use to assess the function of a protein and figure out how well the literature is reporting on the role of that protein and its binding partners is look at the codependencies. So in CRISPR, the top codependency is a protein called WDR77. That also showed up in the RNAi at the top codependency that is an obligate binding partner for PRMT5 so that makes a lot of sense these two things work as a complex we'd expect them to show similar phenotypes and a correlation of 0.5 out of the depth map it is a fairly noisy system so a 0.5 I consider to be extremely good correlation makes sense in the RNAi which is showing quite a bit more texture I think we're getting better information out of the RNAi, which is showing quite a bit more texture, I think we're getting better information of the RNAi. You can see a bunch of other things that from the literature would be potentially predictable. MAT2A is the enzyme that makes a substrate for PRMT5. It makes SAM. Rheokinase is a binding partner for PRMT5 involved in ribosome biogenesis. So I see a pattern here, which looks really good, makes me trust the data. So if I dig a bit deeper into the PRMT5 data set, look at all the correlations, what I now find is that in fact, CDK N2Aa which is the original thing that drew me there certainly their copy number loss of CDK and 2a and B correlates quite well however it's not the best correlation best correlation is a loss of a different gene called mtap and you can see it's correlating at very high correlation score 0.56, extremely high. Again, you see the binding partner there and MTAP showing up both as expression and copy number. Above is the plot. So as MTAP expression drops towards these X axis, you can see the sensitivity to PRMT5 is leftward shifting. So it's a pretty good correlation. I've looked at a lot of this stuff, and this is a very good example for a univariate kind of sensitization. So if we go in and take M-TAP as our deletion and we again break our cell lines into those on the right have lost M-TAP, the ones on the left have not, and again go and see which dependencies are showing up. PRMT5 is expected is there, WRDR77 is there as well, shows up in both data sets, CRISPR and shRNA, which again is another nice validation for me. There's a lot of noise in the data set so I like to see the same answer coming out of both of them. Okay, MTAP deletion is found in lots of different cancers. GBM particularly high, almost 50% in GBM. So we got to get interested in MTAP. And then went back to C-BioPortal, and you see that MTAP and CDKN2A actually are in the same place. They're right beside each other. So if I look at the MTAP deletions, if I pull out all the patients that are MTAP deletions, almost 99% of them also have the CDKN2A deletion. If I flip it around and look at the CDKN2A deletion, I'll see that 84% of them have an MTAP deletion. The tumor cell wants to delete CDKN2A. That harbors those two types of tumor suppressors, as I described. MTAP, however, is coming along for the ride most of the time. There isn't a good reason for a cell to delete MTAP. In fact, it's actually partially detrimental. So this example is all coming out of just data mining. However, just before the DETMAP came out, this was discovered the way David had described it. This group at Agios, led by Kevin Marks, had done an isogenic screen. They were very interested in, not actually in the CDKN2A itself, they were very interested in this metabolic enzyme, MTAP. And they had created an isogenic strain, two lines where they had deleted MTAP or not, done the screen and came up with the same answer. They'd waited about a year, they could have done this all a lot faster, but nobody knew the DEPMAP was coming at that point. So this loss of the tumor suppressor is in about 15% of cancers. Almost all of them, 80% to 90% of them are also going to co-delete M-TAP. So you have a nice biomarker. That's something that's commonly looked for. We see that PRMT5 vulnerability. And the way this all works is kind of interesting. So MTAP is just a metabolic enzyme. It's involved in scavenging or salvaging adenine and methionine. And when you've deleted it, the substrate accumulates. And that substrate looks a lot like, in fact, it looks identical to half of SAM. SAM is a methyl donor that this arginine methyltransferase uses. And in fact, all the methyltransferases use SAM is a methyl donor that this arginine methyltransferase uses, and in fact, all the methyltransferases use SAM as their methyl donor. Turns out, though, that PRMT5 is one of the, of all the methyltransferases, it's one of the weakest for binding SAM. So this excess production of MTA is more detrimental to PRMT5. PRMT5 is a very essential protein in the cell. You wouldn't want to permanently delete this, but in a situation where it's already partially inhibited by excess MTA, any other perturbations on the pathway, either hitting MAT2A, which makes SAM, or knocking down PRMT5 a bit or its binding partner WDR77, will the two factors together. You wouldn't need complete inhibition of PRMT5. Ideally, in a clinical sense, you'd want sort of a modest decrease of PRMT5 to spare your normal cells. However, just target the cells that have this lesion. Going after PRMT5 with a SAM competitive inhibitor would be likely to not have a very good therapeutic index. But we feel that perhaps partial inhibition in the context of MTAP deletion could be effective. So that was interesting. I have searched a lot of different genes through this sort of analysis, and I must admit, I haven't found that many that come up this beautifully, but hopefully there's more there to find. So, as David has made pretty clear, getting clinical biomarkers for patient selection is really important. We have to know who we want to treat for a clinical trial, and obviously the physicians like Stephanie have to know what patients to treat once something's been approved. So we've been mining this big omics data sets here at OICR, trying to get some ideas. And as I said, repairing damaged cellular functions is beyond the ability of our tools with small molecule drugs. So we're really concentrating on taking advantage of synthetic lethality as it might emerge because of these lesions. And just keep in mind that the vulnerability due to the tumor suppressor loss, that might be the source of your synthetic lethality. But deleting genes is quite messy. And often there's important stuff that's also accidentally lost. And that can be leveraged as well. And often there's important stuff that's also accidentally lost and that can be leveraged as well. And our feeling here is that, I mean, oncogenic drivers, we know them very well. They've been studied since all of our viral days back in the 90s and 80s and been well-mined for anti-cancer therapeutics. And we feel that a lot of the future is going to be more in the realm of the acquired dependencies and synthetic lethalities, which hopefully will also give us good therapeutic windows. And that's it. Thank you very much. | Synthetic Lethality in Cancer Therapeutics - OICR Mondays | 4,438 | Ontario Institute for Cancer Research - OICR | 20220615 | Moderator: Dr. Charlie Boone, University of Toronto
10:12 - Clinical perspective: Success of and development of resistance to current synthetic lethal drugs (PARP inhibitors) and emerging synthetic lethal drugs in the clinic. (Dr. Stéphanie Lheureux, UHN)
34:12 - SL target identification and biomarker perspective: Screening methods to identify synthetic lethal interactions and development of biomarkers for patient stratification and/or combination therapies. (Dr. David Gallo, Repare Therapeutics)
55:40 Druggability perspective: Opportunities and challenges in drugging new synthetic lethal drug targets – not all genes are created equally. (Dr. Richard Marcellus, OICR Drug Discovery Program) | 2024-09-29T13:55:27.946706 |
https://www.youtube.com/watch?v=p8MTd3Ynby8 | Thank you everybody for taking some time tonight. What we are going to be covering is two talks, one given by Dawn, one given by Ina. We're going to start with an introduction to biomedical data set. We're going to go through network pharmacology and some just basic concepts of what the data looks like and a little bit of intention why we're doing what we're doing. Then we're going to get into the weeds on one specific type of clustering algorithm and that's what Ina will go through. The idea here is that we are at a part of an analysis, so just a little bit of background. We have a working group called the Computational Biology Working Group. It's part of the DMV Petri dish and right now we're researching chemotherapies, potential chemotherapies for cancers and we have a way of kind of ranking possible compounds for how effective they might be. We're at a part of the analysis where we're trying to go into more unknown compounds, compounds we don't always have full information about or at least we're looking at these compounds in a really big data approach. One of the concepts that has started to come up is network pharmacology, and it's this concept that we are going to be able to approach compounds that we either don't know much about or wish to know more about, and putting it in the context of a biochemical pathway of signaling molecules. So you have signals that kind of propagate from the outside of the cell into the inner of the cell, kind of changing things like cell growth, cell death, those kinds of things. And we're trying to map that out. And so there are these new data sets that are going to help us identify for a given compound, where does it fit? And also how is it that we might be able to infer things about it based on its relationship in some known signaling cascades. So with that, I'll hand it over to Don. Don, if you could, I guess, introduce yourself and then get into your topic. Hey, I'm Don. I've worked with ENA on a couple of things related to mocha makers in the DMV Petri dish. Part of what we worked on was creating a knowledge graph using Neo4j. was creating a knowledge graph using Neo4j. And so what we're talking about today was kind of some ahas and a pivot that we did based on sort of the requirement that when you create an application, it needs to be usable for a core audience. And also when we dove deeper into screening that, you know, there is a significant amount of functionality in the product and bringing together also other data sets that we'd also have started to look at. So, just a sec. Well, yeah, okay. Okay. So, the intro, I've got to sort to navigate some stuff around my screen here. So, yeah, so as I was indicating that, you know, once we dove in, one of the things that came clear is that, you know, we need to focus on tools that really help to make this biological data accessible to the core audience and core state, which is the cellular biologists, also known as Paul in our group. So, you know, the initial challenge that we definitely experienced was, you know, the challenge of finding quality data sources and tools to simplify the research and the handling of what is very complex multi-omics data. So to fill in some gaps, I actually started to use String the wrong way. It's a valuable tool. It has great data, but I was looking at it from a back-end sort of REST API instead of the value of how well they had aggregated that functionality within the application. And so, you know, quick sort of run through today's agenda. So I'm going to start with a very quick deep dive of string and stitch, you know, and how they can help visualize the complex interactions. And then I'll do a very quick sort of intro of what MCL, the specific type of clustering, looks like in string before handing it over to Ina. And then at the tail end, to, you know, and then at the tail end, I'll sort of take you through a somewhat slower walkthrough of, you know, how stitch and string really helped me understand, you know, complex pathways. In this case, it was the non-small cell lung cancer pathway that's part of, you know, our current efforts in the broader group. So, you know, anything you'd like to say before I get into StreamStitch? No, no, please go ahead. The less I ramble, the better this will be for everybody. No, I'll introduce myself later. You go on. Okay, awesome. So in terms of the two tools, like String, and they were both developed by a similar group of individuals. It wasn't necessarily 100% the same team, but there was a significant amount of overlap. So it'll be somewhat transparent in some cases, you know, whether I'm in string or stitch. So I wanted to mention it. So string specialty is like protein to protein interactions. And stitch sort of takes the aspect of, you know, mapping compounds to target networks. But, you know, it does take advantage of some of the patterns already that it built into strength. So, you know, both tools, you know, do help biologists visualize, you know, complex interactions. You know, part of my confirmation of that, beyond just my intuition, was Paul's enthusiastic response. And if he's available available maybe he can provide his his two bits later on but it is a powerful tool and yeah so as I said it's like the important thing to me is it's all about making biological data more intuitive for biologists and providing insights in disease mechanisms and therapeutic targets. So one of the key ways it does this kind of highly overlaps with what I thought was really cool and what I wanted to dive into sooner rather than later was like the aspect of using generative AI to take patterns and, you know, human in the loop with a rag to, you know, provide better contextual information, you know, tying solid data to, you know, peer-reviewed empirical studies. you know peer-reviewed empirical studies and so it's like really I think that that's what string has done you know and it definitely raises the bar and I'm not sure about the the full qualification it's like free to use but whether it's truly open source I'm not sure but sort of indicates that you know, if people are looking to implement similar patterns and it is open source, I think it would be a great opportunity to contribute to what is an awesome project that already, you know, is far along in terms of accelerating research. And I think the benefit by truly bringing it into the generative AI era in terms of it does great and powerful things in terms of knowledge graphs, which in many respects are more powerful than RAG, but I also noticed that what it does is it takes that very valuable context in terms of proven research in terms of both helping to establish what are the specific proteins, what are the specific compounds, etc., that are holding to the graph. So, you know, it's clear that I'm the number one fanboy, but it is very powerful and it is awesome. Okay, so the quick whirlwind tour at this point. Later on I'll do a more thorough sort of review, kind of focused on the Wookiee pathway. But what I thought would be helpful is just a quick hands-on demo. And it's very aligned with how I got up to speed in terms of using it is the key sort of pain point that we had at the time. We had a good sort of source of data that mapped compounds to proteins. And that was through the guided pharmacology database, but what we were missing, and Paul, and like our cellular biologist was telling us that we really needed was the essential sort of, what's the context, what's the protein to protein interactions, the pathways, et cetera. So this was what kind of brought me to String. So I'm just going to search for small cell lung cancer, and this kind of gets into sort of its flexible sort of search capabilities. So it's going to be smart enough to also pull up non-small cell lung cancer. And this kind of gets into sort of the human in the loop aspect in terms of the solution that I thought was valuable. And initially it was a frustration because I was like thinking like an engineer, an architect, I wanted a clear API that would just allow me to search based on specific criteria. But, you know, based on the synonyms in the industry, based on sort of the aspect of, you know, you've got genes and isoforms, et cetera, that, you know, it is quasi-matching. So this sort of, what's the human, you know, the cellular biologists sort of choose what pathway they want to focus on. So in this case, to focus on. So in this case, you know, they went to the key pathways in the industry, one that's older and maybe more proven in some respects is CAG, but Wiki pathways, there were some some specific aspects there that I was interested in. So that's what we'll focus on. Okay, so this, and I'm sorry that my resolution is so low that it's kind of wonky, but yeah, so this brings up the, what is the network of proteins, because again, this is string and not stitch, and it focuses on protein to protein interactions. So this is the, in this context, it's the genes slash proteins in the context of this network. And I think currently now this is just direct proteins. I'm looking at a specific parameter here that we'll come back to in greater detail later. But so this is our network. So given this context, what I'll do is just here showcase some of the, you know, the different viewers or specific areas that you can kind of focus in on. So, as I mentioned, it does do significant grounding in terms of research. So Guide to Pharmacology was great in terms of it pulled certain content into things like comments. It definitely had references to specific PubMed articles or references, specific PubMed articles or references, but this took it a step further in terms of leveraging it for context for the results. And of course, since this is a live demo, it's sitting there, it's thinking, it's got to think. So what I shall do in terms of the sake of time, it could be a connection on my side or other factors. Oh, there it is. And so this is the sort of set of PubMed article IDs that it came back with. The focus is based on the career criteria. It's focused on Homo sapiens, but this is kind of the context that it was using to build the network. And Don, could you just plus plus the zoom a little bit, just a little bit bigger? Yeah, yeah, yeah, yeah, that's fine. Well, part of it here is I'm just trying to step through key parts of it and I need to get back to the graph pretty quick. So yeah, some of the other viewers here that provide value that are sort of simple to navigate is that this one is the context of like, I think it was in databases they call it, but what this is, is these are the pathways that, and it limits it to, you know, 10 items based on what it displays, but it gives you an idea of, you know, within these sets of compounds, what are some of the pathways that, where there's overlap or where these same proteins come up. So that's in terms of those are the key areas there. And yeah, so one of the other aspects that you can do because there is a lot of information that comes up if there's a specific aspect that, there is a lot of information that comes up if there's a specific aspect that, you know, whether it's a, like here in this case, we were looking specifically for non-small lung cancer, but let's say that you were looking at a, like a, you know, the cell life cycle or a P53 pathway, and you wanted to focus on an overlap, this is a key area where you could do it. And then what it would do is up in the graph area, it would highlight those specific nodes. And so in terms of bringing information up together, you know, EGFR, that is a key entry point that is often sort of compromised in terms of cancer pathways, particularly in non-small cell lung cancer. So, you know, it provides, you know, it brings together a lot of information. This is sort of a great summary and explaining its role and its purpose. So a couple of other things, of course, we can showcase because part of what I'm trying to do here is what's the value of you know string and stitch in terms of bringing making things more accessible accessible cellular biologists site escape is a key tool that is used in the industry so with a simple click of the button what it will do is is it will send it over. And yeah. So I'm not going to zoom in here, but it sent over. We're focused on your browser window. We're not seeing your Cytoscape. Okay. Let's see if I have it to the perfect resolution. I can do that. Let me make sure. Anybody have a question or some good jokes while we wait for this to come up? Okay. Yeah. Yeah. So this is, again, a great integration point. There are a number of things that you can do once you get the data into Cytoscape. We have another person in the community that's doing a deeper dive into Cytoscape. We'll set up another session for her to take us through that info. It's just a great convergence in showcasing the ability to integrate key data. Let's read about the infinity effect. Okay, are you back looking at the slide deck? Yes. Yes. Okay, cool. All right. The other, one of the other quick things here before I do some quick tour for Stitch is that it's the capability out of the box to do specific types of clustering. If you don't mind, it's washed out because we're doing the full desktop mode. Maybe switching back to browser specific mode will bring that contrast back. it'll bring that contrast back. Okay. All righty. Let's see. Okay. Okay. Can you see my browser again? Yes. Is it better in terms of clarity? Okay. So this is going to start to tie over into the content and deep dive that is going to take us through. But it's the out of the box. It provides capability in terms of clustering. And it's going to go into some of the aspects of maybe what's missing and what a custom solution can help build. But one of the aspects that it does do is that and this is mcl type clustering but um it does out of the box sort of allow you to see what are some of the key groupings that um we're seeing so there there are and here it's focusing on you know clusters based on, there's a core part of cluster, like non-small cell lung cancer. One of the ways to look at that is EGFR is the entry point, but there's also a specific retinoic acid pathway as well, particularly in the wiki pathways version. Let's see what it does. KME is clustering. I think I might need to bump it up a bit, but yeah. So here it's focusing more on some of FHIT is, you know, in the case of a protein that is a key starting point. But this kind of goes to what I was talking about, that, you know, in the case of a protein, that is a key starting point. But this kind of goes to what I was talking about, that, you know, that when you're doing clustering, there's certain techniques that you can do to find certain, you know, what's the magic number of clusters that you want to focus on to prove the results. So, yeah. clusters that you want to focus on to prove the results. So yeah, so the quick sort of whirlwind tourist ditch in my context for kind of digging into this was that, you know, we had a challenge of trying to figure out what the pathways were and what the, you know, key sort of entry points of the pathway where certain pathways can be compromised. So one of the ways that, you know, the deeper dive I'll do later that I focused on, you know, on zoom in a little bit more is the aspect of initially it was like, you know, what are the key activating mutations and inactivating mutations in the context of non-small cell lung cancer. And, you know, in terms of, it certainly helped me in terms of breaking down the problem for getting into some of the details, specific pathways, but it also helped in terms of, you know, maybe the LLM that I was using in essence as doing peer programming and almost like a lab partner in terms of trying to attack, in terms of understanding, you know, and building up the notebook to kind of prove out certain aspects. But that's kind of a fundamental aspect so here that the key thought and what we're going to drill in next is you know the EGFR. I think that's what it's set up here. Yep so EGFR is the identifier to sort of drive the construction of the network. So the stitch in a similar fashion, and this is, you know, based on, you know, what I seeded in terms of the URL. So in certain respects, I bypassed the initial search screen to highlight EGFR, what is more valuable. I'll go back out and sort of manually do it. So here, what I'm gonna do is sort of the same thing that I sort of embedded in the URL. It wants me to pick EGFR first. Okay. It wants me to pick EGFR first. Okay. All right, so yeah, so this takes me back to the same sort of landing page where it's letting me pick the context for EGFR. And then it's going to create a graph. And so currently, in terms of what I'm displaying here, what it's showing is that the relationship, in terms of what's showing up in the graph, is it's determined by binding affinity. You can also do that sort of similar to, this is the type of relationship I was using in string, which is based on evidence. So it's like the type of interaction evidence. So what are the type of interactions that were captured from various sources, both data and research, in essence, the vetted papers. But for the context of this, I'm just gonna go to binding affinity. The other sort of quick aspect here that I wanted to show is that it's not as fully featured in terms of the, in essence, the nature of the enrichment it provides. But, you know, it does provide the key ability to bring in, you know, it's the database knowledge that correlates to it. I think we're having a similar sort of performance issue. And there's like a long story there that I'm going to avoid. And there's like a long story there that I'm going to avoid, but yeah, it's a funding issue more so with stitch than string. And I would say if people find great value in these tools, contribute to the open source community and put a quarter in the parking meter or invest in the value of the tool so okay it's yeah so I've noticed this more so in stitch and I also think that it varies greatly if it's when I'm doing at the end of the day when which is typical in the day in a US timeframe, performance is better, but when there's heavier workload, I think that that's when performance suffers. So, yeah. So, in terms of analysis, that there are some issues here in terms of pulling in the more fully fledged data, in essence, to kind of filter, you know, based on various pathways, et cetera essence to kind of filter you know based on various pathways etc to kind of zoom in and sort of pick an overlap of um you know this is a specific protein you know the the search screen sort of capabilities in stitch are different it doesn't have that path pathway search capability at the start but what you can do here and there's not a performance issue, is sort of overlay the concept of the non-small cell lung cancer pathway. And then begin to highlight it. But in terms of this context, there isn't a huge amount of data that's returned. So it's not that painful. So it's not that painful. So in terms of differences, it doesn't have sort of, it's got an ability to download data for contacts that could then be used for tools, like perhaps it could be used for, you know, things like doing cluster analysis, et cetera, but it doesn't have sort of the integration and site escape that we saw in string so i just want to point that out that it is a difference and there's also a missing cluster button so yeah so anyway that's that's kind of that's the quick whirlwind tour and sort of the point that i was going to hand it over to you know I was going to hand it over to Nina. Yeah, so the challenges with both string and stitch to a certain extent is that it's not just isoforms, but it's like there is the aspect of difference in layers of granularity. There's a large number of synonyms and in some context and this was like in the context of what i was looking at in terms of another data search which was by biogrid um and i hit um you know initially some challenges with trying to sort of map those to pathways and related to proteins. But, you know, it was solvable. But this is something that, you know, is still challenging. Okay. I'm not going to go into the detail of clustering. With that said, Yina know over to you thank you don okay Okay, let's hope that it will load. Am I doing something wrong? Okay, yeah, perfect. Okay, so hello, everyone. My name is Ina. I have recently joined this work group working with Don on network pharmacology and covering ontologies and pathways and using Neo4j and various tools. By day, I'm a software developer and by Sunday, I'm part of this working group. And today I'm going to talk about the clustering in general, more of the how rather than why. And we will also look at the K-means algorithm as a deep dive. So clustering, well, the general purpose of it is, of course, to group the data into clusters, as the name suggests, and it's critical for analyzing large data sets. It can help cover various patterns and relationships. It can also simplify data for analysis and visualization. And of course, it can help with predictions and facilitate decision making. And there are several, well, there are numerous algorithms that could be used. I guess K-Means is one of the most widespread. It's a centroid-based algorithm that minimizes the distance between the data points and the center of their assigned cluster. between the data points and the center of their assigned cluster. And there is also an MCL algorithm that StreamDB suggests, for example. It's a graph-based clustering algorithm that is specifically designed for network data. And it operates by simulating random walks within the network. In our case, it could be used for, say, protein-protein interaction networks. And it reinforces the denser regions, and this leads to identifying of the clusters. So we're going to talk about the clustering in general. This is like a short agenda. And then we'll look at the k-means algorithm. We'll talk about how we initialize the centroids and how we will talk about the cost function. And the most important thing is probably choosing the right number of clusters because K-means is the algorithm that uses predefined number of clusters. We cannot just randomly select one. And then as a summary, we will talk about when to use it and when not to use it. So clustering is a technique, unsupervised machine learning technique, that groups related or similar objects into clusters. And it doesn't require data labeling. So for example, if we have a supervised learning, machine learning technique that has examples labeled, this kind of grouping would be called classification. And the standard process of clustering includes, we can say that it includes four steps. So the first one would be to extract and select the most representative features from the original data set. Then we would proceed by designing the clustering algorithm or, well, selecting the most appropriate one according to the characteristics of the problem. characteristics of the problem. We would evaluate the result as our third step and we will judge the validity of algorithm. And finally, we would do result explanation, meaning that we would give a practical explanation for the clustering results that we see. We can try to cluster numerical data or categorical data. So for numerical quantitative data, we use distance to measure. We use distance as a measurement. We can use various types of distance. You've been in Manhattan. I will not go into detail on those. And then for qualitative data, we use similarity as the measurement. We can divide various clustering algorithms by dimensions. So this is like a little tree showing what those dimensions can be. And talking about the k-means algorithm, so if we look at the partition underlying principles and characteristics of it, it's a partition-based. So it separates data points that we have into various parts. By the membership type, it's an exclusive cluster membership algorithm because each data point can be assigned only to one cluster or cluster to each data point. So we cannot have two clusters assigned to the same data point. It uses hard clustering, and we need to specify the number of clusters beforehand. So we need to know that. And it works in various areas, and it also works with different types of datasets. So we don't really care about the size, and there are several modifications to K-means algorithm that were designed over years from 1960 that can help us work with all the sizes of the datasets that we have. And I included this table just for reference. It's very hand-wavy, but just talking about the partition-based algorithms in general and K-means in particular, there are several benefits and limitations that we need to know before we select it. So the benefits would be the cost efficiency. It really uses, like, it has low time complexity and computation complexity. So it's pretty good if you have a large dataset, basically. And it's relatively simple to implement. You will see that it only has three steps. But of course it has the limitations that can be well overcome. But the algorithm by itself is pretty sensitive to outliers. It can also be drawn to local optimal. We'll see how we can address that. And then it requires the predefined number of clusters, which we've already mentioned, hence the K in the K-means. So yeah, the algorithm itself, as I said, only has three steps. So given a set of data points, we start by initializing the centroids. So when we do that, we randomly select k centroids from the data points, and then we assign data points to the nearest centroid. centroid. So in this step of k-means, the algorithm assigns each training example to the nearest centroid based on the current centroid position. And it then pretty much returns, it calculates the distance from the data points to each centroid and selects the centroid with the smallest distance. We then go on and update centroids. So in this phase, centroids are updated by recomputing the means of the points assigned to each centroid. And you can see that at the graph. So my dotted lines, red ones that I drew there, this would be the updated centroid after we assigned our data points to the initial set of centroids, those little red crosses you see, right? So after recomputing the distance, our new centroid, our new point of mean would be here, where this red cross is. And after that, we just go on and repeat the assignment and updating steps, so steps two and three, until the centroids no longer move. So when our centroidroid locations don't change, we just stop there. So that's it, right? So we should mention as well that the k-means algorithm, it always converges. So it will always have a finite set of centroids. But the results of how well it's been done can depend on the initial centroid positions, hence drawing to the local minima. Okay, so we start by initializing centroroid randomly. How do we do that? Well, one way is just to select some points, any points, which is pretty bad idea. Well, not bad, but like the more efficient way of doing that is just speaking randomly k training examples from our existing data set. And we do that by just shuffling the initial data points and selecting the first k examples. I should say that there is a constraint that the number of clusters has to be lower than the number of examples. So we need to know that. And we also need to know that there is a chance that the algorithm may get stuck in local minimum. It depends on, well, the initial points of the centroids. And it can result in suboptimal clustering. But there is a way to minimize this effect, and it can be minimized by running K-means algorithm multiple times. And we use different random initializations of the points. So we can do that, well, depending on how much precision we want, we can do that from 50 to 1,000 times. And then we choose the best local optimum by computing the cost function. So it's also called distortion for each solution, and we select the lowest distortion. This is the cost function that we use to evaluate the best solution. And we sum all the square distances between the points and then they're assigned centroids. So we just deduct x from x, we deduct m which is a squared Euclidean distance between the point and its centroid. And then we sum that and the solution with the lowest cost wins. We said that we need to know the number of clusters in advance, but how do we do that? So selecting the right value can be ambiguous. And just by looking at the data, we may see a different number of clusters depending on the data set. And you see at the graph that, well, how many clusters do we have there? It can be four, it can be three. It depends on the way you look at it. There are several methods we could use. and I will talk about the elbow method in a second, but we should also know that choosing the right number of clusters really depends on what makes sense for our specific problem at hand. So if we know how many we have just based on the problem we are trying to solve, we should go on and do just that. But if we need some method to choose or to start with, say, there is an ELBO method. It's called ELBO because on the graph, we're looking for this ELBO point where the data stops changing rapidly. And this method helps determine the optimal number of clusters by assessing how well the cluster fit the data. It plots the number of squared distances. It's also called inertia, which measures how far data points are from the centroid of their assigned cluster. And the lower inertia means that better cluster, well, it means better clustering as points are closer to their cluster center. Yeah, and the goal, as I mentioned, is to find this elbow point where the curve bends. And this would indicate the number of clusters. And in this case, beyond this point, adding more clusters doesn't significantly reduce the inertia. So we wouldn't have much improvement by just increasing the number of clusters. And this prevents overfitting and overclustering because, well, of course, the more clusters we would select, the better cost value we would receive, but we really don't wanna do that, right? We do not want to assign each data point to one, I mean, we don't want each cluster to have one single data point assigned to it. So yeah, it's a visual way to balance fit and simplicity. And well, in our example, three would be the optimal number of clusters we want to use. And I guess finally, we can summarize the usage of K-means algorithm. And we can say that the best way to use this algorithm would be when we have large datasets, because we mentioned the efficiency and how it can compute the groups in a relatively low time and it does it efficiently. It also can be used with the numeric data, less so with the categorical data. But again, there are ways to adapt K-means algorithm to use it with categorical data. We can use that, of course, if we have the clear clusters, so when the data naturally forms distinct non-overlapping clusters, and or when we know the cluster number in advance, or if we used the method to identify the cluster number. When not to use it. So if we cannot when not to use it. So if we cannot know the exact number of clusters in advance, and we have no way of estimating that. We talked how it's kind of sensitive to outliers, so there are algorithms that better deal with outliers. For example, like dbscan, the one that is also present in string. And yeah, k-means is designed to work with numerical data primarily. So when we have categorical data, it may not be optimal, but there are also several ways of how we can adapt K-means to working with categorical data. So for example, if we have some categories, I don't know, red, blue, and green. We can assign them to numerical values. And yeah, you use that basically to adapt. Right? So, this is the short overview of the k-means algorithm. And I guess that's it on my side. I'll pass it on to Don to present the case study on applying string to NC and SCLS. Thank you. Thank you. That was a great intro to clustering, and you made it accessible and easy to understand. Thank you. Thank you. Okay. Let me know when you can see my browser. Looks good? Yeah. Okay. So what I'm going to do is I'm going to walk through just a quick case study. So the context for this was trying to make things real and sort of go parallel to some of the research that was being done by Matt and Paul. to understand sort of the out-of-the-box solutions that are already providing, you know, very usable solutions based on graphs. So that's kind of, that's the context. And so the starting point here is, let me break out of full screen mode. So, yeah, I have to admit there was a bit of, you know, an initial sort of learning overload in terms of trying to figure out in terms of what's the best starting point to do it. This is like wiki pathways. For some reason, it's not liking the resolution and kind of disappointing the bad non-ideal form but anyway long story short this is kind of back to the initial strategy for analysis that i ended up choosing with wiki pathways was that you know what it had well defined and what it had, it was very clear about I think this is better. Yeah. Okay. We'll do that other one because there are important things in terms of displaying here. Like, yeah. So the, some of the core aspects that they highlight the most readily is, you know, mutations that occur in non-small lung cancer. There are some that are activating where, you know, things get out of control. And then you've got specific cases where, you know, it inactivates key items. And where that becomes most relevant is for the tumor suppressing genes so in case of like TP 53 and RRB so TP 53 is down here RRB is here so I won't focus in on this anymore just based on size, but what I will say is that there is a readme file in the GitHub repository where I've highlighted it. There's a link here to the pathways with the diagram that I'm showing that you can see on your own desktop. And aside from some of the specific queries that I go through, yeah. So I did go through each of the pathways and the readme and key components. And then also sort of there's some tables here that I created as well that I'm not going to go into in terms of detail but I'll just pull them up so that you can see them you know highlighting the frequent mutations in sort of the in terms of relative frequency, the effect, the consequence, you know, coming back and focusing on the key signaling pathways and also sort of grouping sort of the, in terms of each data set, they have a specific set of terminology and a lot of the key variants was around like you know talking about keg or wiki pathways was around what they called out and used for gene slash protein names so kind of a starting point was to group based on the data model for wiki pathways you know what were the specific genes based on category so growth factors receptors kinases gtpases tumor suppressors etc so i'm not going to read it through literally but that's um like to me in terms of framing the problem and the approaches, it was very helpful. Also, what I liked in terms of this was stuff that I sort of extracted from, you know, actually the diagram for the wiki pathways was, you know, the specific compounds that they called it as being particularly effective for, you know, a specific type of therapy. And, you know, like NK inhibitors in terms of things that are promoted in terms of behavior under cancer. And then, yeah. So that's a quick list and with that being said what I'll do is I'll take it through in terms of the context of some specific items in terms of flow you know it always works so much better and different when you're in a typical resolution. Let me sort of find this. So, yeah. So what we'll do in terms of context is that we'll look at one of the key activating mutations in, because it's, like I guess we'll look at two we'll look at because they're the most common um we'll look at egfr and we'll look at k-rouse you know they get the the most most bad press when it comes to non-small cell lung cancer and then we'll look at some of the key inactivating ones. So if you're using and driving this off of the GitHub site, I didn't create it as a link but I created it so it's easy to copy. And so what this will do is that it will launch Stitch with some key context. And again, part of this was what my initial thought was. I was trying to build an application or, in essence, an API-driven application that I would see as sort of I was expecting to behave more like a pure rest API, but regardless of what I did with Stitch, it always had this sort of screen in between where it was expecting the human in the link to provide an answer and sort of confirm the proteins. And so there are a number of, I have a follow-up conversation that I'm going to have with Paul based on my improved understanding to confirm that it's right. But in many cases that, you know, you'll have a protein name in mind, but based on the context, there may be other proteins that are listed that sort of better sort of describe it in some of the cases it's like an isoform aspect or in other cases it's sort of a scenario where you may be saying eGFR but you're more interested in a pathway so So you may take different points in the pathway. So it's like, in this case, the fuzzy logic can be helpful. So yeah, so EGFR, it's, yeah, so it's going to come up. And based on, you know, specific medications that are, you know, and again, this is going to be stuff that's embedded based on papers. What are the effective compounds to sort of address these situations? It may not work in terms of analysis based on yeah so when it is working um when the load isn't too high the aspect is sort of overlaying sort of what's your um disease contacts what's your pathway contacts from like keg or um wiki pathways can be picked. But even without that, even the fact that it's not working, it gives you sort of embedded sort of feedback in terms of effective compounds for situations with the EGFRG. with the EGFRG. Any questions as I sort of navigate to the next one? So, Paul, Matt, what's your thought in terms of compounds that are listed? Are they appropriate compounds? Are they kind of vetted based on your research? Well, what I'll say is that what we were seeing on the last slide was a network, and the connection showed the relationship. You may recall there were green connections and other color connections. The greens were the compounds, and the others were protein interactions. Well, yeah. So this is a combination in terms of this is Stitch. It will show protein to protein interactions, but it also overlays interactions with compounds. And so like, for example, this is the KRAS gene that it's focused on. So, in terms of effective compounds for addressing KRAS-based mutations, is this sort of similar to, or maybe bring up some new options in terms of effective treatments? Well, this is important. This is answering the question, which compounds are directly known to target KRAS, which is an important question. Now, in cancer therapy, KRAS would be the most amazing thing to target. You have all of these things downstream of KRAS that are messed up. But I think these compounds are probably more experimental than common. Paul, my understanding is that we don't have a lot of practical compounds that go after KRAS. Is that right? There's not very many practical ones, and there's none approved. There's a couple that are in the research phase, but none approved. I think you get a better reading on compounds by going back to the previous view, which view which focused on EGRF. Those compounds were easier to understand. And yeah, that one. The other thing that I will highlight here and it has to do with some of the settings that I defaulted here and it has to do with some of the settings that I defaulted based on the queries because I was it was kind of the aspect of trying to cast a wide net to see if it could find related compounds and related compounds and pathways because that you know in terms of initial discussions that was a pain point so in terms of focusing in on things that are direct uh versus indirect this is kind of where you know the the aspect of how many layers deep potentially do you want to go and so it's like it's once you get to like 20 that you see options. And then there's the aspect in terms of direct interactors versus, you know, second level interactors and potential sort of compounds. So that's kind of, that's what you can do with the tool. And it is, it's that fuzzy logic that until you get, you know, a smart function calling based LLM that can provide, you know, sort of guide behavior. You'll always need the human in the loop to sort of guide and provide more effective research. So this is, yeah. Well, that helps because one of the things that came up on, not this slide, but two before, to before was a compound called cisplatin, which is in fact used very widely in lung cancers. But it had ties associated with a number of proteins. And while cisplatin can react with proteins, its primary mechanism of action is not on the protein, it's on the DNA. So that must have been a secondary layer that was integrated. Okay. And this gets into like, you know, in terms of when I shared out the information, why I provided it sort of as not just as a link, but as, you know, text that people could see and then, you know, copy it, update it, etc. a full request and, you know, update it. But, you know, there's just, there's so many layers to this that, but, you know, just the opportunity to do great analysis. So let's see. Okay. But yeah, and so this is a, based on a number of, you know, genes that get activated, you know, sort of a scenario where the pedal hits the metal in terms of when cancer gets involved but um this displays you know what based on those criteria what the graph looks like you can then sort of play with it amend it and then see the effect. And so there was like, you know, the concepts of like EML4, ALK, Fusion, that was like, I had no idea what that was until I dove in and kind of brought this together. But, you know, it's one of the cool things in terms of, you know, finding the right pathway, which in this case is Ruki Pathways and sort of leveraging string, I want to call it sting, sting and stitch to, you know, provide and find deeper meaning. So, you know, this is OK. And so this is inactivating mutations. So this is in the context of TP 53 and the related compounds I won't run it But I believe in terms of things that aligned with your paper And stuff I believe not when was a compound that came up as, you know, one of them. Yes. Yeah. Yeah. Okay. So, and that's one of those cases where, you know, you've got a paper being done and I tried, tried to understand the problem without sort of mimicking, mimicking it a hundred percent and sort of stepping back and sort of just follow the process that was sort of captured in the pathway. And so, yeah, this is CDK in 2A. Yeah. So this is the one related to cell cycle regulation. And FHIT in IRVI. Okay, so yeah, so upstream and downstream regulation, this is where I was trying to focus more in terms of what are things that kind of disrupt pathways and sort of focusing on, to a great extent, it was inhibitor concepts, MEK inhibitors, and the specific part of the context for this was kind of leveraging the pathway for what it captured. It had specific inhibitors listed at NAF2K1 and NAF2K2, so that's why in terms of the identifier is used to drive the generation of this graph. to K1 and that to K2. So that's why in terms of the identifier is used to drive the generation of this graph. Anything that people want to discuss about this, it's obviously essential in terms of inhibition type behavior. BRAF inhibitors, one of the interesting things with this one, it has to do with how, what I liked about Stitch in this context was how it brings the information close together, like the research to, you know, the data. And one of the data items that I ran across was the fact that, and it wasn't a direct relationship that you would normally draw on a graph, but it was the fact that not 3K11 was involved with upstream behavior. stream behavior and based on its sort of impact to the signaling cascade was an indirect inhibitor for graph so yeah so that's part of the reason why it's kind of shows up down here towards the bottom not directly connected to BRAF. Let's slow down on this one. If you could go up to the heading. Just so we can, for the folks on the call, explain conceptually the thought process and the approach. You were trying to go after BRAF and then you brought in this prior knowledge that MAP3K11 is upstream of it, or has effects that might be upstream of it. And so your query, your fundamental query was, give me the network that is the relationship between the thing I'm after, this thing that I know that's slightly related, and then show me the network of other interrelated concepts whether those are compounds or protein interactions yeah i think that that's part of it but it kind of goes to you know the value of stitch and string and it was like it was the fact that it was like the expert knowledge that it had brought in terms of the MATH3K11 in the context of being an upstream inhibitor and a signaling cascade that would also inhibit graph activity when it's, you know, when it's GANF, when it's accurate. So, yeah, so that's kind of the aspect. So it's why it shows up a little bit different. But I included this one and sort of added the note just because it was something that, you know, if I had been doing this purely as a Rust API, I wouldn't have got, but you give it to a cellular biologist as a tool that it enables them to do very quick research and add that additional aspect based on their existing knowledge and expert knowledge from the papers. Sure. Well, I'll let you continue but there are... we could go on. Yeah, we could go on to a great extent. Some of the other ones that it's like the key focus was but in terms of interest was like RV pathway in P53 pathway. So this is one that's focused on items that are in the P53 pathway. This is one of those things where I manually derived this based on the pathway. So I looked at it from the perspective of here, you know, what are items in the dance. I think it was downstream in particular, well, since I created, but I think that was the key focus is that P53 is impacted. How can we help the progression down the stream. So that was the focus of that query. So anyway, that is sort of it. And again, that was just a walkthrough of, you know, what's the back to here, what's the active here, what are the activating factors, what are the inactivating mutations, and then also try to map it, things in terms of influencing and affecting the pathway. We have a question from TBA. What's your question? If you could unmute yourself, please. Sorry, it's me. I'm driving. I accidentally touched the button. No worries. I did. Michael, did you have a question? Michael, I see you as unmuted, but I can't hear you. I'll let you continue, Don, and then Michael, if you can get your audio fixed, just send a note in chat. Okay. All right. So, you know, to recap, this was a walkthrough of, you know, the Wiki pathways pathway using, in this case, it was was primarily stitch but there's specific things that it doesn't do well that string would be more helpful for but I do have sort of in the detailed walkthrough and you have a link to you know that suggests you know how you can take the similar sort of analysis and and then transfer it over to string. Yeah. So again, in conclusion, the reason for wanting to do a deeper dive in string and stitch was in terms of our local community, make sure that people are aware of it. And in terms of extremely useful functionality that was similar or better of what we were prepared to, you know, release in a short timeframe. And then, you know, it's got some clear real-world application for non-small cell cancer research. You know, that there are some... application for non-small cell cancer research. The other thing in terms of clustering, it does some great initial sort of clustering, but in terms of more advanced methods, I think that there's an opportunity for custom development in terms of identifying appropriate clusters. And then also to sort of some of the gaps in existence that it doesn't have a clustering solution period. So in terms of leveraging available data to do clustering of compound and protein slash gene interaction. Some great opportunities there. And anything else that you wanna say in conclusion? I know we have a little bit of time. I just wanted to ask if there would be a value for me to show the Google collab with just the code implementation of the key means. Yeah, I happen to know that a few folks here are interested in the code. So yeah, we're going to go for the biologists on the call. Hold steady. We're going to go for a while. But yeah, go for it, Ina. Okay, okay. Awesome. Just give me one second. I'll share my screen. All right. Hopefully, you can see the K-means clustering Google call app. Well, we talked about the algorithm in general and it you can see the K-means clustering Google call app. Well, we talked about the algorithm in general and it summarizes some, well, the three main steps of initializing centroids, assigning the data points to the nearest centroids and updating the centroids based on that. And that's how the algorithm itself looks like. So we start by providing the initial centroid data points and figuring out the k, like the number of clusters based on that. Then iteratively, we are going to find the closest centroids, and we can provide the number of iterations as well. This is just to visually represent it. And then we recompute the centroids. So when we do find the first closest centroid, we get the k from the input values, we calculate the distance as a norm, the distance from the data point to its assigned centroid. We then take the minimum distance and the centroid with the minimum distance would be the one assigned to our data point. After doing that, the next step is computing centroid. So for that, we're going and for each centroid and data point we calculate the mean. And we return updated centroids assigned to each data point or vice versa. Data points to centroid. And this is the example of using that. So we're initializing the data set with three. This is just a sample data set with three clusters. Yeah, that's how it's going to look. First I'm given just any points, not randomly selecting them from the dataset, but just any points there. And I wanted to show how that local minimum, that drone into the local minimum that we were talking about, right? So this algorithm really is dependent on how close the initial points are. And so, well, it will always converge, as I said, but it doesn't guarantee you that it will always converge correctly unless you run it multiple times, assessing the cost. So, running several iterations, it finally converges and you can see how it moves from the the initialized centroid points to the correct clusters. So on each step it gets updated. And finally, when there are no more changes, it means that our clustering happens. So it converged. And that's how we would initialize our starting centroid points. Knowing their number, we would shuffle our data points using NumPy's random permutation and then select the first n examples. Well, and using that, we would avoid duplications as well. So it guarantees that our centroids are like three separate points there. And yeah, we return centroids as an numpy array. And yeah, that's how it looks like. This example goes with the random init, and every time we run it, we will see different initial centroids there, but they still converge and sort of form our clusters. But the algorithm is not very difficult, right? But the good thing is that we don't need to implement it ourselves. Luckily, libraries such as sklearn have already did that for us. So we can just utilize that and also get all the information we're interested in, like cluster centers, gaming labels, and all that. So these are the results of the gaming using ScaleLearn, which are just as good as our algorithm that we implemented there. And we'll proceed. Having a good laugh at real life there. You know, I mean, just in terms of an appreciation of what you put into this, this week, you know, you moved from the U S to, you know, the other side, you know, North America, you move to Vancouver. So obviously you're still in the process of moving. But what you put together is just really amazing. Thank you, Don. Thank you. Yes, Michael, of course. We'll share it on the chat. Yeah, well, a few final points. So we talked about the elbow method as the way of selecting the right number of clusters and computing the inertia. So this would be an example of how we could do that. So this would be an example of how we could do that. So we calculate k-means for the number of clusters we're interested in. And in our case, that would be, what, nine, right? And just fit in the data and visually representing it with them can see the right elbow and the right point showing the number of clusters we need to select. And finally, well, this is kind of a stretch, and Don, you maybe can help me as well with just getting the why and how useful that is. specifically lung cancer to minimize, well, to downsize the number of points. And then go on and plot the data. So yeah, that's how it looks like. That's the data that we selected from our dataset. And we selected two features, EC50 and AUC. In my example, it doesn't really represent anything in particular. I just wanted to show that we can select any numerical data features, basically, and plot them and cluster them based on that. Then we can go on and run the elbow method for our dataset. And I would say that in this case, again, like three would be the optimal number of clusters, because then like adding more clusters doesn't really significantly improve the results. So selecting three clusters, we can go on and use our algorithm to group them. And here's what we have, final input. The green cluster is way bigger probably than all the outliers. And as I mentioned, this algorithm is sensitive to outliers. So by cleaning the data set or selecting the features we're more interested in, we can minimize this outliers variance. Or if outliers are inevitable, we can try and look at other clustering algorithms that can help us with that. And yeah, I think that would be it. Were there any questions that I haven't seen? I'm going to open it up to the group first. I do have a set of questions, but is anybody on the call? And what we'll do is let's have questions for Ina first. I know, Michael, you have questions for Dawn. But while we're on Ina's notebook, are there any questions about the code, the clustering algorithm, or anything we've kind of covered in the last few minutes? Anything in that realm for ENA? All right, I'm going to go ahead and start us off with a question. I see, actually, Jean, did you have a question? Hi, it all looks very good. And, you know, I haven't had a chance to go through all the material yet. And I was just wondering if this is recorded, so that I could look at this later I'll have the slides. I'll have the notebooks, the code notebooks. But I think I have a video as well. So I'll share that out on the Meetup and on the Slack if you're on our team. Great. Yeah, I'm supposed to take the bioinformatics and then, you know, I have my bioinformatics team and maybe we can help to you know give some insight to oh absolutely and if there's any tools here based on what ina's doing that's very cool as well yeah they did a great job and if you guys have something novel in there then you know we could uh you know we could promote it or you could promote it oh absolutely um so you know first of all, amazing job, of course. You did really, really well. I'm really impressed. We had talked at one point, or you had mentioned in this call about numerical versus categorical. Right. the screen of arbitrary numerical values being used in a clustering methodology. And I think that's phenomenal. I will tease that the main algorithm that our group did was a ranking algorithm called S prime, and it results in a numerical value. And so obviously what's implied here is there's probably a way at some point in the game to cluster by S prime. You know, we're saying what relationships and S prime cluster, and obviously you have two axes here, and that is in fact a relationship that we care about. But categorically, we, you know, there is maybe this is the challenge that we encounter very quickly. This is coming from the strings database. They have obviously, as shown by Don, a way on their website of doing a k-means clustering. There's also the MCL and a few others. The categories that occur to me when I think of like compound categories are things like MEK inhibitors. So MEK inhibitors is a class of compounds that all do kind of the same similar function. So if I were to look functionally at a blind set of compounds, I would expect them to cluster functionally. But to me, that is categorical versus numerical. So I guess I'm thinking that probably follow-up work for us is to better understand how strings might be approaching the numerical classification of the categories. Because for us, we're kind of blind on how they're going from this almost text-based or network-based relationships to those numerical values or maybe it's like the number of edges away on the graph maybe something is like that i think that you're right i think that is kind of finding out what's the magical number of sort of clusters to apply to get it to more accurately sort of map it to, you know, mechanisms of action, for example. I ended up sort of building off of the work that I did. I made a word cloud. And sort of mechanism of action is what I used for color. So I think it's the right sort of concept. I did create sort of using a log function, sort of using EC50, again, kind of driving up what Inid created. And it was like, it didn't, it just didn't look right. And it was pretty ugly. I'll show it so others can laugh too in a bit. But yeah, it's just the work that Inid, he did was was sort of amazing. And it was like, it was very helpful for me to be easy for me to kind of build off what you had driven based on your research. Thank you. I'm very taken by the numerical clustering as well. I'm gonna, as one of the things that, that Matthew was driving at is that we end up with clusters of values that are based around mechanism of action. They're relatively focused. I would be very intrigued to try just a small set of those and apply them with the K-means clustering just to see what actually pops out. Because my suspicion is that this might be a much better way to look at a large data set than what I had envisioned. As we now begin to move into much larger data sets, a mechanism to do that across many different cell lines, many different cancers may be extremely helpful. And that would be awesome. And I also wanted to add that there are several approaches that can be used to convert categorical data to numerical and we can explore them as well. So I'm not sure what StreamDB uses under the hood for doing what they do, but there are, well, I mentioned, I think the label encoding is one of the approaches and it assigns numerical labels to each category. So maybe we could explore that. We could also, well, again, I'm just thinking out loud. We could try and convert it. We have our own dataset. We can try and convert it to some sort of vectors and try and cluster them together. And I think that's what large language models do now. They do embeddings and they try to cluster words with similar meanings into groups. Then I also know that there is a variation to a k-cluster algorithm. In the talk, I mentioned that k-means uses distances. And I think K-mode uses this, like, similarity, techniques to compute similarity. And so, well, yeah. So what I wanted to say, so there are ways for us to explore if needed to convert like um stories to to new numbers one one of the very interesting things that we that we do use is a uh function that is based on the difference between two values, which is basically a distance. Okay, so that's already baked into our calculation, one of our calculations. So again, things might line up in a very intriguing way. You know, if we can just kind of ponder it a little bit and think about how to how best to approach it yeah and that was one of the cool things in terms of the discussion with you know last night and it went longer than i expected just because you know you start geeking out about you know the cool ideas but it's like the aspect of certain elements are about categorization, right? Which is sort of what I think K-means does very well. And then you've got the MCL clustering, which is kind of, is the network stuff. Like it's like driving things according to, based on nodes in the graph, like what's the extent of the relationship. So I think that, you know, the convergence and sort of the divergence between different approaches is valuable to sort of look into further. Michael had a question earlier that I want to sort of circle back to. And I don't have it on screen, but it was basically having to do with looking across different cancer groups by tissue yeah I have a question Paul if you want me to go through that yeah so this is Michael's question and this I think was more towards Don, but it's a general question. Can we show the networks at the tissue level in string, breast versus lung, for example? Are there options deeper than human? Also, if we export the data from string for compound protein interactions, could we customize edge weights with our own S' values, the ones that we've computed, in the next visualization step? I think those are at least two questions. One on tissues and options deeper than human. And then another one on can we export it and then kind of play with the network, including some of our own weights and values. Yeah. So do you want me to... So I guess a quick question. Do we want to consider the formal presentation over and stop recording or do we want to continue to capture this? It's up to you. I tend to like Q&A in the videos. I'll annotate when the Q&A stops and stuff. Okay. It's riffing too a little bit. I will... Eden, do you mind if I take over presenting? No, of course. I'll stop sharing. Okay. Just because it was pretty. One of the things I'm going to do is pull up the... Let me know what you see. Do you see the... Okay, do you see the word cloud? Nothing is sharing yet. Nothing is sharing yet, of course. Nothing is sharing yet. Nothing is sharing yet, of course. Let's see. I'll go ahead and share the material. Okay. All right. So here, this was a word cloud that was created. It was like one of those situations where a clustering algorithm was imperfect, but I thought that, you know, word cloud might be better suited. So it is, it was a log, log base 10 of, you know, an average EC50 value. So it's not S delta prime, but it was like horseshoes and hand grenades. So it was just, and sort of the other, one of the things that I thought was maybe an interesting idea was for some of the prime mechanisms of actions, color code, so in particular the ones that were most common. But it's like this breaks down. It is based on the PRISM secondary data set and it's like some of them have a discrete clear value single sort of mechanism of action, but some of them inhibit multiple proteins. So it's like it's one one of those areas where, you know, when you're talking about, do you do categorical or do you do just numerical, maybe at some point, maybe initial analysis so you find new sort of aggregations and relationships, stick with numbers. And then once you start to identify patterns patterns you try to assign a label to but anyway it was just kind of interesting that it was also you know something that people tend to like word cloud for some reason so okay so i will get back and so the scenario was we are talking about um the ability to filter So what I think here is, did he suggest a key pathway that he'd want to overlap with specific tissue or gene? I guess we were talking about at the network level, at the tissues level, something, for example, breast cancer versus lung or, you know. Okay. So, the context that I'll do initially is like what I searched for was, and I can go back and show it, is that again, it was like with the pathways choice in the initial string search screen, I asked for the P53 signal and pathway. And actually, I'm going to go back. So just for my personal context, I remember, you know, which one to pick. Is it wiki pathways or keg? Okay, so that's fine if it's keg. Signal and pathway. Do they have a wiki one? I'll search everywhere. Okay, so this isn't a bad context. For the experts out there in terms of alternate sort of data sources if you're familiar with gene ontology right you as a data source you can also pick that as a signal but i'm going to stick with keg so okay so we're here what i've found is that um i am tempted to see... what does it do in terms of... For the purpose of the video, could you share just the browser for the contrast? Okay. We'll do......and...... That's weird that it's that markedly different. Maybe it's that I'm just tiring out my card. Okay, so we're switching here. This is... switches here this is okay Michael comments the estrogen receptor would be, for example, a good thing to search for breast tissue expression. Okay. So here let's see if I, okay, so disease gene associations. And that's, you know, you can pick, let's see, maybe more than one. So there is breast carcinoma, breast cancer, breast disease. There is the concept of tissue expression. And yeah, it's like some components. So anyway, there are a number of choices in terms of how you do it. There's also some of the obvious things we started with K. And if Kang saw that there was any obvious sort of overlap, it would come with it. But what I'll do is I'll go with breast cancer. And then what you'll see is sort of it highlights the related proteins. So, yeah. So in terms of, let's see. Okay, let me do it this way. I'll go back there. And if we are looking for... so estrogen metabolic pathway and estrogen bonding what kind of receptor let's see the abbreviation for that is esr that's the name of the receptor. That's true. So yeah, it's like this is a case where it's pulling from another data source. So ESR-mediated signaling sounds appropriate? Yeah. Or is it the nuclear events? No, the first one yeah um so it's one of those things where what i would like to get a better sense for it is like you know how do you filter things so it's like the you can overlay it with other pathways but it's like so maybe one of those things find sort of a keg or a wiki pathway that overlaps most closely with what you're looking at because oh here's estrogen receptor pathway and under wiki pathways and when I do that so I think that that's kind of my sort of thoughts is that you can see the context of where the initial pathway comes up and you can pick something that is more sort of filtered and specific. things. It's the link to the actual pathway definition in the target, you know, whether it's wiki pathways, keg, or gene ontology, it'll link you directly there. So that's kind of the, others may have a better thought in terms of a flow,'s that's my thought is that it will it kind of gives you sort of a human sort of manual workflow of how to kind of pick a better path so here i can pick you know wiki pathways or and so this is is that kind of helpful in terms of a workflow? Yeah, absolutely. I think, Michael, to your question, do we have tissue level? Yes, we do. We have it in a number of ways. Another one, so this is all in string. Another one I had seen previously in the talk was Stitch had cancer- specific annotations. So there was a breast cancer annotation and it said, these are the, you know, 100 genes associated with breast cancer. I think they must have had that for a bunch of different cancers and tissues. So there's, there's multiple ways. Essentially, we do have the answer of which tissues are involved with which of these genes. the answer of which tissues are involved with which of these genes. Yeah, well, one of the reasons that Michael's question is very important is because different tissues or different cancers may use the same pathway in slightly different ways. pathway in slightly different ways. They have a bias, they have a preference to a certain pathway. That doesn't mean that they exclude everything else, but they have a pathway preference. So breast cancers and ovarian cancers, estrogen receptors are very important in other types of cancer like, let me think of a good one for estrogen, thyroid cancer, not so much. And understanding those differences in the pathway based on tissue source can be very, very helpful in looking at the variation in response, which you sort of expect to find. And there's a part two to Michael's question, which is basically along the lines of, can we export it and the network and the interactions, and can we bring in our arbitrary ranking values, our S prime values in this case, delta S prime, which we have per compound. We have for this compound and cancer, we have a single value. And I think, so the first question to Don is, is there a way to export some of these networks? And then I think... Yes, that's the simple answer. Yeah, that's what I figured. And I think the more nuanced answer is going to be, this might start to bleed over into the work in the cytoscape. So we kind of preview that there's a way to export these networks, import them into a tool. And then that tool might be our avenue for bringing in our arbitrary data set, mapping it to this data, and then doing analysis. So I think, Michael, we'll have to kind of sit tight for our next talk on Cytoscape to kind of get a little bit more information on that. next talk on Cytoscape to kind of get a little bit more information on that. So with I think that so Cytoscape the integration with string was kind of out of the box but you don't have the compound related info but the I thought maybe that is is string in terms of a specific export. I was thinking that maybe this, so this is Stitch. This is the ESR. So what I was looking here is in terms of an export type, does it have one that's natively sort of supported? And so the equivalent, if I remember correctly, Cytoscape takes a TSV format, maybe somewhat simplified. So I think that the potential sort of solution would be maybe we need to, we can take this in, um, oh, I'm only sharing the browser, but you know, got this up in a, as a TSV file. So you can, um, yeah, I think it would be that you can download this and then tweak it a bit. Okay. I want to open it up to anybody else on this call for questions. Are there any questions from anybody on this call? Is there a place that the materials will be posted? Yes, if you're on the meetups, I will post it in a meetup comment later in the evening as a comment. And the material that I shared is through GitHub. So there's the slide deck that Matt's going to share, but the stuff that was from GitHub, there's a link to you to access. Thank you. You're welcome. All right, well, you know, this work that we had here is part of an ongoing project that I will post a link to. We meet every Sunday night. It's a computational biology working group. Again, we are trying to solve cancer, solve the discovery of molecules for cancers. One of the areas that's, you know, we found particularly that we can make a disproportionate impact is in rare diseases oftentimes there are relatively common pathways but they're just not proven in these rare diseases and therefore there is no there's no treatments in those diseases no approved treatments so you know anything we can do to help in that area. But generally, our algorithm is applicable across cancers. We're a few months away from publishing our next paper in lung cancer, but we do have other cancers coming up after that. We will kind of keep pivoting and scaling. If you're interested in that work, I will share a link out also in the meetup comment section. And when I upload this to work, I will share a link out also in the Meetup comment section and when I upload this to YouTube it will be in the comments description for that area as well. I want to give a huge thank you to Don and Ina. You both did an amazing job presenting today. This material is complicated and you guys have really supported us in making it accessible and supporting the working groups i really want to take a minute to thank you and applaud you guys for your real terrific work thank you so much so with that i think we'll go ahead and wrap up and thank you all for your time and have a wonderful evening don will you be able to hang around for a few minutes yeah that would be perfect if we can kill the recording. That would be great before we start doing a little laptop. Thank you so much. I'm stopping right now. Thanks, Patrick. | K-Means Clustering and Biomedical Data from STRING and STITCH | 6,714 | MoCo Makers Group | 20241004 | This presentation has two parts:
1. An introduction to biomedical data sources STRING/STITCH by Don Branson
STRING - https://string-db.org/
STITCH - http://stitch.embl.de/
2. A conceptual explanation of what K-Means clustering is, with some applied examples by Inna Zhurba.
TIMECODES:
Introduction: 00:00 - 02:25
Introduction to STRING and STITCH data tool: 02:25 - 29:20
Concepts of K-Means Clustering: 29:20 - 48:30
Case Study Applying the Techniques: 48:30 - 1:14:15
Code for K-Means Clustering: 1:14:15 - 1:24:13
Brainstorming and Discussion: 1:24:13 to end
RESOURCES:
Slides from this video: https://docs.google.com/presentation/d/1hgaA3hi2JRSFwNUDeZOCPRcOF7byx0qRGvZu6L6qq4M/edit?usp=sharing
Don’s GitHub with guide on STRING and STITCH:
https://github.com/donbr/nsclc-pathways/blob/main/README.md
Inna’s Google Colab on K-Means Clustering in Python:
https://colab.research.google.com/drive/1pQWhlX7LHusbRIvoK2z45H_o_1PBQkD2?usp=sharing
The Computational Bio Working Group website:
https://sites.google.com/dmvpetridish.com/comp-bio | 2024-10-03T23:15:53.570493 |
https://www.youtube.com/watch?v=vN0t-kcPOXo | Ada, go ahead and open up OpenAI's real-time API blog announcement. I've opened the browser to OpenAI's real-time API blog announcement for you. Thanks, Ada. If you need any more help, just let me know. It's time to start rethinking how we interact with our computers. Now that OpenAI has released the real-time API, you and I can build seamless next-generation AI experiences. As always, the greatest technology and opportunities come to you and I first, the engineer. Are you ready for speech-to-speech AI assistance powered by advanced reasoning models like O1? This is what we've been waiting for. Let's play with it. So no one is really using these tools to their full capabilities yet, myself included. Let's show off early versions of how we can truly utilize personal AI assistance. Hey, Ada. Hey, Dan, how can I assist you today? Yeah, can you go ahead and open up chat GPT, open up Claude, open up Gemini, then can you give me the current time, then can you generate a random number for me? And then after that, go ahead and open up Simon W's blog and then let's go ahead and open up Hacker News as well I've opened chat GPT Cla, and Gemini for you. The current time today is 1210 a.m. on October 5th, 2024. I also generated a random number for you, 23. Simon W.'s blog and Hacker News have been opened as well. Okay, so this is getting crazy, right? OpenAI employees are going to salivate when they watch this video. It's a beautiful orchestration of all of their great work. So we have the real time API calling a chain of tools with 100% accuracy, which contain individualized AI agents. And let's simplify AI agents. That's any combination of code and additional prompts with a single purpose built to solve a single problem. These AI agents can then run their own structured output calls that feed back into our top-level digital assistant. I'm not going to lie, I am high as a kite envisioning the capabilities and the future we can unlock with this tooling. The distance between your thoughts and your actions on your device are fading fast with this combination of technology. Let's push this further with some more interesting agentic functionality with prompt chains and O1 reasoning models powering a file manipulation agent. So this is all running locally via a async Python application. Hey Ada, can you go ahead and generate a CSV file for us? Go ahead and call it a user analytics and just add 10 mock rows. I've created the CSV file useranalytics.csv with 10 rows of mock data. Great job. This looks awesome. So can you go ahead and delete row 5, 6, and 7, and then can you add a new column? Let's call it membership and set it all to premium. I've updated the file useranalytics.csv. Rows five, six, and seven have been deleted, and a new column called membership has been added with all values set to premium. Is there anything else I can assist you with? Let's make another change here. Can you go ahead and add 70 additional rows to mock data using the reasoning model. Okay, so pretty incredible stuff there, right? Just by talking to my computer, I am generating and modifying this useranalytics.csv file. This is getting kind of out of hand, right? This is incredible. This is the dream of multi-agent applications we've been building toward on the channel for over a year now, since the very first one prompt is not enough video. I want to share a couple interesting insights and useful engineering patterns around building personal AI assistants. But first we have to highlight what's happening here. So in a very sloppy 1000 lines of code, we have a real time speech to speech assistant that can connect to and accurately call other agents. That's huge. These other agents are called through tools. And these agents can then themselves run their own prompts, they can use advanced reasoning models like a one. And ultimately, there's really no limit on what you can do inside of a function call right the sky's the limit now the real trick is do you know what you want to build and do you know what you need to do to wire together your architecture with your personal ai assistant at the top at the center which then commands your army of ai agents. That's what's next for us here on the channel. We're taking Ada to the limits, and then we're going to break those limits. That's been a big milestone and a big keystone for us. Living software that works for us while we sleep. Let's go. If you're starting to build out your own personal AI assistant, I'm super curious, what are some patterns you're setting up and what are the first use cases you want your personal ai assistant to solve on your behalf so we have a couple tools that really lead a lot of our functionality here right you saw the get time method you saw the random number you saw open browser and then we have some kind of essential file manipulation calls right there's create file update file and delete file let's create file, update file, and delete file. Let's go ahead and showcase these a little bit more and really show off what these three simple but essential methods can do with a personal AI system. Hey Ada, can you go ahead and delete the learning rust file? Are you sure you want to delete the learning underscore rust MD file? If so, please confirm with force delete yeah go ahead and force delete that file we won't be needing that fantastic can you go ahead and create a new file and just explain the basics of list comprehension and list manipulation and you know what actually go ahead and just talk about all the different types of loops that you can use in python. Instead of a markdown file, just go ahead and create a full-on Python file with examples and make sure you comment and document it all really well. I've created the file pythonloopsandcomprehensions.py with explanations and examples on list comprehensions and various types of loops in Python. If you need further assistance or more information, just let me know. Fantastic. Thank you, Ada. Can you go ahead and update our Python loops and comprehension file and just go ahead and move everything into a function and then place a main block at the bottom where we can comment in and out each example and kind of work through them at our leisure. The Python loops and comprehensions py file has been updated. Everything is now organized into functions with a main block at the bottom for easy testing and exploration. You can now comment in and out each example as needed. Wonderful. Thanks so much, Ada. There's a triple quote at the bottom of the file. Go ahead and remove that. The triple quote at the bottom of the Python loops and comprehensions.py file has been removed. If there's anything else you need, feel free to let me know. Nice job, Ada. Can you go ahead and create two more versions of our Python loops and comprehensions file? Basically, we want another version in TypeScript. And let's go ahead and create another version of the same thing in C Sharp. So create both those files with the appropriate file names and file extensions. And just explain all the variants of loops in both TypeScript and C Sharp. The files loops and comprehensions.ts for TypeScript and loops and comprehensions.cs for Sharp have been created. They both include explanations and code examples of various loops and comprehension methods in their respective languages. Great job, Ada. Thank you. Can you go ahead and delete all of our loop files? Go ahead and force delete them. Both loopsandcomprehensions.ts and loopsandcomprehensions.cs files have been successfully deleted. Nice job, Ada. Go ahead and get rid of the Python loops and comprehensions.cs files have been successfully deleted. Nice job, Ada. Go ahead and get rid of the Python loops and comprehensions as well. The Python loops and comprehensions file has been deleted. If there's anything else you need help with. So these are really interesting tools I wanted to share with you just to give you an idea of what's coming next in the age of generative AI. Now that we have these real-time speech-to-speech capabilities, it's all about utilizing their capabilities. These are some really important functions to keep your eyes on and to build out into your personal AI assistant because a lot of engineering is all about file manipulation, file updates, file creates, file deletes, and manipulation. So just wanted to give this idea to you. A couple additional things I want to show you here in the beginning i asked ada to open up a bunch of browser urls every assistant is going to need some personalization some specific information that's specific to you and the things that you're interested in so you can see here we have the browser urls that's what got loaded into the open browser ai agent and then of course we have some auxiliary information down here browser as a name and human name you can see this personalization getting expanded very quickly and an interesting agentic pattern we're going to be digging into on the channel is having an agent automatically update our personalization file personalization settings is going to be really important and I also wanted to share this runtime table so i think it's super important to have proof of the speed of these agents and i gotta say i'm really really impressed here the real-time speech-to-speech api from when i finish speaking to getting a response is on average about a second when we're running tools obviously this time gets blown up quite a bit but i think that's to be expected there are other ideas like moving tool calls to an async thread to let those run in the background definitely an improvement that can be made here but you can see here real-time api responses are really really really great sub one second for the entire flow this is really incredible it seems like under the hood what their model is doing is starting to generate the response as we're speaking. Since tokens run in sequence, they can automatically start generating the response and then stream that back to us in real time, right? So this is really, really cool, really incredible stuff. The technology is truly groundbreaking. And then you can see some of my other calls here, right? So delete file took under a second, took about a second update file took three seconds right so updates more involved because we're not just setting up the initial right we're having the llm updated and of course you can see here when i ran the reasoning model that took about 18 seconds right so just wanted to give some data here give you some insights into the actual run time of the real-time api i am in love with these numbers. In our previous personal AI assistant videos, it was taking about, in the best case, five seconds to run the speech-to-text, LLM, and text-to-speech workflow that you need to complete in AI assistant. OpenAI is doing all those steps in less than a second here for simple conversational workflows. This is truly incredible technology of course it comes with several trade-offs anyone telling you different is wrong or they're trying to sell you something the real-time api alone is incredibly expensive you know talk about huge costs i'm not surprised but the cost must come down for this to be viable at scale if we check out the pricing here you can see so this is six minutes of audio in and 24 cents out this is heavy pricing when i was testing this out uh the costs for me were around i think it was 10 to 15 dollars for i don't know it was like an hour testing back and forth right so the costs are there that's a trade-off and of course i think the second largest trade-off here is vendor lock-in if you work through this base, you'll see that everything I've built here is on the OpenAI stack. All the LLMs, the reasoning model, the speech-to-speech, the real-time API. OpenAI wants to be the king of agentics. They want to be the king of AI agents, LLMs, and the entire ecosystem, right? Even the B2C side of it. These are risks that we kind of just have to take in order to get and utilize the top generation, the state of the art application models and APIs, right? The real-time API, there's nothing as close to this. They've embedded the speech-to-text, the LLM and the text--to-speech. All in one workflow, we're getting sub-second responses and we can call tools. That's all we've needed to push through to the next generation of agentic technology. Right now, these are risks we have to take to stay on the edge of what's possible so we can learn and master the next generation of software before it hits the masses. So I hope it's super clear what's happening here. The distance between you and your device is slowly fading away. Can you feel it? I can feel it. We can think, then just speak, and it knows what we're looking for. It's scary. It's exciting. It's real. It's here. This beautiful hacked together prototype code base contains about a thousand lines. It's here. It's ready for you if you're interested. I hadn't seen a ton of Python based on device implementation. So I wanted to throw one together to test and validate this agent and to set up a new version of Ada that I can grow and expand on. It's a simple proof of concept of what's coming. I think for us engineers, this is a next level tool that is really important to spend time on and pay attention to and build into your development workflow. There's really no limit to what we can do here. It's all about what we can imagine and it's about staying out of OpenAI's way. Don't go out and build a product. Don't go out and try to sell something that is going to get bulldozed by the next open AI release. Steer clear. Just imagine the tech is going to improve, the models will improve, and then think about what you can build alongside these next generation generative AI tools and API tooling. I think a huge theme we're seeing here very clearly is that voice is a key piece of the next generation of user experiences a question we're going to be looking to answer on the channel is how can we utilize personal ai assistance in combination with incredible prompts and reasoning models to get more work done in parallel faster than ever. If that interests you, drop a like, drop a sub, and join the journey. Really excitingly, we are about to hit and cross the 20k subs mark. It's been an incredible journey so far. We've transformed so many times and we've learned so much over the course of the channel, all the way from one prompt is not enough to learning aider to exploring cursor to understanding prompt chains agent orchestration building out multi-agent systems it's all coming together with the release of the real-time api and it's building up to something incredible we're getting closer to building living software that works for us while we sleep with every single video i I want to say a huge thanks to every builder and engineer that's a part of this journey. I appreciate all the positive comments and all the feedback. And I just want to remind you, you are in control of this technology. You can take this and together we can imagine a future with these tools. I want this channel to be a place where you can come to understand the capabilities of this technology so that you can build a better future for yourself, for your family, for your work, and for your passions. We're not going to stop until we have Living Software working for us while we sleep. Keep building, stay focused. I'll see you in the next one. | Realtime API with Tool Chaining. ADA is BACK. o1 assistant FILE AI Agents | 1,024 | IndyDevDan | 20241007 | This is what we've been WAITING FOR. The Realtime API is the breakthrough we’ve been waiting for in personal AI assistants.
🤖🔥 ADA is BACK!
It's time to start rethinking how we interact with AI assistants. ESPECIALLY for software engineers.
🎥 Featured Links:
- Python Async Realtime API POC codebase:
https://github.com/disler/poc-realtime-ai-assistant
- OpenAI Realtime API:
https://openai.com/index/introducing-the-realtime-api/
- Super AI Agents with Structured Outputs:
https://youtu.be/PoO7Zjsvx0k
- Control Your Personal AI Assistant:
https://youtu.be/ikaKpfUOb0U
- One Prompt is Not Enough:
https://youtu.be/JjVvYDPVrAQ
🔥 In this game-changing video, we're unleashing the full potential of personal AI assistants like Ada. Discover how the new OpenAI Realtime API is tearing down the barriers between you and your digital assistant, enabling real-time tool chaining and function chaining like never before!
🛠️ See firsthand how Ada utilizes the o1 assistant and advanced AI agents to perform complex tasks with 100% accuracy. We'll dive deep into the mechanics of tool chaining and function chaining, showcasing how these techniques can transform your interaction with your personal AI assistant.
💡 Whether you're an engineer, developer, or AI enthusiast, understanding these cutting-edge techniques is crucial in the age of AI. We'll explore the trade-offs, discuss the risks, and explain why embracing the OpenAI Realtime API is worth it for engineers who want to stay ahead.
🚀 I, IndyDevDan, break down complex concepts into easy-to-understand insights. From experimenting with file AI agents to implementing personal AI assistant patterns, we'll guide you step-by-step through the revolutionary capabilities of Ada powered by the OpenAI Realtime API.
🌟 Don't forget to like and subscribe for more edge content on AI, automation, and the future of personal assistants!
📖 Chapters
00:00 ADA is back.
00:49 OpenAI Realtime API
03:02 o1 File CRUD AI Agent
06:08 Breaking down tool chaining
06:34 Experimenting with file ai agent
10:03 Personal AI Assistant Patterns
12:25 Realtime API Tradeoffs
13:22 It's worth the risk, engineers NEED this.
15:45 Wow, 20k subs soon, our focus has not changed.
#promptengineering #aiassistant #programming | 2024-10-07T08:09:50.788016 |
https://www.youtube.com/watch?v=oFfVt3S51T4 | The following is a conversation with the founding members of the Cursor team, Michael Truel, Swale Asif, Arvid Lundmark, and Aman Sanger. Cursor is a code editor based on VS Code that adds a lot of powerful features for AI-assisted coding. It has captivated the attention and excitement of the programming and AI communities. So I thought this is an excellent opportunity to dive deep into the role of AI in programming. This is a super technical conversation that is bigger than just about one code editor. It's about the future of programming, and in general, the future of human AI collaboration in designing and engineering complicated and powerful systems. This is the Lex Freeman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here's Michael, Swale, Arvid, and Aman. All right, this is awesome. We have Michael, Aman, Swale, Arvid here from the Cursor team. First up, big ridiculous question. What's the point of a code editor? So the code editor is largely the place where you build software. And today, or for a long time, that's meant the place where you text edit a formal programming language. And for people who aren't programmers, the way to think of a code editor is like a really souped up word processor for programmers. The reason it's souped up is code has a lot of structure. And so the quote-unquote word processor, the code editor, can actually do a lot for you that word processors in the writing space haven't been able to do for people editing text there. And so that's everything from giving you visual differentiation of the actual tokens in the code so you can scan it quickly, to letting you navigate around the code base, sort of like you're navigating around the internet with hyperlinks. You're going to sort of definitions of things you're using, to error checking, to catch rudimentary bugs. And so traditionally, that's what a code editor has meant and i think that what a code editor is is going to change a lot over the next 10 years as what it means to build software maybe starts to look a bit different i think also a code editor should just be fun yes that is very important that is very important and it's actually sort of an underrated aspect of how we decide what to build. Like a lot of the things that we build and then we try them out, we do an experiment and then we actually throw them out because they're not fun. And so a big part of being fun is like being fast a lot of the time. Fast is fun. Yeah, fast is, yeah. Yeah, that should be a t-shirt. Fundamentally, I think one of the things that draws a lot of people to building stuff on computers is this insane iteration speed. Where in other disciplines, you might be gate-capped by resources or the ability... Even the ability to get a large group together and coding is this amazing thing where it's you and the computer and, uh, that alone, you can, you can build really cool stuff really quickly. So for people who don't know, Cursor is this super cool new editor. That's a fork of VS Code. It would be interesting to get your kind of explanation of your own journey of editors. How did you, I think all of you were big fans of VS Code with Copilot. How did you arrive to VS Code and how did that lead to your journey with Cursor? Yeah, so I think a lot of us, well, all of us were originally FIM users. Pure FIM. Pure FIM, yeah. No Neo FIM, just pure FIM in a terminal. And at least for myself, it was around the time that Copilot came out, so 2021, that I really wanted to try it. So I went into VS Code, the only platform, the only code editor in which it was available. And even though I really enjoyed using Vim, just the experience of Copilot with VS Code was more than good enough to convince me to switch. And so that kind of was the default until we started working on Cursor. And maybe we should explain what Copilot does. It's like a really nice autocomplete. It suggests, as you start writing a thing, it suggests one or two or three lines how to complete the thing. And there's a fun experience in that, you know, like when you have a close friendship and your friend completes your sentences? Like when it's done well, there's an intimate feeling. There's probably a better word than intimate, but there's a cool feeling of like, holy shit, it gets me. And then there's an unpleasant feeling when it doesn't get you. And so there's that kind of friction. But I would say for a lot of people, the feeling that it gets me overpowers that it doesn't. And I think actually one of the underrated aspects of GitHub Copilot is that even when it's wrong, it's like a little bit annoying, but it's not that bad because you just type another character and then maybe then it gets you or you type another character and then it gets you. So even when it's wrong, it's not that bad. Yeah, you can sort of iterate and fix it. I mean, the other underrated part of Copilot for me sort of was just the first real AI product. So the first language model consumer product. So Copilot was kind of like the first killer app for LLMs. Yeah. And like the beta was out in 2021. Right. Okay. So what's the origin story of Cursor? So around 2020, the scaling loss papers came out from OpenAI. And that was a moment where this looked like clear, predictable progress for the field, where even if we didn't have any more ideas, it looks like you could make these models a lot better if you had more compute and more data. By the way, we'll probably talk for three to four hours on the topic of scaling laws. Yes. Just to summarize, it's a paper and a set of papers and a set of ideas that say bigger might be better for model size and data size in the realm of machine learning. It's bigger and better, but predictably better. Okay, that's another topic of conversation. Yeah. So around that time, for some of us, there were a lot of conceptual conversations about what's this going to look like? What's the story going to be for all these different knowledge worker fields about how they're going to be made better by this technology getting better? And then I think there were a couple of moments where the theoretical gains predicted in that paper started to feel really concrete, and it started to feel like a moment where you could actually go and not do a PhD if you wanted to work on, do useful work in AI. Actually felt like now there was this whole set of systems one could build that were really useful. And I think that the first moment we already talked about a little bit, which was playing with the early bit of Copilot. That was awesome and magical. I think that the next big moment where everything kind of clicked together was actually getting early access to GPT-4. So it was sort of end of 2022 was when we were tinkering with that model. And the step-up in capabilities felt enormous. And previous to that, we had been working on a couple of different projects. We had been, because of Copilot, because of ScalingOz, because of our prior interest in the technology, we had been tinkering around with tools for programmers, but things that are very specific. So we were building tools for financial professionals who have to work within a Jupyter notebook or playing around with, can you do static analysis with these models? And then the setup in GPT-4 felt like, look, that really made concrete the theoretical gains that we had predicted before. It felt like you could build a lot more just immediately at that point in time. And also, if we were being consistent, it really felt like this wasn't just gonna be a point solution thing. This was gonna be all of programming was gonna flow through these models. It felt like that demanded a different type of programming environment, a different type of programming. And so we set off to build that sort of larger vision around that. There's one that I distinctly remember. So my roommate is an IMO gold winner and there's a competition in the US called the Putnam, which is sort of the IMO for college people. And it's this math competition is exceptionally good. So Sheng Tong and Amon, I remember it sort of June of 2022 had this bet on whether the ma like 2024, June or July, you were going to win a gold medal in the IMO with the, with like models. IMO is International Math Olympiad. Uh, yeah. IMO is International Math Olympiad. And so Arvid and I are both there, you know, also competed in it. So it was sort of personal. And, uh, and I, I remember thinking, Matt, this is just, this is not gonna happen. This was like, it was like, even though I, I sort of believed in progress, I thought, you know, I'm a girl just like, I'm honest, just delusional. Yeah. That was the, that was the, and, and to be honest, it, I mean, I, I was to be clear, very wrong, but that was maybe the most prescient bet in the group. So the, the, the new results from deep mind, it turned out that you were correct. That's what that was technically not technically incorrect, but one point away. Amon was very enthusiastic about this stuff. And before Amon had this like scaling loss t-shirt that he would walk around with where it had the like charts and like the, the formulas on it. on it so you like felt the agi or you felt the uh yeah i distinctly remember there's this one conversation uh i had with with michael where before i hadn't thought super deeply and critically about scaling laws and he kind of posed the question why isn't scaling all you need or why isn't scaling gonna result in massive gains in progress and i think i went through like the like the stages of grief there is anger denial and then finally at the end just thinking about it uh acceptance um and i think i've been quite hopeful and uh optimistic about progress since i think one thing I'll caveat is, I think it also depends on like which domains you're going to see progress. Like math is a great domain because, especially like formal theorem proving, because you get this fantastic signal of actually verifying if the thing was correct. And so this means something like RL can work really, really well. And I think like you could have systems that are perhaps very superhuman in math and still not technically have AGI. Okay, so can we take it all the way to Cursor? And what is Cursor? It's a fork of VS Code. And VS Code is one of the most popular editors for a long time. Like everybody fell in love with it. Everybody loved Vim. I left DMAX for, sorry. So unified in some fundamental way, the developer community. And then you look at the space of things, you look at the scaling laws, AI is becoming amazing. And you decided, okay, it's not enough to just write an extension for your VS Code because there's a lot of limitations to that. If AI is going to keep getting better and better and better, we need to really rethink how the AI is going to be part of the editing process. And so you decided to fork VS Code and start to build a lot of the amazing features we'll be able to talk about. But what was that decision like? Because there's a lot of extensions, including Copilot, of VS Code that are doing sort of AI type stuff. What was the decision like to just fork VS Code? So the decision to do an editor seemed kind of self-evident to us for at least what we wanted to do and achieve. Because when we started working on the editor, the idea was these models are going to get much better, their capabilities are going to improve, and it's going to entirely change how you build software, both in a you will have big productivity gains, but also radical in a like the active building software is going to change a lot. And so you're very limited in the control you have over a code editor if you're a plug-in to an existing coding environment. And we didn't want to get locked in by those limitations. We wanted to be able to just build the most useful stuff. Okay, well then the natural question is, you know, VS Code is kind of with Copilot a competitor. So how do you win? Is it basically just the speed and the quality of the features? Yeah, I mean, I think this is a space that is quite interesting, perhaps quite unique, where if you look at previous tech waves, maybe there's kind of one major thing that happened and it unlocked a new wave of companies. But every single year, every single model capability or jump you get in model capabilities, you now unlock this new wave of features, things that are possible, especially in programming. And so I think in AI programming, being even just a few months ahead, let alone a year ahead, makes your product much, much, much more useful. I think the cursor a year from now will need to make the cursor of today look obsolete. And I think Microsoft has done a number of fantastic things, but I don't think they're in a great place to really keep innovating and pushing on this in the way that a startup can. Just rapidly implementing features. And kind of doing the research experimentation necessary to really push the ceiling. I don't know if I think of it in terms of features as I think of it in terms of like capabilities for programmers. It's that like, you know, as you know the new one model came out and I'm sure there are going to be more models of different types like longer context and maybe faster. Like there's all these crazy ideas that you can try and hopefully 10% of the crazy ideas will make it into something kind of cool and useful. And we want people to have that sooner. To rephrase it, it's like an underrated fact is we're making it for ourself. When we started Cursor, you really felt this frustration that, you know, models, you could see models getting better. But the COBOL experience had not changed. It was like, man, these guys, the ceiling is getting higher. Why are they not making new things? They should be making new things. They should be like, here, where's all the alpha features? There were no alpha features. I'm sure it was selling well. I'm sure it was a great business, but it didn't feel, I'm one of these people that really want to try and use new things, and it was just, there's no new thing for like a very long while. Yeah, it's interesting. I don't know how you put that into words, but when you compare Cursor with Copilot, Copilot pretty quickly became, started to feel stale for some reason. Yeah, I think one thing that I think helps us is that we're sort of doing it all in one, where we're developing the UX and the way you interact with the model. At the same time as we're developing, like, how we actually make the model give better answers, so like how you build up the prompt or like how do you find the context, and for CursorTab, like how do you make the model give better answers. So like how you build up the prompt or like how do you find the context and for a cursor tab, like how do you train the model? So I think that helps us to have all of it like sort of like the same people working on the entire experience end-to-end. Yeah, it's like the person making the UI and the person training the model like sit to like 18 feet away. Often the same person even. Yeah, often even the same person. So, you can create things that are sort of not possible if you're not talking, you're not experimenting. And you're using, like you said, Cursor to write Cursor. Of course, oh yeah. Well, let's talk about some of these features. Let's talk about the all-knowing, the all-powerful, praise B to the tab. So the. You know, autocomplete on steroids. Basically, so what how does tab work? What is tab? To highlight and summarize at a high level, I'd say that there are two things that cursor is pretty good at right now there are there are other things that it does. But two things that it helps programmers with. One is this idea of looking over your shoulder and being a really fast colleague who can jump ahead of you and type and figure out what you're going to do next. And that was the original idea behind... That was the kernel of the idea behind a good autocomplete was predicting what you're going to do next. You can make that concept even more ambitious by not just predicting the characters after your cursor, but actually predicting the next entire change you're going to make, the next diff, next place you're going to jump to. And the second thing Cursor is pretty good at right now too is helping you sometimes jump ahead of the AI and tell it what to do and go from instructions to code. And on both of those, we've done a lot of work on making the editing experience for those things ergonomic and also making those things smart and fast. One of the things we really wanted was we wanted the model to be able to edit code for us. That was kind of a wish and we had multiple attempts at it before we had a sort of a good model that could edit code for you. Then after we had a good model, I think there have been a lot of effort to make the inference fast for having a good experience. And we've been starting to incorporate, I mean, Michael sort of mentioned this ability to jump to different places. And that jump to different places, I think, came from a feeling of, you know, once you accept an edit, it's like, man, it should be just really obvious where to go next. It's like, I made this change, the model should just know that like the next place to go to is like 18 lines down. Like, if you're a WIM user, user, you could press one eight JJ or whatever. But like, why, why even, why am I doing this? Like the model, the model should just know it. And then, so, so the idea was, yo, you just press tab. It would go 18 lines down and then make it show you, show you the next edit and you would press tab. So it was just you, as long as you could keep pressing tab. And so the internal competition was how many tabs can we make someone press it? Once you have like the idea, uh, more, more, uh, sort of abstractly that the thing to think about is sort of like once, how many, how, how are the edits sort of zero, zero entropy. So once you've sort of expressed your intent and the edit is, there's no like new bits of information to finish your thought, but you still have to type some characters to like make the computer understand what you're actually thinking, then maybe the model should just sort of read your mind and all the zero entropy bits should just be like tabbed away. That was sort of the abstract version. There's this interesting thing where if you look at language model loss on different domains, I believe the bits per byte, which is a kind of character normalized loss for code, is lower than language. Which means in general, there are a lot of tokens in code that are super predictable. A lot of characters that are super predictable. And this is, I think, even magnified when you're not just trying to autocomplete code, but predicting what the user is going to do next in their editing of existing code. And so, you know, the goal of cursor taps, let's eliminate all the low entropy actions you take inside of the editor. When the intent is effectively determined, let's just jump you forward in time, skip you forward. Well, what's the intuition and what's the technical details of how to do next cursor prediction? That jump, that's not so intuitive, I think, to people. Yeah, I think I can speak to a few of the details on how to make these things work. They're incredibly low latency, so you need to train small models on this task. In particular, they're incredibly pre-filled, token hungry. What that means is they have these really, really long prompts where they see a lot of your code, and they're not actually generating that many tokens. And so the perfect fit for that is using a sparse model, meaning an MOE model. So that was kind of one breakthrough we made that substantially improved its performance at longer context. The other being a variant of speculative decoding that we kind of built out called speculative edits. These are two, I think, important pieces of what make it quite high quality and very fast. Okay, so MOE, mixture of experts, the input is huge, the output is small. Yeah. Okay, so MOE, mixture of experts, the input is huge, the output is small. Yeah. Okay, so what else can you say about how to make these caching play a role? Oh, caching plays a huge role. Because you're dealing with this many input tokens, if every single keystroke that you're typing in a given line, you had to rerun the model on all of those tokens passed in, you're just going to, one, significantly degrade latency, two, you're going to kill your GPUs with load. So you need to design the actual prompts you use for the model such that they're caching-aware. And then, yeah, you need to reuse the KV cache across requests just so that you're spending less work, less compute. Again, what are the things that tab is supposed to be able to do kind of in the near term just to like sort of linger on that generate code, like fill empty space, also edit code across multiple lines and then jump to different locations inside the same file. Yeah. And then like jump to different files also. So if you make an edit in one file and maybe you have to go to another file to finish your thought, it should go to the second file also. The full generalization is like next action prediction. Like sometimes you need to run a command in the terminal and it should be able to suggest the command based on the code that you wrote to. Or sometimes you actually need to, like it suggests something, but it's hard for you to know if it's correct because you actually need some more information to learn. Like you need to know the type to be able to verify that it's correct. And so maybe it should actually take you to a place that's like the definition of something and then take you back so that you have all the requisite knowledge to be able to accept the next completion. So providing the human, the knowledge. Yes. Right. Yeah. Can you integrate, like, I just, uh, gotten to know a guy named prime Jen, who I believe has an SS, you can order coffee via SSH. Oh, yeah. Oh, we did that. We did that. So can that also the model do that? Like feed you and like, and provide you with caffeine? Okay, so that's the general framework. Yeah, and the magic moment would be if, it is, programming is this weird discipline where sometimes the next five minutes, not always, but sometimes the next five minutes of what you're going to do is actually predictable from the stuff you've done recently. And so can you get to a world where that next five minutes either happens by you disengaging and it taking you through, or maybe a little bit more of just you seeing next step what it's going to do and you're like, okay, that's good. That's good. That's good's good that's good and you can just sort of tap tap tap through these big changes as we're talking about this i should mention that one of the really cool and noticeable things about cursor is that there's this whole diff interface situation going on so like the model suggests with uh with the red and the green of like here's how we're going to modify the code and in the chat window you can apply and it shows you the diff and the green, of like, here's how we're going to modify the code. And in the chat window, you can apply, and it shows you the diff, and you can accept the diff. So maybe can you speak to whatever direction of that? We'll probably have like four or five different kinds of diffs. So we have optimized the diff for the autocomplete, so that has a different diff interface than when you're reviewing larger blocks of code. And then we're trying to optimize another diff thing for when you're doing multiple different files. And sort of at a high level, the difference is for when you're doing autocomplete, it should be really, really fast to read. uh actually it should be really fast to read in all situations uh but in autocomplete it's sort of you're really like your eyes focused in one area you you can't be in too many you the humans can't look in too many different places so you're talking about on the interface side on the interface so it currently has this box on the side so we have the current box and if it try to delete code in some place and tries to add other code, it tries to show you a box on the side. You can maybe show it if we pull it up on cursor.com. This is what we're talking about. So that, that box, it was like three or four different attempts at trying to make this, this thing work where first the attempt was like these blue crossed out lines. So before it was a box on the side, it used to show you the code to delete by showing you like Google Doc style, you would see like a line through it. Then you would see the new code. That was super distracting. And then we tried many different, you know, there was sort of deletions, there was trying to red highlight. Then the next iteration of it, which is sort of funny, would, you would hold the, on Mac, the option button. So it would sort of highlight a region of code to show you that there might be something coming. So maybe in this example, like the input and the value would get, would all get blue. The blue would highlight that the AI had a suggestion for you. Instead of directly showing you the thing, it would just hint that the AI had a suggestion. If you really wanted to see it, you would hold the option button, and then you would see the new suggestion. If you release the option button, you would then see your original code. So that's, by the way, that's pretty nice, but you have to know to hold the option button. Yeah. By the way, I'm not a Mac user, but I got it. It's a button, I guess, you people have. It's, again, it's just non-intuitive. I think that's the key thing. And there's a chance this is also not the final version of it. I am personally very excited for making a lot of improvements in this area. We often talk about it as the verification problem, where these diffs are great for small edits. For large edits, or when it's multiple files or something, it's actually a little bit prohibitive to review these diffs. And so there are a couple of different ideas here. One idea that we have is, okay, parts of the diffs are important. They have a lot of information. And then parts of the diff are just very low entropy. They're like the same thing over and over again. And so maybe you can highlight the important pieces and then gray out the not so important pieces. Or maybe you can have a model that looks at the diff and sees, oh, there's a likely bug here. I will mark this with a little red squiggly and say, you should probably review this part of the diff. And ideas in that vein, I think, are exciting. Yeah, that's a really fascinating space of UX design engineering. So you're basically trying to guide the human programmer through all the things they need to read and nothing more. Like optimally. Yeah. And you want an intelligent model to do it. Like currently diff algorithms are, they're like, they're just like normal algorithms. There is no intelligence. There's like intelligence that went into designing the algorithm, but then there's no, like, you don't care if it's about this thing or this thing as you want a model to do this. So I think the general question is like, Matt, these models are going to get much smarter. As the models get much smarter, the changes they will be able to propose are much bigger. So as the changes gets bigger and bigger and bigger, the humans have to do more and more and more verification work. It gets more and more hard. You need to help them out. I don't want to spend all my time reviewing code. Can you say a little more across multiple files, Div? Yeah, I mean, so GitHub tries to solve this, right? With code review. When you're doing code review, you're reviewing multiple diffs across multiple files. But like Arvid said earlier, I think you can do much better than code review, you know, code review kind of sucks. Like you spend a lot of time trying to grok this code that's often quite unfamiliar to you. And it often like doesn't even actually catch that many bugs. And I think you can significantly improve that review experience using language models, for example, using the kinds of tricks that Arpad described of maybe pointing you towards the regions that actually matter. I think also if the code is produced by these language models and it's not produced by someone else, like the code review experience is designed for both the reviewer and the person that produced the code. In the case where the person that produced the code is the language model, you don't have to care that much about their experience. You can design the entire thing around the reviewers such that the reviewers job is as fun, as easy, as productive as possible. And I think that that feels like the issue with just kind of naively trying to make these things look like code review. I think you can be a lot more creative and push the boundary in what's possible. Just one one idea there is I think ordering matters. Generally, when you review a PR, you, you have this list of files, and you're reviewing them from top to bottom. But actually, like you actually want to understand this part first, because that came like logically first, and then you want to understand the next part. And you don't want to have to figure out that yourself, you want a model to guide you through the thing. And is the step of creation going to be more and more natural language is the goal versus with actual writing? I think sometimes. I don't think it's going to be the case that all of programming will be natural language. And the reason for that is, you know, if I'm pair programming with Swalla and Swalla is at the computer and the keyboard, and sometimes if I'm pair programming with Swalla and Swalla is at the computer and the keyboard. And sometimes if I'm like driving, I want to say to Swalla, hey, like implement this function and that works. And then sometimes it's just so annoying to explain to Swalla what I want him to do. And so I actually take over the keyboard and I show him, I write like part of the, and then it makes sense. And that's the easiest way to communicate. And so I think that's also the case for AI. Sometimes the easiest way to communicate with AI will be to show an example, and then it goes and does the thing everywhere else. Or sometimes if you're making a website, for example, the easiest way to show to the AI what you want is not to tell it what to do, but drag things around or draw things. And yeah, and maybe eventually we will get to brain-machine interfaces or whatever and kind of understand what you're thinking. And so I think natural language will have a place. I think it will definitely not be the way most people program most of the time. I'm really feeling the AGI with this editor. It feels like there's a lot of machine learning going on underneath. Tell me about some of the ML stuff that makes it all work. Well, Cursor really works via this ensemble of custom models that we've trained alongside, you know, the frontier models that are fantastic at the reasoning intense things. And so CursorTab, for example, is a great example of where you can specialize this model to be even better than even frontier models if you look at evals on the task we set it at. The other domain, which it's kind of surprising that it requires custom models, but it's kind of necessary and works quite well, is in apply. So I think these models are like the frontier models are quite good at sketching out plans for code and generating like rough sketches of like the change, but actually creating diffs is quite hard for frontier models for your training models. Like you try to do this with Sonnet, with O1, any frontier model, and it really messes up stupid things like counting line numbers, especially in super, super large files. And so what we've done to alleviate this is we let the model kind of sketch out this rough code block that indicates what the change will be. And we train a model to then apply that change to the file. And we should say that Apply is, the model looks at your code, it gives you a really damn good suggestion of what new things to do, and the seemingly for humans trivial step of combining the two, you're saying is not so trivial. Contrary to popular perception, it is not a deterministic algorithm. Yeah. I think you see shallow copies of Apply elsewhere, and it just breaks most of the time because you think you can kind of try to do some deterministic matching, and then it fails at least 40% of the time. And that just results in a terrible product experience. I think in general, this regime of you are going to get smarter and smarter models. So one other thing that Apply lets you do is it lets you use fewer tokens with the most intelligent models. This is both expensive in terms of latency for generating all these tokens and cost. So you can give this very, very rough sketch, and then have your small models go and implement it because it's a much easier task to implement this very, very sketched out code. And I think that this regime will continue where you can use smarter and smarter models to the planning. And then maybe the implementation details can be handled by the less intelligent ones. Perhaps you'll have, you'll have maybe a one, maybe it'll be even more capable models, given an even higher level plan that is kind of recursively applied by Sonnet and an Eply model. Maybe we should talk about how to make it fast. Yeah. I feel like fast is always an interesting detail. Fast is good. Yeah. How do you make it fast? Yeah, so one big component of making it fast is speculative edits. So speculative edits are a variant of speculative decoding. And maybe it'd be helpful to briefly describe speculative decoding. With speculative decoding, what you do is you can take advantage of the fact that most of the time, and I'll add the caveat that it would be when you're memory bound in language model generation, if you process multiple tokens at once, it is faster than generating one token at a time. So this is like the same reason why if you look at tokens per second. With prompt tokens versus generated tokens, it's much, much faster for prompt tokens. So what we do is instead of using what speculative decoding normally does, which is using a really small model to predict these draft tokens that your larger model will then go in and verify. With code edits, we have a very strong prior of what the existing code will look like. And that prior is literally the same exact code. So you can do is you can just feed chunks of the original code back into the into the model. And then the model will just pretty much agree most of the time that okay, I'm just going to spit this code back out. And so you can process all of those lines in parallel. And you just do this with sufficiently many chunks, and then eventually you'll reach a point of disagreement, where the model will now predict text that is different from the ground truth original code. It'll generate those tokens, and then we kind of will decide after enough tokens match the original code to re-start speculating in chunks of code. What this actually ends up looking like is just a much faster version of normal editing code. So it's just like, it looks like a much faster version of the model rewriting all the code. So just we can use the same exact interface that we use for for diffs, but it will just stream down a lot faster. And then the advantages that wireless streaming, you can just also be reviewing, start reviewing the code before it's done. So there's no big loading screen. So maybe that is part of the advantage. So the human can start reading before the thing is done. I think the interesting riff here is something like, like speculation is a fairly common idea nowadays. It's like not only in language models. I mean, there's obviously speculation in CPUs and there's like speculation for databases and speculation all over the place. Let me ask the sort of the ridiculous question of which LLM is better at coding, GPT, Claude, who wins in the context of programming? And I'm sure the answer is much more nuanced because it sounds like every single part of this involves a different model. Yeah, I think there's no model that Pareto dominates others, meaning it is better in all categories that we think matter. The categories being speed, ability to edit code, ability to process lots of code, long context, you know, a couple of other things and kind of coding capabilities. The one that I'd say right now is just kind of net best is Sonnet. I think this is a consensus opinion. Our one's really interesting, and it's really good at reasoning. So if you give it really hard programming interview style problems, or leetcode problems, you can do quite quite well on them. But it doesn't feel like it kind of understands your rough intent as well as Sonnet does. Like, if you look at a lot of the other frontier models, one qualm I have is it feels like they're not necessarily over... I'm not saying they train on benchmarks, but they perform really well on benchmarks relative to everything that's in the middle. So if you try it in all these benchmarks and things that are in the distribution of the benchmarks they're evaluated on, they'll do really well. But when you push them a little bit outside of that, Sonnet's, I think, the one that kind of does best at kind of maintaining that same capability. Like, you kind of have the same capability in the benchmark as when you try to instruct it to do anything with coding. What, another ridiculous question, is the difference between the normal programming experience versus what benchmarks represent? Like, where do benchmarks fall short, do you think, when we're evaluating these models? By the way, that's like a really, really hard, it's like critically important detail, like how different benchmarks are versus real coding. Where real coding, it's not interview style coding. It's you're doing these... Humans are saying half-broken English sometimes. And sometimes you're saying, like, do what I did before. Sometimes you're saying, you know, go add this thing and then do this other thing for me. And then make this UI element. And then, you know, it's just like a lot of things are sort of context dependent. You really want to like understand the human and then do, do what the human wants as opposed to sort of this, maybe the way to put it is sort of abstractly is, uh, the interview problems are very well specified, they lean a lot on specification while the human stuff is less specified. They lean a lot on specification while the human stuff is less specified. Yeah. I think that this benchmark question is both complicated by what Swalai just mentioned. And then also, what Amon was getting into is that even if there's this problem of the skew between what can you actually model in a benchmark versus real programming. And that can be sometimes hard to encapsulate because real programming is very messy and sometimes things aren't super well specified what's correct or what isn't. But then it's also doubly hard because of this public benchmark problem. And that's both because public benchmarks are sometimes kind of hill-climbed on, but then it's really, really hard to also get the data from the public benchmarks out of the models. For instance, one of the most popular agent benchmarks, SweetBench, is really, really contaminated in the training data of these foundation models. If you ask these foundation models to do a SweetBench problem, you actually don't give them the context of a codebase. They can like hallucinate the right file pass. They can hallucinate the right function names. And so it's also just the public aspect of these things is tricky. Yeah. Like in that case, it could be trained on the literal issues or pull requests themselves. And maybe the labs will start to do a better job or they've already done a good job at decontaminating those things. But they're not going to omit the actual training data of the repository itself. Like, these are all like some of the most popular Python repositories, like SymPy is one example. I don't think they're going to handicap their models on SymPy and all these popular Python repositories in order to get true evaluation scores in these benchmarks. Yeah. I think that given the dearths and benchmarks, there have been like a few interesting crutches that places that build systems with these models or build these models actually use to get a sense of are they going in the right direction or not. And in a lot of places, people will actually just have humans play with the things and give qualitative feedback on these. Like one or two of the foundation model companies, they have people who that's a big part of their role. And internally, we also qualitatively assess these models and actually lean on that a lot, in addition to like private evals that we have. It's like the vibe. The vibe, yeah. The vibe benchmark, human benchmark. You pull in the humans to do a vibe check yeah okay i mean that's that's kind of what i do like just like reading online forums and reddit and x just like well i don't know how to properly load in people's opinions because they'll say things like i feel like claude or gT's gotten dumber or something. They'll say, I feel like, and then I sometimes feel like that too, but I wonder if it's the model's problem or mine. Yeah, with Claude, there's an interesting take I heard where I think AWS has different chips, and I suspect they have slightly different numerics than NVIDIA GPUs and someone speculated that Claude's degraded performance had to do with maybe using the quantized version that existed on AWS Bedrock versus whatever was running on Anthropix GPUs. I interview a bunch of people that have conspiracy theories so I'm glad you spoke to this conspiracy. Well, it's not, not like conspiracy theory as much as they're just, they're like, they're, you know, humans, humans are humans and there's, there's these details and you know, you're doing like this crazy amount of flops and you know, chips are messy and man, you can just have bugs, like bugs are, it's, it's hard to overstate how, how hard bugs are to avoid. What's the role of a good prompt in all of this? You mentioned that benchmarks have really structured, well-formulated prompts. What should a human be doing to maximize success? And what's the importance of what the human... You wrote a blog post and you called it prompt design. Yeah, I think it depends on which model you're using. And all of them are slightly different and they respond differently to different prompts. But I think the original GPT-4 and the original sort of pre-double models last year, they were quite sensitive to the prompts. They also had a very small context window. And so we have all of these pieces of information around the code base that would maybe be relevant in the prompt. Like you have the docs, you have the files that you add, you have the conversation history. And then there's a problem like how do you decide what you actually put in the prompt and when you have a limited space. And even for today's models, even when you have long context, filling out the entire context window means that it's slower. It means that sometimes the model actually gets confused and some models get more confused than others. And we have this one system internally that we call pre-empt, which helps us with that a little bit. And I think it was built for the era before where we had 8,000 token context windows. And it's a little bit similar to when you're making a website. You sort of, you want it to work on mobile, you want it to work on a desktop screen. And you have this dynamic information, which you don't have, for example, if you're making, like, designing a print magazine. You have, like, you know exactly where you can put stuff. But when you have a website or when you have a prompt, you have these inputs and then you need to format them to always work. Even if the input is really big, then you might have to cut something down. And so the idea was, okay, like, let's take some inspiration. What's the best way to design websites? Well, the thing that we really like is React and the declarative approach where you use JSX in JavaScript, and then you declare, this is what I want, and I think this has higher priority, or this has higher z-index than something else. And then you have this rendering engine in web design, it's like Chrome, and in our case, it's a preempt renderer, which then fits everything onto the page. And as you declare, it will decide what you want, and then it figures out what you want. And so we have found that to be quite helpful. And I think the role of it has sort of shifted over time, where initially it was to fit to these small context windows. Now it's really useful because it helps us with splitting up the data that goes into the prompt and the actual rendering of it. And so it's easier to debug because you can change the rendering of the prompt and then try it on old prompts because you have the raw data that went into their prompt. And then you can see did my change actually improve it for this entire eval set? So do you literally prompt with JSX? Yes. So it kind of looks like React. There are components. We have one component that's a file component. And it takes in the cursor. Usually there's one line where the cursor is in your file. And that's probably the most important line because that's the one you're looking at. And so then you can give priorities. like that line has the highest priority and then you subtract one for every line that is farther away and then eventually when it's rendered it figure out how many lines can actually fit and it centers around that thing that's amazing yeah and you can do like other fancy things where if you have lots of code blocks from the entire code base you could use uh retrie retrieval and things like embedding and re-ranking scores to add priorities for each of these components. So should humans, when they ask questions, also try to use something like that? Like, would it be beneficial to write JSX in the problem or the whole idea is it should be loose and messy? I think our goal is kind of that you should just do whatever is the most natural thing for you. Yeah. And then we, our job is to figure out how do we actually like retrieve the relative things so that your thing actually makes sense. Well, this is sort of the discussion I had with Arvind of perplexity is like, his whole idea is like, you should let the person be as lazy yes yeah but like yeah that's a beautiful thing but i feel like you're allowed to ask more of programmers right so like if you say just do what you want i mean humans are lazy there's a kind of tension between just being lazy versus like provide more as uh be prompted almost like the system pressuring you or inspiring you to be articulate not in terms of the grammar of the sentences but in terms of the depth of thoughts that you convey inside the uh the problems i think even as a system gets closer to some level of perfection. Often when you ask the model for something, you just are not, not enough intent is conveyed to know what to do. And there are like a few ways to resolve that intent. One is the simple thing of having models just ask you, I'm not sure how to do these parts based on your query. Could you clarify that? I think the other could be maybe if you, there are five or six possible generations given the uncertainty present in your query so far. Why don't we just actually show you all of those and let you pick them? How hard is it for the model to choose to talk back? Sort of versus generating. It's hard. It's sort of like how to deal with the uncertainty. Do I choose to ask for more information to reduce the ambiguity? So, I mean, one of the things we do is, it's like a recent addition, is try to suggest files that you can add. So while you're typing, one can guess what the uncertainty is and maybe suggest that like, you know, maybe you're writing your API and we can guess using the commits that you've made previously in the same file that the client and the server is super useful. And there's like a hard technical problem of how do you resolve it across all commits? Which files are the most important given your current prompt? And we're still sort of initial versions rolled out, and I'm sure we can make it much more accurate. It's very experimental. But then the idea is we show you like, do you just want to add this file, this file, this file also to tell the model to edit those files for you? Because if maybe you're making the API, like you should also edit the client and the server that is using the API and the other one resolving the API. And so that'll be kind of cool as both there's the phase where you're writing the prompt and there's before you even click enter, maybe we can help resolve some of the uncertainty. To what degree do you use agentic approaches? How useful are agents? We think agents are really, really cool. Like, I think agents is like, it's like, it resembles sort of like a human, it's sort of like the things, like you can kind of feel that it, like you're getting closer to AGI because you see a demo where it acts as a human would, and it's really, really cool. I think agents are not yet super useful for many things. They, I think we're getting close to where they will actually be useful. And so I think there are certain types of tasks where having an agent would be really nice. Like I would love to have an agent. For example, if like we have a bug where you sometimes can't command C and command V inside our chat input box. And that's a task that's super well specified. I just want to say like in two sentences, this does not work, please fix it. And then I would love to have an agent that just goes off, does it and then a day later, I come back and I reviewed the thing. You mean it goes finds the right file? Yeah, it finds the right files. It like tries to reproduce the bug, it I review the thing. You mean it goes, finds the right file? Yeah, it finds the right files. It tries to reproduce the bug. It fixes the bug. And then it verifies that it's correct. And this could be a process that takes a long time. And so I think I would love to have that. And then I think a lot of programming, like there is often this belief that agents will take over all of programming. I don't think we think that that's the case because a lot of programming, a lot of the value is in iterating or you don't actually want to specify something up front because you don't really know what you want until you've seen an initial version and then you want to iterate on that and then you provide more information. And so for a lot of programming, I think you actually want a system that's instant that gives you an initial version instantly back and then you can iterate super, super quickly. What about something like that recently came out, Replit Agent that does also like setting up the development environment, installing software packages, configuring everything, configuring the databases and actually deploying the app. Is that also in the set of things you dream about? I think so. I think that would be really cool. For certain types of programming, it would be really cool. Is that within scope of Cursor? Yeah, we aren't actively working on it right now, but it's definitely like, we want to make the programmer's life easier and more fun. And some things are just really tedious and you need to go through a bunch of steps and you want to delegate that to an agent. And then some things you can actually have an agent in the background while you're working. Like let's say you have a PR that's both backend and frontend and you're working in the frontend and then you can have a background agent that does some work and figure out what you're doing, and then when you get to the back-end part of your PR, then you have some initial piece of code that you can iterate on. And so that would also be really cool. One of the things we already talked about is speed, but I wonder if we can just linger on that some more in the various places that the technical details involved in making this thing really fast. So every single aspect of cursor, most aspects of cursor feel really fast. Like I mentioned, the apply is probably the slowest thing. And for me, I'm sorry, the pain. It's a pain. It's a pain that we're feeling and we're working on fixing it uh yeah i mean it says something that something that feels i don't know what it is like one second or two seconds that feels slow that means that's actually uh shows that everything else is just really really fast um so is there some technical details about how to make some of these models how to make the chat fast how to to make the divs fast? Is there something that just jumps to mind? Yeah, I mean, so we can go over a lot of the strategies that we use. One interesting thing is cache warming. And so what you can do is, if, as the user is typing, you can have, you're probably going to use some piece of context. And you can know that before the user is done typing. So, you know, as we discussed before, reusing the KV cache results in lower latency, lower costs, cross requests. So as the user starts typing, you can immediately warm the cache with, like, let's say the current file contents. And then when they press enter, there's very few tokens, it actually has to pre fill and compute before starting the generation, this will significantly lower TTFT. Can you explain how KV cache works? Yeah. So the way transformers work, like it, I mean, like one of the mechanisms that allow transformers to not just independently, like the mechanism that allows transformers to not just independently look at each token but see previous tokens are the keys and values to tension. And generally the way tension works is you have at your current token some query, and then you've all the keys and values of all your previous tokens, which are some kind of representation that the model stores internally of all the previous tokens in the prompt. And like by default, when you're doing a chat, the model has to, for every single token, do this forward pass through the entire model. That's a lot of matrix multiplies that happen, and that is really, really slow. Instead, if you have already done that, and you stored the keys and values, and you keep that in the GPU, then when I'm let's say I have sorted for the last n tokens, if I now want to compute the output token for the n plus one token, I don't need to pass those first n tokens through the entire model, because I already have all those keys and values. And so you just need to do the forward pass through that last token. And then when you're doing attention, you're reusing those keys and values that have been computed, which is the only kind of sequential part or sequentially dependent part of the transformer. Is there like higher level caching of like caching of the prompts or that kind of stuff that could help? Yeah, there's other types level caching, like caching of the prompts or that kind of stuff that could help? Yeah, there's other types of caching you can kind of do. One interesting thing that you can do for CursorTab is you can basically predict ahead as if the user would have accepted the suggestion and then trigger another request. And so then you've cached, you've done the speculative. It's a mix of speculation and caching, right? Because you're speculating what would happen if they accepted it. And then you have this value that is cached, this suggestion. And then when they press tab, the next one would be waiting for them immediately. It's a kind of clever heuristic slash trick that uses a higher level caching and can give the... it feels fast despite there not actually being any changes in the model. And if you can make the kvcache smaller, one of the advantages you get is like, maybe you can speculate even more. Maybe you can guess here's the 10 things that could be useful. I like, uh, like you predict the next 10 and then like, it's possible the user hits the one of the 10. It's like much higher chance than the user hits like the exact one that you show them. Uh, maybe they type another character and we sort of hit something else in the cache. So there's, there's all these tricks where, um, the general phenomena here is, uh, I think it's, it's's also super useful for RL is, you know, maybe a single sample from the model isn't very good. But if you predict like 10 different things, turns out that one of the 10, that's right, is the probability is much higher. There's these passive k curves. And part of what RL does is you can exploit this passive k phenomena to make many different predictions. And one way to think about this, the model sort of knows internally, has some uncertainty over which of the k things is correct, or which of the k things does the human want. So when we RL our cursor tab model, uncertainty over which of the key things is correct, or which of the key things does the human want. When we RL our cursor tab model, one of the things we're doing is we're predicting which of the hundred different suggestions the model produces is more amenable for humans. Like which of them do humans more like than other things? Maybe there's something where the model can predict very far ahead versus a little bit and maybe somewhere in the middle, and then you can give a reward to the things that humans would like more and punish the things that it would like and then train the model to output the suggestions that humans would like more. You have these RL loops that are very useful that exploit these pass-it-key curves. Oman maybe can go into even more detail. Yeah, it is a little different than speed. But I mean, technically you tie it back in because you can get away with the smaller model if you RL your smaller model and it gets the same performance as the bigger one. And while I, and while I was mentioning stuff about KB, about reducing the size of your KB cache, there are other techniques there as well that are really helpful for speed. So kind of back in the day, like all the way two years ago, people mainly use multi-head attention. And I think there's been a migration towards more efficient attention schemes like group query or multi-query attention. And this is really helpful for then with larger batch sizes, being able to generate the tokens much faster. The interesting thing here is this now has no effect on that time to first token pre-fill speed. The thing this matters for is now generating tokens. And why is that? Because when you're generating tokens, instead of being bottlenecked by doing these super paralyzable matrix multiplies across all your tokens, you're bottlenecked by how quickly, for long context, with large batch sizes, by how quickly you can read those cache keys and values. And so then that's memory bandwidth and how can we make this faster? We can try to compress the size of these keys and values. So multi-query attention is the most aggressive of these, where normally with multi-head attention, you have some number of quote-, attention heads. And some number of kind of query query heads, multi query just preserves the query heads gets rid of all the key value heads. So there's only one kind of key value head, and there's all the remaining query heads. With group query, you instead, you know, preserve all the query heads, and then your keys and values are kind of, there are fewer heads for the keys and values, but you're not reducing it to just one. But anyways, like the whole point here is you're just reducing the size of your KV cache. And then there is MLA. Yeah, multi-latent. That's a little more complicated. And the way that this works is it kind of turns the entirety of your keys and values across all your heads into this kind of one latent vector that is then kind of expanded in for its time. But MLA is from this company called DeepSeq. It's quite an interesting algorithm. Maybe the key idea is sort of in both MQA and in other places, what you're doing is you're sort of reducing the number of KV heads. The advantage you get from that is there's less of them. But maybe the theory is that you actually want a lot of different, like you want each of the keys and values to actually be different. So one way to reduce the size is you keep one big shared vector for all the keys and values. And then you have smaller vectors for every single token. So that when you, you can, you can store the only the smaller thing as some sort of like low rank reduction. And the low rank reduction will that, and at the end of the time, when you eventually want to compute the final thing, remember that like your memory bound, which means that like you still have some, some compute left that you can use for these things. And so if you can expand the latent vector back out, and somehow this is far more efficient because you're reducing, for example, maybe reducing 32 or something, like the size of the vector that you're keeping. Yeah, there's perhaps some richness in having a separate set of keys and values and query that pairwise match up versus compressing that all into one and that interaction at least okay all that is dealing with uh being memory bound yeah and what i mean ultimately how does that map to the user experience trying to get the thing yeah the two things that it maps to is you can now make your cache a lot larger because you've less space allocated for the KV cache. You can maybe cache a lot more aggressively and a lot more things. So you get more cache hits, which are helpful for reducing the time to first token for the reasons that were kind of described earlier. And then the second being when you start doing inference with more and more requests and larger and larger batch sizes, you don't see much of a slowdown in as it's generating the tokens, the speed of that. But it also allows you to make your prompt bigger for certain. Yeah. So like the basic, the size of your KV cache is both the size of all your prompts multiplied by the number of prompts being processed in parallel. So you could increase either those dimensions, right? The batch size or the size of your prompts without degrading the latency of generating tokens. Arvid, you wrote a blog post, Shadow of a Workspace, iterating on code in the background. So what's going on? So to be clear, we want there to be a lot of stuff happening in the background and we're experimenting with a lot of things. Right now, we don't have much of that happening, other than the cache warming or figuring out the right context that goes into your command key prompts, for example. But the idea is, if you can actually spend computation in the background, then you can help the user maybe like at a slightly longer time horizon than just predicting the next few lines that you're going to make. But actually like in the next 10 minutes, what are you going to make? And by doing it in the background, you can spend more computation doing that. And so the idea of the shadow workspace that we implemented and we use it internally for like experiments is that to actually get advantage of doing stuff in the background you want some kind of feedback signal to give back to the model because otherwise like you can get higher performance by just letting the model think for longer and so like 01 is a good example of that. But another way you can improve performance is by letting the model iterate and get feedback. And so one very important piece of feedback when you're a programmer is the language server, which is this thing that exists for most different languages, and there's like a separate language server per language. And it can tell you, you know, you're using the wrong type here, and then gives you an error, or it can allow you to go to definition and sort of understands the structure of your code. So language servers are extensions developed by like there's a TypeScript language server developed by the TypeScript people, a Rust language server developed by the Rust people, and then they all interface over the language server protocol to VS Code. So that VS Code doesn't need to have all of the different languages built into VS Code, but rather you can use the existing compiler infrastructure. For linting purposes? It's for linting. It's for going to definition and for seeing the right types that you're using. So it's doing like type checking also? Yes, type checking and going to references. And that's like when you're working in a big project, you kind of need that. If you don't have that, it's like really hard to code in a big project. Can you say again how that's being used inside Cursor, the language server protocol communication thing? So it's being used in Cursor to show to the programmer, just like in VS Code. But then the idea is you want to show that same information to the models, the IOM models. And you want to do that in a way that doesn't affect the user, because you want to do it in background. And so the idea behind the shadow workspace was, okay, like one way we can do this is we spawn a separate window of cursor that's hidden. And so you can set this flag and electron is hidden. There is a window, but you don't actually see it. And inside of this window, the AI agents can modify code however they want, as long as they don't save it, because it's still the same folder. And then can get feedback from the linters and go to definition and iterate on their code. So like literally run everything in the background, like as if, right. Yeah. Maybe even run the code. So that's the eventual version. Okay. That's what you want. And a lot of the blog post is actually about how do you make that happen? Because it's a little bit tricky. You want it to be on the user's machine so that it exactly mirrors the user's environment. And then on Linux, you can do this cool thing where you can actually mirror the file system and have the AI make changes to the files. And it thinks that it's operating on the file level, but actually that's stored in memory, and you can create this kernel extension to make it work. Whereas on Mac and Windows, it's a little bit more difficult, but it's a fun technical problem. So that's why... One maybe hacky but interesting idea that I like is holding a lock on saving. And so basically, you can then have the language model kind of hold the lock on saving to disk. And then instead of you operating in the ground truth version of the files that are saved to disk, you actually are operating what was the shadow workspace before on these unsaved things that only exist in memory that you still get linter errors for and you can code in. And then when you try to maybe run code, it's just like there's a small warning that there's a lock and then you kind of will take back the lock from the language server if you're trying to do things concurrently, or from the shadow workspace if you're trying to do things concurrently. That's such an exciting feature, by the way. It's a bit of a tangent, but like to allow a model to change files. It's scary for people. But like, it's really cool to be able to just like, let the agent do a set of tasks and you come back the next day and kind of observe. Like it's a colleague or something like that. Yeah, and I think there may be different versions of runability where for the simple things where you're doing things in the span of a few minutes on behalf of the user as they're programming, it makes sense to make something work locally in their machine. I think for the more aggressive things, where you're making larger changes that take longer periods of time, you'll probably wanna do this in some sandbox remote environment. And that's another incredibly tricky problem of how do you exactly reproduce, or mostly reproduce to the point of it being effectively equivalent for running code, the user's environment with this remote sandbox. I'm curious what kind of agency you want for coding. Do you want them to find bugs? Do you want them to implement new features? What agency do you want? So by the way, when I think about agents, I don't think just about coding. I think so for the practice of this particular podcast is video editing. And a lot of, if you look in Adobe, a lot of there's code behind, it's very poorly documented code, but you can interact with Premiere, for example, using code. And basically all the uploading, everything I do on YouTube, everything, as you could probably imagine, I do all of that through code, including translation and overdubbing, all of this. So I envision all of those kinds of tasks, so automating many of the tasks that don't have to do directly with the editing. So that, okay, that's what I was thinking about. But in terms of coding, I would be fundamentally thinking about bug finding, like many levels of kind of bug finding and also bug finding like logical bugs, not logical, like spiritual bugs or something. One's like sort of big directions of implementation, that kind of stuff. That's your opinion on bug finding? Yeah. I mean, it's really interesting that these models are so bad at bug finding when just naively prompted to find a bug. Yeah. I mean, it's really interesting that these models are so bad at bug finding when just naively prompted to find a bug. They're incredibly poorly calibrated. Even the smartest models. Exactly. Even O1. How do you explain that? Is there a good intuition? I think these models are really strong reflection of the pre-training distribution. And I do think they generalize as the loss gets lower and lower. But I don't think the loss in the scale is quite, or the loss is low enough such that they're really fully generalizing in code. The things that we use these things for, the frontier models, that they're quite good at are really code generation and question answering. And these things exist in massive quantities in pre-training with all of the code on GitHub on the scale of many, many trillions of tokens and questions and answers on things like Stack Overflow and maybe GitHub issues. And so when you try to push one of these things that really don't exist very much online, like, for example, the cursor tap objective of predicting the next edit, given the edits done so far, the brittleness kind of shows. And then bug detection is another great example, where there aren't really that many examples of actually detecting real bugs and then proposing fixes. And the models just kind of really struggle at it. But I think it's a question of transferring the model, like in the same way that you get this fantastic transfer from pre trained models, just on code in general to the cursor tab objective, you'll see a very, very similar thing with generalized models that are really good at code to bug detection, it just takes like a little bit of kind of nudging in that direction. To be clear, I think they understand code really well. While they're being pre-trained, the representation that's being built up, like almost certainly somewhere in the stream, the model knows that maybe there's something sketchy going on. It has some sketchiness, but actually eliciting the sketchiness to... Part of it is that humans are really calibrated on which bugs are really important. It's not just actually saying there's something sketchy. It's like, is this sketchy trivial? Is this sketchy, like you're going to take the server down? Yeah. It's like part of it is maybe the cultural knowledge of, uh, like why is the staff engineer, staff engineer, staff engineers is good because they know that three years ago, like someone wrote a really sketchy piece of code that took, took the server down. And as opposed to like, as opposed to maybe there's like, you know, you just, this thing is like an experiment. So like, a few bugs are fine, like you're just trying to experiment and get the feel of the thing. And so if the model gets really annoying when you're writing an experiment, that's really bad. But if you're writing something for super production, you're like writing a database, right? You're writing code in Postgres or Linux or whatever, like your Linus Torvalds, it's sort of unacceptable to have even an edge case and just having the calibration of like. How paranoid is the user? Like, but even then, like if you're putting it a maximum paranoia, it's still just like, doesn't quite get it. Yeah. Yeah. Yeah. I mean, but this is hard for humans too, to understand what, which line of code is important, which is not like you i think one of your principles on a website says if if a code can do a lot of damage one should add a comment that say this this this line of code is is dangerous and all caps ten times no you say like for every single line of code inside the function, you have to... And that's quite profound. That says something about human beings because the engineers move on, even the same person might just forget how it can sync the Titanic, a single function. You might not intuit that quite clearly by looking at the single piece of code. Yeah. And I think that one is also partially also for today's AI models where if you actually write dangerous, dangerous, dangerous in every single line, like the models will pay more attention to that and will be more likely to find bugs in that region. That's actually just straight up a really good practice of labeling code of how much damage this can do. Yeah, I mean, it's controversial. Some people think it's ugly. Well, I actually think it's like, in fact, I actually think this is one of the things I learned from Rvit is, you know, like, I sort of aesthetically, I don't like it, but I think there's certainly something where like it's, it's useful for the models and, and humans just forget a lot. And it's really easy to make a small mistake and cause like bring down, you know, like just bring down the server and like, like, of course we, we like test a lot and whatever, but there's always these things that you have to be very careful with just normal doc strings. I think people will often just skim it when making a change and think oh this i know how to do this um and you kind of really need to point it out to them so that that doesn't slip through yeah you have to be reminded that you can do a lot of damage that's like we don't really think about that like yeah you think about okay how do I figure out how this works so I can improve it? You don't think about the other direction. Until, until we have formal verification for everything, then you can do whatever you want and you, you know, for certain that you have not introduced a bug. If the proof pass. But concretely, what do you think that future would look like? I think, uh, people will just not write tests anymore. And the model will suggest, like you write a function, the model will suggest a spec and you review the spec. And in the meantime, smart reasoning model computes a proof that the implementation follows the spec. And I think that happens for most functions. Don't you think this gets at a little bit some of the stuff you were talking about earlier with the difficulty of specifying intent for what you want with software? Where sometimes it might be because the intent is really hard to specify, it's also then going to be really hard to prove that it's actually matching whatever your intent is. Like you think that spec is hard to generate? Yeah, or just like for a given spec, maybe you can. I think there is a question of can you actually do the formal verification? Is that possible? I think that there's more to dig into there. But then also. Even if you have the spec? If you have the spec, how do you map the spec? Even if you have the spec. Is the spec written in natural language? Yeah, how do you map the spec? No, the spec would be formal. But how easy would that be to draw? I think that you care about things that are not going to be easily well specified in the spec language. I see, I see. Yeah. Would be maybe an argument against formal verification is all you need. Yeah. The worry is there's this massive document. Replacing something like unit tests, sure. Yeah, yeah. I think you can probably also evolve the spec languages to capture some of the things that they don't really capture right now. But I don't know. I think it's very exciting. And you're speaking not just about like single functions. You're speaking about entire code bases. I think entire code bases is harder, but that is what I would love to have. And I think it should be possible. And because you can even there's like a lot of work recently where you can prove formally verified down to the hardware. So like through the you formally verify the C code and then you formally verify through the GCC compiler and then through the very log down to the hardware. And that's like an incredibly big system, but it actually works. And I think big code bases are sort of similar in that they're like multi-layered system. And if you can decompose it and formally verify each part, then I think it should be possible. I think the specification problem is a real problem, but. How do you handle side effects? Or how do you handle, I guess, external dependencies, like calling the Stripe API? Maybe Stripe would write a spec for their API. But like, you can't do this for everything. Like, can you do this for everything you use? Like, how do you, how do you do it for, if there's a language model, like maybe, maybe like people will use language models as primitives in the programs they write. And there's like a dependence on it. And like, how, how do you now include that? I think you might be able to prove, prove still prove what about language models i think if it feels possible that you could actually prove that a language model is aligned for example or like you can prove that it actually gives the the right answer um that's the dream yeah that is i mean that's yeah if it's possible that's your i have a dream speech if it's possible, that's your, I have a dream speech. If it's possible, that will certainly help with, you know, making sure your code doesn't have bugs and making sure AI doesn't destroy all human civilization. So the full spectrum of AI safety to just bug finding. So you said the models struggle with bug finding. What's the hope? You know, my hope initially is, and I can let Michael chime in too, but it was like this. It should, you know, first help with the stupid bugs. Like it should very quickly catch the stupid bugs. Like off by one errors, like sometimes you write something in a comment and do it the other way. It's like very common. Like I do this. I write like less than in a comment and like I maybe write the greater than sign or's like very common. Like I do this, I write like less than in a comment and like I've maybe write the greater than sign or something like that and the model is like, yeah, you look sketchy, like, do you sure you want to do that? Uh, but eventually it should be able to catch harder bugs too. Yeah. And I think that it's also important to note that this is having good bug finding models feels necessary to get to the highest reaches of having AI do more and more programming for you. Where you're going to, you know, if the AI is building more and more of the system for you, you need to not just generate but also verify. And without that, some of the problems that we've talked about before with programming with these models will just become untenable. So it's not just for humans, like you write a bug, I write a bug, find the bug for me, but it's also being able to verify the AI's code and check it is really important. Yeah. And then how do you actually do this? Like we have had a lot of contentious dinner discussions of how do you actually train a bug model? But one very popular idea is, you know, it's kind of potentially easy to introduce a bug than actually finding the bug. And so you can train a model to introduce bugs in existing code. And then you can train a reverse bug model then that can find bugs using this synthetic data. So that's like one example. But yeah, there are lots of ideas for how to do this. You can also do a bunch of work, not even at the model level, of taking the biggest models and then maybe giving them access to a lot of information that's not just the code. It's kind of a hard problem to stare at a file and be like, where's the bug? And that's hard for humans often, right? And so often you have to run the code and being able to see things like traces and step through a debugger. There's a whole other direction where it kind of tends toward that. And it could also be that there are kind of two different product form factors here. It could be that you have a really specialty model that's quite fast that's kind of running in the background and trying to spot bugs. And it might be that sometimes, sort of to Arvid's earlier example about some nefarious input box bug, it might be that sometimes you know there's a bug, you're not just checking hypothesis-free, you're like, this is a problem, I really want to solve it. And you zap that with tons and tons and tons of compute. And you're willing to put in like $50 to solve that bug or something even more. Have you thought about integrating money into this whole thing? Like I would pay probably a large amount of money for if you found a bug or even generated code that I really appreciated. Like I had a moment a few days ago when I started using Cursor where it generated perfect three functions for interacting with the YouTube API to update captions and for localization in different languages. The API documentation is not very good. And the code across, like if I Googled it for a while, I couldn't find exactly. There's a lot of confusing information and cursor generated perfectly. And I was like, I just sat back. I read the code. I was like, this is correct. I tested it is correct. I was like, I want a tip on a, on a button that goes, yeah, there's $5. One that's really good just to support the company and support what the interface is and the others that probably sends a strong signal like good job right there's a much stronger signal than just accepting the code right you just actually send like a strong good job that and for bug finding, obviously, like, there's a lot of people, you know, that would pay a huge amount of money for a bug, like a bug bounty thing. Right? Is that, you guys think about that? Yeah, it's a controversial idea inside the company. I think it sort of depends on how much you believe in humanity, almost. You know, like, I think it would be really cool if like, you spend nothing to try to find a bug, and if it doesn't find a bug, you spend zero dollars. And then if it does find a bug, and you click accept, then it also shows like, in parentheses, like, one dollar. And so you spend one dollar to accept the bug. And then of course, there's a worry like, okay, we spent a lot of computation, like maybe people will just copy paste. I think that's a worry. And then there is also the worry that like introducing money into the product makes it like kind of, you know, like it doesn't feel as fun anymore. Like you have to like think about money and all you want to think about is like the code. And so maybe it actually makes more sense to separate it out and like you pay some fee like every month and then you get all of these things for free. But there could be a tipping component, which is not like it costs this. It still has that like dollar symbol. I think it's fine, but I also see the point where like maybe you don't want to introduce it. Yeah, I was gonna say the moment that feels like people do this is when they share it, when they have this fantastic example, they just kind of share it with their friends. There is also a potential world where there's a technical solution to this, like on our system problem too, where if we can get to a place where we understand the output of the system more, I mean, to the stuff we were talking about with like, you know, error checking with the LSP and then also running the code. But if you could get to a place where you could actually somehow verify, oh, I have fixed the bug, maybe then the bounty system doesn't need to rely on the honor system too. How much interaction is there between the terminal and the code? Like, how much information is gained if you run the code in the terminal? Like, can you use, can you do like a loop where it runs the code and suggests how to change the code if the code in runtime gives an error. Because right now they're separate worlds completely. Like I know you can do control K inside the terminal to help you write the code. You can use terminal context as well inside of check, man, K, kind of everything. We don't have the looping part yet, though we suspect something like this could make a lot of sense. There's a question of whether it happens in the foreground too, or if it happens in the background, like what we've been discussing. Sure. The background is pretty cool. Like if we do running the code in different ways, plus there's a database side to this, which how do you protect it from not modifying the database? But okay. I mean, there's, there's certainly cool solutions there. There's this new API that is being developed for... It's not in AWS, but, you know, it certainly is. I think it's in PlanetScale. I don't know if PlanetScale was the first one to add it. It's this ability to sort of add branches to a database, which is like if you're working on a feature and you want to test against the broad database, but you don't actually want to test against the broad database, you could sort of add a branch to the database. And the way to do that is to add a branch to the write ahead log. And there's obviously a lot of technical complexity in doing it correctly. I guess database companies need new things to do. They have good databases now. And I think like, you know, TurboBuffer, which is one of the databases we use, is going to add maybe branching to the write-add log. And so maybe the AI agents will use branching. They'll test against some branch, and it's sort of gonna be a requirement for the database to support branching or something. It'd be really interesting if you could branch a file system. Right. Yeah. If you think everything needs branching, it's like that. Yeah. It's like the problem with the multiverse, right? Like if you branch on everything, that's like a lot. I mean, there's obviously these like super clever algorithms to make sure that you don't actually kind of use a lot of space or cpu or whatever okay this is a good place to ask about infrastructure so you guys mostly use aws what what are some interesting details what are some interesting challenges why did you choose aws why is why is aws still winning? Hashtag. David Elikwu- AWS is just really, really good. It's really good. Like, um, whenever you use an AWS product, you just know that it's going to work. Like it might be absolute hell to go through the steps to set it up. Um, David Elikwu- why is the interface so horrible? Because it's just so good. It doesn't need the nature of winning. I think it's exactly, it's just nature. They're winning. Yeah. Yeah. But AWS, you can always trust, like it will always work. And if there is a problem, it's probably your problem. Yeah. Okay. Is there some interesting, like challenges to, he has a pretty new startup to get scaling to like to so many people. And yeah, I think that there it has been an interesting journey adding, you know, each extra zero to the request per second. But you run into all of these with like, you know, the general components you're using for for caching and databases run into issues as you make things bigger and bigger. And now we're at the scale where we get like, you know, into overflows on our tables and things like that. And then also, there have been some custom systems that we've built, like, for instance, our retrieval system for computing a semantic index of your code base and answering questions about a code base that have continually I feel like been one of the trickier things to scale. I have a few friends who are super senior engineers, and one of their lines is like, it's very hard to predict where systems will break when you scale them. You can sort of try to predict in advance, but there's always something weird that's going to happen when you add this extra zero. You thought you thought through everything, but you didn't actually think through everything. But I think for that particular system, we've... So for concrete details, the thing we do is obviously we upload... We chunk up all of your code, and then we send up the code for embedding, and we embed the code. And then we store up sort of the code for embedding and we embed the code. And then we store the embeddings in a database, but we don't actually store any of the code. And then there's reasons around making sure that we don't introduce client bugs, because we're very, very paranoid about client bugs. We store much of the details on the server. Everything is sort of encrypted. So one of the technical challenges is always making sure that the local index, the local code base state, is the same as the state that is on the server. And the way technically we ended up doing that is for every single file, you can sort of keep this hash. And then for every folder, you can sort of keep a hash, which is the hash of all of its children. And you can sort of recursively do that until the top. And why do something complicated? One thing you could do is you could keep a hash for every file. And every minute, you could try to download the hashes that are on the server, figure out what are the files that don't exist on the server. Maybe you just created a new file. Maybe you just deleted a file. Maybe you checked out a new branch and try to reconcile the state between the client and the server. But that introduces like absolutely ginormous network overhead, both on the client side. I mean, nobody really wants us to hammer their WiFi all the time if you're using cursor. But also like, I mean, it would introduce like ginormous overhead on the database. It would sort of be reading this tens of terabyte database, sort of approaching like 20 terabytes or something database like every second, that's just kind of crazy. You definitely don't want to do that. So what do you do? You sort of, you just try to reconcile the single hash, which is at the root of the project. And then if something mismatches, then you go, you find where all the things disagree. Maybe you look at the children and see if the hashes match. And if the hashes don't match, go look at their children and so on. But you only do that in the scenario where things don't match. And for most people, most of the time if the hashes match and if the hashes don't match, go look at their children and so on. But you only do that in the scenario where things don't match. And for most people, most of the time, the hashes match. So it's a kind of like hierarchical reconciliation of hashes. Yeah, something like that. Yeah. It's called the Merkel tree. Yeah. Yeah. I mean, so yeah, this is cool to see that you kind of have to think through all these problems. And I mean, the point of like, the reason it's gotten hard is just because, like the number of people using it. And, you know, if some of your customers have really, really large code bases, to the point where we, you know, we originally reordered our code base, which is, which is big, but I mean, it's just not the size of some company that's been there for 20 years, and sort of has a ginormous number of files. And you sort of want to scale that across programmers. There's, there's all these details where like building a simple thing is easy, but sort of scaling it to a lot of people, like a lot of companies is obviously a difficult problem, which is sort of, you know, independent of actually, so that's, there's part of this scaling our current solution is also, you know, coming up with new ideas that obviously we're working on. Uh, but then, but then scaling all of that in the last few weeks, months. Yeah. And there are a lot of clever things, like additional things that go into this indexing system. For example, the bottleneck in terms of costs is not storing things in the vector database or the database. It's actually embedding the code. And you don't want to re-embed the code base for every single person in a company that is using the same exact code, except every single person in a company that is using the same exact code, except for maybe they're in a different branch with a few different files, or they've made a few local changes. And so because again, embeddings are the bottleneck, you can do one clever trick and not have to worry about like the complexity of like dealing with branches and the other databases where you just have some cache on the actual vectors computed from the hash of a given chunk. And so this means that when the nth person at a company goes and invents their codebase, it's really, really fast. And you do all this without actually storing any code on our servers at all. No code data is stored. We just store the vectors in the vector database and the vector cache. What's the biggest gains at this time you get from indexing the code base? I could just, out of curiosity, like what benefit do users have? It seems like longer term, there'll be more and more benefit, but in the short term, just asking questions of the code base, what's the usefulness of that? I think the most obvious one is just you want to find out where something is happening in your large codebase. And you sort of have a fuzzy memory of, okay, I want to find the place where we do X. But you don't exactly know what to search for in a normal text search. And so you ask a chat, you hit command enter to ask with the could-be's chat, and then very often it finds the right place that you were thinking of. I think, like you mentioned, in the future, I think this is only going to get more and more powerful where we're working a lot on improving the quality of our retrieval. And I think the ceiling for that is really, really much higher than people give it credit for. One question that's good to ask here, have you considered and why haven't you much done sort of local stuff to where you can do the, I mean, it seems like everything we just discussed is exceptionally difficult to do, to go to the cloud. You have to think about all these things with the caching and the large code base with a large number of programmers are using the same code base. You have to figure out the puzzle of that. A lot of it, you know, most software just does stuff, this heavy computational stuff locally. Have you considered doing sort of embeddings locally? Yeah, we thought about it and I think it would be cool to do it locally. I think it's just really hard. And one thing to keep in mind is that, you know, some of our users use the latest MacBook Pro. But most of our users, like more than 80% of our users, are in Windows machines, which many of them are not very powerful. And so local models really only works on the latest computers. And it's also a big overhead to build that in. And so even if we would like to do that, it's currently not something that we are able to focus on. And I think there are some people that do that, and I think that's great. But especially as models get bigger and bigger and you want to do fancier things with bigger models, it becomes even harder to do it locally. Yeah. And it's not a problem of weaker computers. It's just that, for example, if you're some big company, you have big company code base, it's just really hard to process big company code base even on the beefiest MacBook Pros. So even if, it's not even a matter of like, if you're just like a student or something, I think if you're like the best programmer at a big company, you're still gonna have a horrible experience. If you do everything locally, when you could do edge and sort of scrape by, but like, again, it wouldn't be fun anymore. Yeah, like an approximate nearest neighbors and this massive code base is gonna just eat up your memory and your CPU. And that's, and that's just that, like, let's talk about like, also the modeling side, where, as I've said, there are these massive headwinds against local models, where one things that seem to move towards mo ease, which one benefit is maybe they're more memory bandwidth bound, which plays in favor of local versus using GPUs or using NVIDIA GPUs. But the downside is these models are just bigger in total. And they're going to need to fit often not even on a single node, but multiple nodes. There's no way that's going to fit inside of even really good MacBooks. And I think especially for coding, it's not a question as much of like, does it clear some bar of like the models good enough to do these things and then like we're satisfied, which may be the case for other problems and maybe where local models shine. But people are always going to want the best, the most intelligent, the most capable things. And that's going to be really, really hard to run for almost all people locally. Don't you want the most capable model? You want Sonnet? And also with O1. I like how you're pitching me. Would you be satisfied with an inferior model? Listen, yes, I'm one of those, but there's some people that like to do stuff locally, especially like, yeah, that really, there's a whole, obviously open source movement that kind of resists and it's good that they exist actually, because you, you want to resist the power centers that are growing. Are there's actually an alternative to local models, uh, that I am particularly fond of. I think it's still very much in the research stage. But you could imagine to do homomorphic encryption for language model inference. So you encrypt your input on your local machine, then you send that up. And then the server can use loss of computation. They can run models that you cannot run locally on this encrypted data, but they cannot see what the data is. And then they send back the answer and you decrypt the answer and only you can see the answer. So I think that's still very much research and all of it is about trying to make the overhead lower because right now the overhead is really big. But if you can make that happen, I think that would be really, really cool. And I think it would be really, really impactful. Because I think one thing that's actually kind of worrisome is that as these models get better and better, they're going to become more and more economically useful. And so more and more of the world's information and data will flow through, you know, one or two centralized actors. And then there are worries about there can be traditional hacker attempts, but it also creates this kind of scary part where if all of the world's information is flowing through one node in plain text, you can have surveillance in very bad ways. And sometimes that will happen for, you know, initially will be like good reasons. Like people will want to try to protect against like bad actors using AI models in bad ways. And then you will add in some surveillance code and then someone else will come in and, you know, you're on a slippery slope and then you start doing bad things with a lot of the world's data. And so I'm very hopeful that we can solve homomorphic encryption for language model inference. Yeah, doing privacy preserving, machine learning. But I would say that's the challenge we have with all software these days. It's like there's so many features that can be provided from the cloud and all of us increasingly rely on it and make our life awesome. But there's downsides and that's why you rely on really good security to protect from basic attacks. But there's also only a small set of companies that are controlling that data. And they obviously have leverage and they can be infiltrated in all kinds of ways. That's the world we live in. Yeah. I mean, the thing I'm just actually quite worried about is sort of the world where, I mean, so Anthropic has this responsible scaling policy and we're on like the low ASLs, which is the Anthropic Security Law or whatever, of the models. But as we get to like code and code ASL3, ASL4, whatever models, which are sort of very powerful. But for mostly reasonable security reasons, you would want to monitor all the prompts. But I think that's sort of reasonable and understandable where everyone is coming from. But Matt, it'd be really horrible if sort of like all the world's information is sort of monitored that heavily. It's way too centralized. It's like, it's like sort of this, it's like really fine line you're walking where on the one side, like you don't want the models to go rogue on the other side, like humans, like, I don't, I don't know if I, if I trust like all the world's information to pass through like three, three model providers. Yeah. Why do you think it's different than cloud providers? Because I think a lot of this data would never have gone to the cloud providers in the first place. This is often like you want to give more data to the EIA models. You want to give personal data that you would never have put online in the first place to these more data to the EIO models. You want to give personal data that you would never have put online in the first place to these companies or to these models. And it also centralizes control, where right now for Cloud, you can often use your own encryption keys. And AWS can't really do much. But here is just centralized actors that see the exact plaintext of everything. On the topic of context, that's actually been a friction for me when I'm writing code in Python, there's a bunch of stuff imported. There's a you could probably intuit the kind of stuff I would like to include in the context. How hard is it to auto-figure out the context? It's tricky. I think we can do a lot better at computing the context automatically in the future. One thing that's important to note is there are trade-offs with including automatic context. So the more context you include for these models, first of all, the slower they are and the more expensive those requests are, which means you can then do less model calls and do less fancy stuff in the background. Also, for a lot of these models, they get confused if you have a lot of information in the prompt. So the bar for accuracy and for relevance of the context you include should be quite high. But already we do some automatic context in some places within the product. It's definitely something we want to get a lot better at. And I think that there are a lot of cool ideas to try there, both on the learning better retrieval systems, like better embedding models, better re-rankers. I think that there are also cool academic ideas, you know, stuff we've tried out internally, but also the field is grappling with writ large, about can you get language models to a place where you can actually just have the model itself, like understand a new corpus of information. And the most popular talked about version of this is, can you make the context windows infinite? Then if you make the context windows infinite, can you make the model actually pay attention to the infinite context? And then after you can make it pay attention to the infinite context, to make it somewhat feasible to actually do it, can you then do caching for that infinite context? You don't have to recompute that all the time. But there are other cool ideas that are being tried that are a little bit more analogous to fine-tuning of actually learning this information and the weights of the model. And it might be that you actually get sort of a qualitatively different type of understanding if you do it more at the weight level than if you do it at the in-context learning level. I think the jury's still a little bit out on how this is all going to work in the end. But in the interim, us as a company, we are really excited about better retrieval systems and picking the parts of the code base that are most relevant to what you're doing. We could do that a lot better. Like one interesting proof of concept for the learning this knowledge directly in the weights is with VS Code. So we're in a VS Code fork and VS Code, the code is all public. So these models in pre-training have seen all the code. They've probably also seen questions and answers about it. And then they've been fine-tuned and RLA-chefed to be able to answer questions about code in general. So when you ask it a question about VS Code, sometimes it'll hallucinate, but sometimes it actually does a pretty good job at answering the question. And I think like, this is just by, it happens to be okay. But what if you could actually like specifically train or post train a model such that it really was built to understand this code base? It's an open research question, one that we're quite interested in. And then there's also uncertainty of like, do you want the model to be the thing that end to end is doing everything? I it's doing the retrieval and its internals, and then kind of answering the question, creating the code? Or do you want to separate the retrieval from the frontier model, where maybe, you know, you'll get some really capable models that are much better than like the best open source ones in a handful of months. And then you'll want to separately train a really good open source model to be the retriever, to be the thing that feeds in the context to these larger models. Can you speak a little more to the post-training model to understand the code base? What do you mean by that? Is this a synthetic data direction? Is this? Yeah, I mean, there are many possible ways you could try doing it. There's certainly no shortage of ideas. It's just a question of going in and trying all of them and being empirical about which one works best. One very naive thing is to try to replicate what's done with VS Code and these frontier models. Let's continue pre-training, some kind of continued pre-training that includes general code data, but also throws in a lot of the data of some particular repository that you care about. And then in post-training, meaning in, let's just start with instruction fine-tuning, you have a normal instruction fine-tuning dataset about code, then you throw in a lot of questions about code in that repository. So you could either get ground truth ones, which might be difficult, or you could do what you kind of hinted at or suggested using synthetic data, i.e., kind of having the model ask questions about various pieces of the code. So you kind of take the pieces of the code, then prompt the model or have a model propose a question for that piece of code, and then add those as instruction finds you need data points. And then in theory, this might unlock the model's ability to answer questions about that code base. Let me ask you about OpenAI 01. What do you think is the role of that kind of test time compute system in programming? I think test time compute is really, really interesting. So there's been the pre-training regime, which will, as you scale up the amount of data and the size of your model, get you better and better performance, both on loss and then on downstream benchmarks, and just general performance when we use it for coding or other tasks. We're starting to hit a bit of a data wall, meaning it's going to be hard to continue scaling up this regime. And so scaling up test time compute is an interesting way of now increasing the number of inference time flops that we use, but still getting like, like, yeah, as you increase the number of flops use inference time getting corresponding improvements in the performance of these models. Traditionally, we just had to literally train a bigger model that always uses that always used that many more flops. But now we could perhaps use the same size model and run it for longer to be able to get an answer at the quality of a much larger model. And so the really interesting thing I like about this is there are some problems that perhaps require 100 trillion parameter model intelligence trained on 100 trillion tokens. But that's like maybe 1%, maybe like 0.1% of all queries. So are you going to spend all of this effort, all of this compute training a model that costs that much and then run it so infrequently? It feels completely wasteful when instead you get the model that can, that you train the model that's capable of doing the 99.9% of queries, and then you have a way of inference time, running it longer for those few people that really, really want max intelligence. How do you figure out which problem requires what level of intelligence? Is that possible to dynamically figure out when to use GPT-4, when to use a small model, and when you need the O1? I mean, yeah, that's an open research problem, certainly. I don't think anyone's actually cracked this model routing problem quite well. We'd like to. We have initial implementations of this for something like cursor tab, but at the level of going between 4.0 Sonnet to O1, it's a bit trickier. There's also a question of what level of intelligence do you need to determine if the thing is too hard for the four-level model? Maybe you need the O1-level model. It's really unclear. But you mentioned this, so there's a pre-training process, then there's post-training,, then there's post-training, and then there's like test time compute that FAIR does sort of separate. Where's the biggest gains? Well, it's weird because like test time compute, there's like a whole training strategy needed to get test time to compute to work. And the other really weird thing about this is no one, like outside of the big labs and maybe even just Open AI, no one really knows how it works. Like there've been some really interesting papers that show hints of what they might be doing. And so perhaps they're doing something with tree search using process reward models. But yeah, I just I think the issue is, we don't quite know exactly what it looks like. So it would be hard to kind of comment on where it fits in. I would put it in post-training, but maybe the compute spent for getting test time compute to work for a model is going to dwarf pre-training eventually. So we don't even know if O1 is using just like chain of thought, RL. We don't know how they're using any of these. We don't know anything. It's fun to speculate. Like if you were to build a competing model, what would you do? Yeah. So one thing to do would be, I think you probably need to train a process reward model, which is, so maybe we can get into reward models and outcome reward models versus process reward models. Outcome reward models are the kind of traditional reward models that people are trained for these for for language models, language modeling. And it's just looking at the final thing. So if you're doing some math problem, let's look at that final thing you've done everything. And let's assign a great how likely we think, like, what's the reward for this, this is outcome, process reward models,, like what's the reward for this outcome. Process reward models instead try to grade the chain of thought. And so OpenAI had some preliminary paper on this, I think, last summer, where they use human labelers to get this pretty large, several hundred thousand dataset of grading chains of thought. Ultimately, it feels like I haven't seen anything interesting in the ways that people use process reward models outside of just using it as a means of affecting how we choose between a bunch of samples. So what people do in all these papers is they sample a bunch of outputs from the language model and then use the process reward models to grade all those generations alongside maybe some other heuristics, and then use that to choose the best answer. The really interesting thing that people think might work and people want to work is tree search with these process reward models, because if you really can grade every single step of the chain of thought, then you can kind of branch out and, you know, explore multiple paths of this chain of thought, and then use these process-forward models to evaluate how good is this branch that you're taking. Yeah, when the quality of the branch is somehow strongly correlated with the quality of the outcome at the very end. So, like, you have a good model of knowing which branch to take. So, not just in the short term, like, in the long term. Yeah. And the interesting work that I think has been done is figuring out how to properly train the process. Or the interesting work that has been open-sourced and people, I think, talk about is how to train the process reward models maybe in a more automated way. I could be wrong here, could not be mentioning something, because I haven't seen anything super that seems to work really well for using the process reward models creatively to do tree search and code This is kind of an AI safety, maybe a bit of a philosophy question, so OpenAI says that they're hiding the chain of thought from the user, and they've said that that was a difficult decision to make, they, instead of showing the chain of thought, they're asking the model to summarize the chain of thought. They're also in the background saying they're going to monitor the chain of thought to make sure the model is not trying to manipulate the user, which is a fascinating possibility. But anyway, what do you think about hiding the chain of thought? One consideration for open AI, and this is completely speculative, could be that they want to make it hard for people to distill these capabilities out of their model. It might actually be easier if you had access to that hidden chain of thought to replicate the technology, because that's pretty important data, like seeing the steps that the model took to get to the final result. So you could probably train on that also. And there was sort of a mirror situation with this, with some of the large language model providers. And also this is speculation, but some of these APIs used to offer easy access to log probabilities for all the tokens that they're generating, and also log probabilities for the prompt tokens. And then some of these APIs took those away. And again, complete speculation, but one of the thoughts is that the reason those were taken away is if you have access log probabilities, similar to this hidden chain of thought, that can give you even more information to try and distill these capabilities out of the APIs, out of these biggest models, into models you control. As an asterisk on also the previous discussion about us integrating O1, I think that we're still learning how to use this model. So we made O1 available in Cursor because when we got the model, we were really interested in trying it out. I think a lot of programmers are going to be interested in trying it out. But O1 is not part of the default cursor experience in any way yet. And we still haven't found a way to yet integrate it into the editor in a way that we reach for sort of every hour, maybe even every day. And so I think the jury's still out on how to use the model. And we haven't seen examples yet of people releasing things where it seems really clear, like, oh, that's now the use case. The obvious one to turn to is maybe this can make it easier for you to have these background things running, right? To have these models in loops, to have these models be agentic. But we're still discovering. To be clear, we have ideas. We just need to try and get something incredibly useful before we put it out there. But it has these significant limitations. Even barring capabilities, it does not stream. And that means it's really, really painful to use for things where you want to supervise the output. And instead, you're just waiting for the wall text to show up. Also, it does feel like the early innings of test time computing search where it's just like a very, very much a v zero. And there's so many things that like, like, don't feel quite right. And I suspect in parallel to people increasing the amount of pre-training data and the size of the models and pre-training and finding tricks there, you'll now have this other thread of getting search to work better and better. So let me ask you about Strawberry Tomorrow Eyes. So it looks like GitHub Copilot might be integrating O1 in some kind of way. And I think some of the comments are saying, does this mean Cursor is done? I think I saw one comment saying that. I saw time to shut down Cursor. Time to shut down Cursor. So it's a time to shut down Cursor. I think this space is a little bit different from past software spaces over the 2010s, where I think that the ceiling here is really, really, really incredibly high. And so I think that the best product in three to four years will just be so much more useful than the best product today. And you can wax poetic about moats this and brand that and this is our advantage. But I think in the end, just if you don't have, like if you stop innovating on the product, you will lose. And that's also great for startups. That's great for people trying to enter this market because it means you have an opportunity to win against people who have lots of users already by just building something better. And so I think, yeah, over the next few years, it's just about building the best product, building the best system. And that both comes down to the modeling engine side of things. And it also comes down to the to the editing experience. Yeah, I think most of the additional value from cursor versus everything else out there is not just integrating the new model fast like a one it comes from all of the kind of depth that goes into these custom models that you don't realize are working for you in kind of every facet of the product as well as like the really uh thoughtful ux with every single feature all right uh from that profound answer let's descend back down to the technical. You mentioned you have a taxonomy of synthetic data. Oh, yeah. Can you please explain? Yeah, I think there are three main kinds of synthetic data. The first is, so what is synthetic data first? So there's normal data, like non-synthetic data, which is just data that's naturally created, i.e. usually it'll be from humans having done things. So from some human process, you get this data. Synthetic data, the first one would be distillation. So having a language model, kind of output tokens or probability distributions over tokens. And then you can train some less capable model on this. This approach is not going to get you a net, like more capable model than the original one that has produced the tokens. But it's really useful for if there's some capability you want to elicit from some really expensive, high latency model, you can then that distill that down into some smaller task specific model. The second kind is when like one direction of the problem is easier than the reverse. And so a great example of this is bug detection, like we mentioned earlier, where it's a lot easier to introduce reasonable looking bugs than it is to actually detect them. And this is probably the case for humans too. And so what you can do is you can get a model that's not training that much data that's not that smart to introduce a bunch of bugs in code. And then you can use that to then train, use a synthetic data to train a model that can be really good at detecting bugs. The last category, I think is think is I guess the main one that it feels like the big labs are doing for synthetic data, which is producing text with language models that can then be verified easily. So like, you know, extreme example of this is if you have a verification system that can detect if language is Shakespeare level and then you have a bunch of monkeys can detect if language is Shakespeare level, and then you have a bunch of monkeys typing in typewriters. You can eventually get enough training data to train a Shakespeare-level language model. And I mean, this is the case, very much the case for math, where verification is actually really, really easy for formal languages. And then what you can do is you can have an okay model, generate a ton of rollouts, and then choose the ones that you know have actually proved the ground truth theorems and train that further. There's similar things you can do for code with leet code-like problems where if you have some set of tests that you know correspond to if something passes these tests, it has actually solved the problem. You can do the same thing where you verify that it's past the test and then train the model the outputs that have passed the tests i think it's gonna be a little tricky getting this to work in all domains or just in general like having the perfect verifier feels really really hard to do with just like open-ended miscellaneous tasks. You give the model or more like long horizon tasks, even in coding. That's because you're not as optimistic as Arvid, but yeah. So yeah. So that, that, that third category requires having a verifier. Yeah. Verification is, it feels like it's best when you know for a fact that it's correct. And like, then like, it wouldn't be like using a language model to verify. It would be using tests or formal systems. Or running the thing to doing like the human form of verification where you just do manual quality control. Yeah, yeah. But like the language model version of that where it's like running the thing and it's actually understands the output. Yeah, no, that's for somewhere between. Yeah, I think that's the category that is most likely to result in massive gains. What about RL with feedback side, RLHF versus RLAIF? What's the role of that in getting better performance on the models? Yeah, so RLHF is when the reward model you use is trained from some labels you've collected from humans giving feedback. I think this works if you have the ability to get a ton of human feedback for this kind of task that you care about. RLAIF is interesting, because you're kind of depending on, like, this is actually kind of going to this, it's depending on the constraint that verification is actually a decent bit easier than generation. Because it feels like, okay, like, what are you doing? Are you using this language model to look at the language model outputs and then prove the language model? But no, it actually may work if the language model has a much easier time verifying some solution than it does generating it, then you actually could perhaps get this kind of recursive loop. I don't think it's going to look exactly like that. The other the other thing you could do is that we kind of do is like a little bit of a mix of RLA-IF and RLHF, where usually the model is actually quite correct, and this is in the case of CursorTab, picking between two possible generations of what is the better one. And then it just needs a little bit of human nudging with only on the order of 50, 100 examples to kind of align that prior the model has with exactly what you want. It looks different than I think normal RLA Chef where you're usually training these reward models on tons of examples. What's your intuition when you compare generation and verification, or generation and ranking? Is ranking way easier than generation? My intuition would just say, yeah, it should be. Like, this is kind of going back to, like, if you believe P does not equal NP, then there's this massive class of problems that are much, much easier to verify given a proof than actually proving it. I wonder if the same thing will prove P not equal to MP or P equal to MP. That would be really cool. That'd be a whatever feels metal. Why AI? Who gets the credit? Another open philosophical question. I'm actually surprisingly curious what a good bet for when AI will get the Fields Metal will be. Isn't this a Mon specialty? I don't know what a Mon's bet here is. Oh, sorry. Nobel Prize or Fields Metal first? Fields Metal. Fields Metal level. Fields Metal comes first, I think. Fields Metal comes first. Well, you would say that, of course. But it's also this isolated system in Verify. No, sure. Like, I don't even know if I would. You don't need to do this. I feel like I have much more to do there. It felt like the path to get to IMO was a little bit more clear because it already could get a few IMO problems. And there are a bunch of like, there's a bunch of low-hanging fruit, given the literature at the time, of like what tactics people could take. I think I'm, one, much less versed in the space of theory improving now and two yeah less intuition about how close we are to solving these really really hard open problems so you think you'll be feels amount of first it won't be like in uh physics or in oh 100 i think i think i think that's probably more likely like it's probably much more likely that it'll get in. Yeah, yeah, yeah. Well, I think it goes to, I don't know, like BSD, which is a bird's-wing-turned-diode conjecture, or like Riemann iPods, or any one of these hard math problems, which is actually really hard. It's sort of unclear what the path to get even a solution looks like. We don't even know what a path looks like, let alone- And you don't buy the idea a path looks like, let alone. And you don't buy the idea that this is like an isolated system and you can actually, you have a good reward system and it feels like it's easier to train for that. I think we might get Fields Metal before AGI. I mean, I'd be very happy. I'd be very happy. But I don't know if I think 2028, 2030. What feels metal. Feels metal. All right. It feels like forever from now, given how fast things have been going. Speaking of how fast things have been going, let's talk about scaling laws. So for people who don't know, maybe it's good to talk about this whole idea of scaling laws. So for people who don't know, maybe it's good to talk about this whole idea of scaling laws. What are they? Where do you think stand? And where do you think things are going? I think it was interesting. The original scaling laws paper by OpenAI was slightly wrong because I think of some issues they did with learning rate schedules. And then Chinchilla showed a more correct version. And then from then people have again kind of deviated from doing the compute optimal thing because people start now optimizing more so for making the thing work really well, given a given an inference budget. And I think there are a lot more dimensions to these curves than what we originally used of just compute, number of parameters and data. Like inference compute is the obvious one. I think context length is another obvious one. So if you care, like, let's say you care about the two things of inference compute and then context window, maybe the thing you want to train is some kind of SSM because they're much, much cheaper and faster at super, super long context. And even if maybe it is 10x worse scaling properties during training, meaning you spend 10x more compute to train the thing to get the same level of capabilities, it's worth it, because you care most about that inference budget for really long context windows. So it'll be interesting to see how people kind of play with all these dimensions. So yeah, I mean, you speak to the multiple dimensions, obviously. The original conception was just looking at the variables of the size of the model as measured by parameters and the size of the data as measured by the number of tokens and looking at the ratio of the two. Yeah. And it's kind of a compelling notion that there is a number, or at least a minimum. And it seems like one was emerging. Do you still believe that there is a kind of bigger is better? I mean, I think bigger is certainly better for just raw performance. And raw intelligence. And raw intelligence. I think that the path that people might take is, I'm particularly bullish on distillation. And like, yeah, how many knobs can you turn to if we spend like a ton, ton of money on training, like get the most capable cheap model, right? Like really, really caring as much as you can. Cause like the, the, the naive version of caring as much as you can about inference time compute is what people have already done with the LLAMA models, or just overtraining the shit out of 7b models on way, way, way more tokens than is essentially optimal. But if you really care about it, maybe the thing to do is what Gemma did, which is let's not just train on tokens, let's literally train on minimizing the KL divergence with the distribution of gamma 27B, right? So knowledge distillation there. And you're spending the compute of literally training this 27 billion model, billion parameter model on all these tokens just to get out this, I don't know, smaller model. And the distillation gives you just a faster model. Smaller means faster. Yeah. Distillation in theory is, I think, getting out more signal from the data that you're training on. And it's perhaps another way of getting over, not completely over, but partially helping with the data wall, where you only have so much data to train on. Let's train this really, really big model on all these tokens, and we'll distill it into this smaller one. And maybe we can get more signal per token for this much smaller model than we would have originally if we trained it. So if I gave you $10 trillion, how would you spend it? I mean, you can't buy an island or whatever. How would you allocate it in terms of improving the big model versus maybe paying for HF in the RLHF? Yeah. I think there's a lot of these secrets and details about training these large models that I just don't know and are only privy to the large labs. And the issue is I would waste a lot of that money if I even attempted this because i wouldn't know those things uh suspending a lot of disbelief and assuming like you had the know-how um and operate or or if you're saying like you have to operate with like the limited information you have now no no actually i would say you swoop in and you get all the information all the little heuristics all the little parameters all the little heuristics, all the little parameters, all the, all the parameters that define how the thing is trained. If we look in how to invest money for the next five years in terms of maximizing what you called raw intelligence, I mean, isn't the answer like really simple. You just, you just try to get as much compute as possible. Like, like at the end of the day, all you need to buy is the GPUs. And then sort of the, the, the researchers can find, find all the, all like they, they can sort of, you know, you can tune whether you want to be trained a big model or a small model. Like, well, this gets into the question of like, are you really limited by compute and money or are you limited by these other things and driving progress? I'm more privy to Arvid's Arvid's belief that we belief that we're sort of idea limited, but there's always like- But if you have a lot of compute, you can run a lot of experiments. So you would run a lot of experiments versus like use that computer to train a gigantic model. I would, but I do believe that we are limited in terms of ideas that we have. I think, yeah, because even with all this compute and all the data you could collect in the world, I think you really are ultimately limited by not even ideas, but just really good engineering. Even with all the capital in the world, would you really be able to assemble? There aren't that many people in the world, would you really be able to assemble? There aren't that many people in the world who really can make the difference here. And there's so much work that goes into research that is just pure, really, really hard engineering work. As a very kind of hand-wavy example, if you look at the original Transformer paper, how much work was joining together a lot of these really interesting concepts embedded in the literature versus then going in and writing all the code, like maybe the CUDA kernels, maybe whatever else. I don't know if it ran on GPUs or TPUs originally. So it actually saturated the GPU performance, right? Getting GNOME to go in and do all this code, right? And GNOME is like probably one of the best engineers in the world. Or maybe going a step further, like the next generation of models, having these things, like getting model parallelism to work and scaling it on like, you know, thousands of, or maybe tens of thousands of like V100s, which I think GBDE3 may have been. There's just so much engineering effort that has to go into all of these things to make it work. If you really brought that cost down to, like, you know, maybe not zero, but just made it 10x easier, made it super easy for someone with really fantastic ideas to immediately get to the version of like the new architecture they dreamed up that is like getting 50, 40% utilization on the GPUs. I think that would just speed up research by a ton. I think if you see a clear path to improvement, you should always take the low-hanging fruit first. I think probably OpenAI and all the other labs did the right thing to pick off the low-hanging fruit, where the low-hanging fruit is like... You could scale up to a GPT 4.25 scale. And you just keep scaling and like things, things keep getting better. And as long as like you, there's there's no point of experimenting with new ideas when like everything everything is working. And you just sort of bang on it and try to try to get as much as much juice out of the possible and then maybe maybe when you really need new ideas for I think I think if you're if you're spending ten trillion dollars probably want to spend Some see you know then actually like reevaluate your ideas like probably your idea limited at that point I think all of us believe new ideas are probably needed to get you know all the way there to a GI and All of us also probably believe there exist ways of testing out those ideas at smaller scales and being fairly confident that they'll play out. It's just quite difficult for the labs in their current position to dedicate their very limited research and engineering talent to exploring all these other ideas when there's this core thing that will probably improve performance for some decent amount of time. Yeah, but also these big labs like winning. They're just going wild. Okay. So big question looking out into the future. You're now at the center of the programming world. How do you think programming, the nature of programming changes in the next few months, in the next year, in the next two years, next five years, 10 years? I think we're really excited about a future where the programmers in the driver's seat for a long time. And you've heard us talk about this a little bit, but one that emphasizes speed and agency for the programmer and control, the ability to modify anything you want to modify, the ability to iterate really fast on what you're building. And this is a little different, I think, than where some people are jumping to in this space, where I think one idea that's captivated people is, can you talk to your computer? Can you have it build software for you as if you're talking to like an engineering department or an engineer over Slack? And can I just be this sort of isolated text box? And part of the reason we're not excited about that is, you know, some of the stuff we've talked about with latency, but then a big piece, a reason we're not excited about that is because that comes with giving up a lot of control. It's much harder to be really specific when you're talking in the text box. And if you're necessarily just going to communicate with a thing like you would be communicating with an engineering department, you're actually abdicating tons of really important decisions to this bot. And this kind of gets at fundamentally what engineering is. I think that some people who are a little bit more removed from engineering might think of it as the spec is completely written out, and then the engineers just come and they just implement. And it's just about making the thing happen in code and making the thing exist. But I think a lot of the best engineering, the engineering we enjoy, involves tons of tiny micro decisions about what exactly you're building and about really hard trade-offs between speed and cost and all the other things involved in a system. As long as humans are actually the ones designing the software and the ones specifying what they want to be built, and it's not just a company run by all AIs, we think you'll really want the human in a driver's seat dictating these decisions. And so the jury's still out on what that looks like. I think that one weird idea for what that could look like is it could look like you can control the level of abstraction you view a codebase at. And you can point at specific parts of a codebase that maybe you digest a codebase by looking at it in the form of pseudocode. And you can actually edit that pseudocode too, and then have changes get made down at the sort of formal programming level. And you can gesture at any piece of logic in your software component of programming. You keep the inflow text software component of programming, you keep the inflow text editing component of programming, you keep the control of you can even go down into the code, you can go at higher levels of abstraction, while also giving you these big productivity gains. It'd be nice if you can go up and down the abstraction stack. Yeah. And there are a lot of details to figure out there. That's sort of like a fuzzy idea. Time will tell if it actually works. But these principles of control and speed in the human and the driver's seat we think are really important. We think for some things, like Arvid mentioned before, for some styles of programming, you can kind of hand it off chatbot style if you have a bug that's really well specified. But that's not most of programming, and that's also not most of the programming we think a lot of people value. What about the fundamental skill of programming? we think a lot of people value. What about like the fundamental skill of programming? There's a lot of people like young people right now kind of scared, like thinking because they like love programming, but they're scared about like, will I be able to have a future if I pursue this career path? Do you think the very skill of programing will change fundamentally? I actually think this is a really, really exciting time to be building software. Like we remember what programming was like in, you know, 2013, 2012, whatever it was. And there was just so much more cruft and boilerplate and, you know, looking up something really gnarly and, you know And that stuff still exists. It's definitely not at zero. But programming today is way more fun than back then. It's like we're really getting down to the delight concentration. And all the things that really draw people to programming, like for instance, this element of being able to build things really fast and speed and also individual control, like all those are just being turned up a ton. And so I think it's just going to be, I think it's gonna be a really, really fun time for people who build software. I think that the skills will probably change too. I think that people's taste and creative ideas will be magnified. And it will be less about maybe less a little bit about boilerplate text editing, maybe even a little bit less about carefulness, which I think is really important today. If you're a programmer, I think it'll be a lot more fun. What do you guys think? I agree. I'm very excited to be able to change. Like just one thing that happened recently was like we wanted to do a relatively big migration to our codebase. We were using async local storage in Node.js, which is known to be not very performant, and we wanted to migrate to our context object. And this is a big migration that affects the entire codebase. And Swal and I spent, I don't know, five days working through this, even with today's AI tools. And I am really excited for a future where I can just show a couple of examples, and then the AI applies that to all of the locations. And then it highlights, oh, this is a new example. Like, what should I do? And then I show exactly what to do there. And then that can be done in like 10 minutes. And then you can iterate much, much faster. Then you don't have to think as much upfront and stand at the blackboard that can be done in like 10 minutes. And then you can iterate much, much faster. Then you don't have to think as much upfront and stand at the blackboard and think exactly like how are we going to do this because the cost is so high. But you can just try something first and you realize, oh, this is not actually exactly what I want. And then you can change it instantly again after. And so, yeah, I think being a programmer in the future is going to be a lot of fun. Yeah. I really like that point about, it feels like a lot of the time with programming, there are two ways you can go about it. One is like, you think really hard, carefully upfront about the best possible way to do it. And then you spend your limited time of engineering to actually implement it. But I much prefer just getting in the code and like, you know, taking a crack at it, seeing how it kind of lays out and then iterating really quickly on that. That feels more fun. Yeah. Like just speaking to generating the boilerplate is great. So you just focus on the difficult design, nuanced difficult design decisions, migration. I feel like this is, this is a cool one. It seems like large language models are able to basically translate from one program language to another or translate, migrate in the general sense of what migrate is. But that's in the current moment. So the fear has to do with, okay, as these models get better and better, then you're doing less and less creative decisions. And is it going to kind of move to a place where it's, you're operating in the design space of natural language, where natural language is the main programming language? And I guess I could ask that by way of advice. Like, if somebody's interested in programming now, what do you think they should learn? Like, do they, you guys started in some Java and I forget the, oh, some PHP. Objective-C. Objective-C, there you go. Yeah. I mean, in the end, we all know JavaScript is going to win. And not TypeScript. It's going to be like vanilla JavaScript. It's going to eat the world. and maybe a little bit of PHP. And I mean, it also brings up the question of like, I think Don Knuth has, uh, this idea that some percent of the population is geeks and like, there's a particular kind of psychology in mind required for programming and it feels like more and more that's expands The kind of person that should be able to, can do great programming might expand. I think different people do programming for different reasons, but I think the true, maybe like the best programmers are the ones that really love, just like absolutely love programming. For example, there are folks on our team who literally when they get back from work, they go and then they boot up Cursor and then they start coding on their side projects for the entire night and they stay up till 3 a.m. doing that. And when they're sad, they said, I just really need to code. And I think like, you know, there's that level of programmer where like this obsession and love of programming, I think makes really the best programmers. And I think these types of people will really get into the details of how things work. I guess the question I'm asking, that exact program, let's think about that person. When the super tab, the super awesome praise be the tab succeeds, you keep pressing tab. That person on the team loves to curse the tab more than anybody else. Yeah, and it's also not just like... Pressing tab is like the... Just pressing tab, that's the easy way to say it and the catchphrase, you know? But what you're actually doing when you're pressing tab is that you're injecting intent all the time while you're doing it. Sometimes you're rejecting it, sometimes you're typing a few more characters. And that's the way that you're sort of shaping the things that's being created. And I think programming will change a lot to just what is it that you want to make. It's sort of higher bandwidth. The communication to the computer just becomes higher and higher bandwidth as opposed to just typing is much lower bandwidth than communicating intent. I mean, this goes to your manifesto titled Engineering Genius. We are an applied research lab building extraordinary productive human AI systems. So speaking to this hybrid element. To start, we're building the engineer of the future, a human AI programmer that's an order of magnitude more effective than any one engineer. This hybrid engineer will have effortless control over their code base and no low entropy keystrokes. They will iterate at the speed of their judgment, even in the most complex systems. Using a combination of AI and human ingenuity, they will outsmart and out-engineer the best pure AI systems. We are a group of researchers and engineers. We build software and models to invent at the edge of what's useful and what's possible. Our work has already improved the lives of hundreds of thousands of programmers. And on the way to that, we'll at least make programming more fun. So thank you for talking today. Thank you. Thanks for having us. Thank you. Thank you. Thanks for listening to this conversation with Michael, Swale, Arvid, and Aman. To support this podcast, please check out our sponsors in the description. And now let me leave you with a random funny and perhaps profound programming code I saw on Reddit. Nothing is as permanent as a temporary solution that works. Thank you for listening, and hope to see you next time. | Cursor Team: Future of Programming with AI | Lex Fridman Podcast #447 | 8,944 | Lex Fridman | 20241006 | Aman Sanger, Arvid Lunnemark, Michael Truell, and Sualeh Asif are creators of Cursor, a popular code editor that specializes in AI-assisted programming.
Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep447-sb
See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.
*Transcript:*
https://lexfridman.com/cursor-team-transcript
*CONTACT LEX:*
*Feedback* - give feedback to Lex: https://lexfridman.com/survey
*AMA* - submit questions, videos or call-in: https://lexfridman.com/ama
*Hiring* - join our team: https://lexfridman.com/hiring
*Other* - other ways to get in touch: https://lexfridman.com/contact
*EPISODE LINKS:*
Cursor Website: https://cursor.com
Cursor on X: https://x.com/cursor_ai
Anysphere Website: https://anysphere.inc/
Aman's X: https://x.com/amanrsanger
Aman's Website: https://amansanger.com/
Arvid's X: https://x.com/ArVID220u
Arvid's Website: https://arvid.xyz/
Michael's Website: https://mntruell.com/
Michael's LinkedIn: https://bit.ly/3zIDkPN
Sualeh's X: https://x.com/sualehasif996
Sualeh's Website: https://sualehasif.me/
*SPONSORS:*
To support this podcast, check out our sponsors & get discounts:
*Encord:* AI tooling for annotation & data management.
Go to https://lexfridman.com/s/encord-ep447-sb
*MasterClass:* Online classes from world-class experts.
Go to https://lexfridman.com/s/masterclass-ep447-sb
*Shopify:* Sell stuff online.
Go to https://lexfridman.com/s/shopify-ep447-sb
*NetSuite:* Business management software.
Go to https://lexfridman.com/s/netsuite-ep447-sb
*AG1:* All-in-one daily nutrition drinks.
Go to https://lexfridman.com/s/ag1-ep447-sb
*OUTLINE:*
0:00 - Introduction
0:59 - Code editor basics
3:09 - GitHub Copilot
10:27 - Cursor
16:54 - Cursor Tab
23:08 - Code diff
31:20 - ML details
36:54 - GPT vs Claude
43:28 - Prompt engineering
50:54 - AI agents
1:04:51 - Running code in background
1:09:31 - Debugging
1:14:58 - Dangerous code
1:26:09 - Branching file systems
1:29:20 - Scaling challenges
1:43:32 - Context
1:48:39 - OpenAI o1
2:00:01 - Synthetic data
2:03:48 - RLHF vs RLAIF
2:05:34 - Fields Medal for AI
2:08:17 - Scaling laws
2:17:06 - The future of programming
*PODCAST LINKS:*
- Podcast Website: https://lexfridman.com/podcast
- Apple Podcasts: https://apple.co/2lwqZIr
- Spotify: https://spoti.fi/2nEwCF8
- RSS: https://lexfridman.com/feed/podcast/
- Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
- Clips Channel: https://www.youtube.com/lexclips
*SOCIAL LINKS:*
- X: https://x.com/lexfridman
- Instagram: https://instagram.com/lexfridman
- TikTok: https://tiktok.com/@lexfridman
- LinkedIn: https://linkedin.com/in/lexfridman
- Facebook: https://facebook.com/lexfridman
- Patreon: https://patreon.com/lexfridman
- Telegram: https://t.me/lexfridman
- Reddit: https://reddit.com/r/lexfridman | 2024-10-07T08:28:40.038274 |
https://vimeo.com/961009140/1a92636860 | My name is Cameron Gibbs. I'm a knowledge engineer with Crown Point Technologies. We've been supporting the Pistoia Alliance with the development of the CMC process ontology. And in this short demonstration, I wanted to walk through the prototype knowledge graph that we developed using the ontology in order to validate that it can answer all of the competency questions and use cases identified as a part of the development of the ontology. On our first screen, we're looking at ANZO. This is the semantic platform we used to develop the knowledge graph. Here on this homepage, we can see a rough breakdown of the different steps used to create our prototype knowledge graph. First, we onboarded some sample data. We can take a quick view of that sample data here. Our sample data represents two different recipes. First, we have some process data for monoclonal antibodies manufacturing run involving three different batches done at three different scales. We have process data involving measurements and recipe limits for both process parameters and process indicators. And we also have two files representing a small molecule synthesis manufacturing run. Each file representing a different site and each site at different scales, totaling five different batch processes once we onboarded this the data into anzo anzo will automatically put this into a graph format that we can then align to the ontologies we developed briefly we can look at we have we've developed a core cmc process ontology with many classes defining the different data objects the attributes that would make up the process data. Additionally, we have two controlled vocabularies that extend these ontologies. One vocabulary for what we call recipe component kinds. These are a general way of describing the different process steps that make up a pharmaceutical manufacturing process, as well as what we call parameter kinds. These are general terms to describe measurements, recipe specifications, as well as input settings that would be done during a manufacturing run. These were also imported into ANZO, and finally these were blended to create a knowledge graph. Here we can see the result of the knowledge graph. This view is not representing the instance data, but actually a view of the classes in the ontologies. For example, we have our batch run, we have our recipe components. The numbers in the parentheses represent how many data instances we have of each of these classes. And the relationships between the objects designate how the ontology encodes how these objects should be interrelated to one another. So we see a relationship between the manufacturing process alongside the recipe information and the measurement information as well, providing a continuous framework that can link all these data objects together and provide an unambiguous way of defining them and relating them in a way that both makes sense to a human being and is machine understandable. Let's now turn, we can turn and look at various dashboards and see how these solve the different competency questions laid out. Our first use case we identified was that of process definition. So this is the ability to provide a uniform, universal, and unambiguous way of defining a manufacturing process. I wanted to take a look at two different views describing how the ontology can support responding to this use case. First, we're looking at a graph view of the data objects. Now we're actually looking at instance data, and this graph view gives us the closest representation of how these objects are linked together. In a moment, we'll also look at a table view that'll be a little bit more familiar to what the eventual end users would actually be seeing. Here we can see first the different recipes defined in the ISSA 88 standard, which we've incorporated to provide a standardized way to define recipes and processes. We have our general recipe, which is the most general, non-specific form of a recipe in ISSA 88. This has various pieces of information linked to it, such as its ID, the product name that would be produced by this recipe, the manufacturing stage, whether it's a drug substance or it's a drug product. We have a title for the recipe and we also have a version for the recipe. And this general recipe also derives into the other ISA-88 recipes. For example, we have our site recipe. The site recipe can have a site that it's prescribed to. The site recipe also can derive into two different master recipes. In this case, we have two different recipes showing that a given recipe can derive into any number of different, more specific recipes in the ISTA 88 standard. Here we have, for example, a master recipe defined for the large scale, as well as a master recipe defined at that small scale. And finally we have our master recipe deriving into the control recipe, the most specific recipe within the ISSA 88 standard. Moving back over to the left we can also see the recipe components that are associated with the recipe. I pulled out just one structure of the different recipe steps defined by the recipe components for the general recipe, but this structure would be replicated for both at the site recipe as well as the master and the control recipe. Here we can see our monoclonal antibodies drug substance. It corresponds to a process recipe component, which is the highest level of the recipe component in the ISSA 88 standard. It will break down in turn to process stages. So we can see how the ontology allows us to define a series of process stages that make up a given process. And we can also link these process stages together so we know the order in which they occur at. Just for a single example, I've pulled out a deeper layer for one of the process stages, flow through chromatography. And here we see that we also have process operations that make up that process stage, providing a further layer within the ISSA 88 hierarchy. And again, we have an order of a pool process operation to being required before an adjust process operation before a different adjust process operation. And finally, we've broken out different actions, the smallest level of the ISSA-88 hierarchy beneath this adjust step. So again, we can see an order to these steps where sample comes before measurement process, which comes before a titrate action, before mix, before sample again, and then another measurement process. And so what we can see from this graph structure of the knowledge graph is how our ontology defines all of these objects that allow us to define a process and a recipe in a controlled, systematic, and uniform way. We can also look at this from a table perspective. We can see our recipe title, the manufacturing stage, whether it's a drug substance or drug product. We can see the overall process that it's defining, the possible sites that the recipe would be performed at, and the possible scales. And we see the process stages. Since we're in a table view, in this case I'm representing the order of the stages using a numerical scheme. Within, we have our different stages underneath monoclonal antibodies, drug substance. And we also see that many of the stages themselves have operations beneath them. We can click on a given stage, taking us to another table that will take us deeper into the hierarchy. So now we're looking at only the process stages and we can see all of the process operations beneath them, as well as the process actions underneath some of the process operations. So again, we have a fully precise and defined way of defining a manufacturing process in a uniform way that incorporates the ISSA 88 standard and provides a way to define a recipe at both the very very general level for that general recipe and then further more making it further more specific at the site level and at the scale level and eventually at the control batch specific level. All right, so that is our first use case, process definition. Now I want to discuss our second use case, the monitoring and control. This involves not just defining the process steps that make up a manufacturing process, but also the ability to track different measurements through that process, as well as controlling them by comparing them to the limits for the different parameters and indicators that the recipe specifies. So let's take a look at a different view. This graph view represents how the ontology allows us to profile a given measurement through a series of process steps. In this case, it will be a temperature measurement. So here, starting on the right side, we're starting from our master recipe, which defines a particular scale for the manufacturing run. That master recipe is derived ultimately from a site recipe which would further lead into a general recipe that I haven't depicted here. From the master recipe we've derived down into the control recipe and again we have a breakdown of the different steps of the of the monoclonal antibodies drug substance process. Here I've only pulled out a small number of them to make the graph relatively easy to view. We can see our drug substance itself has a part, our drug substance procedure, rather. We're using the procedure hierarchy of the ISSA 88 standard because now we are controlling particular batches and the recipe components are now equipment dependent. A part of that procedure recipe component is a unit procedure, in this case the production bioreactor step, which itself has three parts that make it up. Unit operations, the fill medium and equilibrate step, the inoculate step, and the fed batch step. Again, we can see not only are they all parts of the production bioreactor unit procedure, but they also have an order that's defined in the process recipe. All of these recipe components also are linked to a batch process itself. So we can see a batch process related to the overall monoclonal antibodies drug substance procedure. It has itself as a part a sub-batch run related to the production bioreactor step, and that in turn has three batch runs as a part corresponding to the different unit operations pulled out and we can see there's an ordering relation between these three where the fed the fill medium and equilibrate batch process comes before the inoculate batch process which in turn comes before the fed batch process so we can see how we can model the different steps that make up the batch process and how they relate back to that process that we've defined. Now we can build our way out from the graph to the measurement values. Each of these batch runs will itself have a monitoring process that could involve many different measurements as a part of it. Here we can see our data set has five different measurement processes a part of our general monitoring process looking at just one we're interested in temperature in this case we have a one a particular measurement process that has as an output a process parameter which has a value associated with it in this case 36.7 there's also a unit associated I had that I haven't displayed here and further one of our terms from our controlled vocabulary our parameter kind terms is used to describe the measurement output to indicate that it's a temperature measurement and we can see that this this general term is describing all of the measurement outputs from the different batch processes, allowing us to have a general controlled way to describe what kind of parameter is being measured in this case. So we are using our general controlled term for a parameter kind, in this case temperature, to describe all the measurements, making it easy for us to aggregate them together in a controlled way. Now this just shows how the knowledge graph has all the relations and the context to these different pieces of data. We can also see how we can view a profile of the temperature across all of the different unit operations at which temperature is actually measured. Here we're viewing one batch, the small scale batch. We saw some of these unit operations. I only pulled out three before to keep the graph relatively small, but we can see that there are actually five different measurements at the unit operation level involving temperature. And with this simple graph, it becomes quite easy for us to profile temperature across all of those different unit operations. Now, another part of use case two, monitoring and control, is not simply being able to profile one value, one parameter in particular like temperature, but being able to profile any given process parameter or performance indicator that would be measured during the process. And from the same dashboard, we can use our controlled vocabulary terms to easily, at a click, pick any of the measurements that we want to see profiles, and all of them that are represented in this chart. So we can quickly, using that different parameter kind viability, quickly rechart to find a new profile, or we can rechart to a volume measurement. And the chart, using the control terms for our unit operations, as well as our control terms for our parameter kinds can easily make these links. Now another part of the use case 2 is not merely being able to profile these measurements but also being able to compare them with the limits described for these values in the recipe itself, the specification upper and lower limits defined in that recipe. This is also something we can do easily with a chart. We have now our same measurement profile for temperature, and we've also charted the upper and lower limits that have been defined as a part of that recipe process. In this case, we can see for our single batch done at the small scale, at a glance we can see that the temperature measurements fall within the range defined by the recipe for this batch. We can also compare our upper, lower, and lower limits with our measurements in mass across all of the parameter kinds at a time if we want to. In this case, I'm looking at a single batch at the large scale. Here we see the scale, we see the procedure, the unit operation, the unit phase, the working day, the time at which this phase has been cycled within the working day, the parameter, whether it's a critical process parameter or not, the lower limits, the upper limits, and the measurement. All of this comes from our sample data that has been aligned to the ontology, and the final column has been automatically calculated within this dashboard to make a comparison from the measurement values with those upper and lower limits to determine if the measurement is out of range. And we see that we have two measurements that are out of range for this particular sample set. Both are interesting to look closer at. We can see that the lower and upper limits are much higher than the measurement value, indicating that there may have been something wrong with the sensor or the data collection process and in the second row we can see the lower limit is actually higher than the upper limit again indicating that that there is likely some kind of error in defining the recipe specification in this case but it's very powerful that we can calculate on the fly the comparison between the measurements and the upper and lower limits to quickly be able to see at a glance that there may be these issues with the data itself. And these on the fly calculations are all made possible because we have a context surrounding the data so that it's easy to understand what exactly all of the data objects mean, both in a human-readable way as well as in a machine-readable way. All right. Now let's turn to our third use case, that of comparative analysis. So this builds on our previous use cases to not simply profile measurements across a single batch, but to be able to compare those measurements across different batches, even if they're done at different scales or even at different sites. Today, this is a real challenge with the data, because when the data is collected at different scales or at different sites, the sensors used, the data systems used, even the personnel involved are all different. And so the label is often labeled in very different ways. There's often ambiguities across the different process steps or in labeling the measurement values. So being able to make these easy comparisons across scale and site would vastly reduce the effort in mere data manipulation so that process experts and process engineers can more quickly find the data that they need. Let's take a look at a cross-scale comparison of in this case a measurement of the duration of a process. And on this much more complex graph, we're now seeing how we can compare the measurements for this process duration across three different scales. First, we'll begin at the top of the graph and we can see that we have a general recipe. This will be shared across all of the different batch processes involved, and it derives into a site recipe. In this example, all of the batches are done at the same site, but there's three different scales involved. So again, all are linked to the same site recipe. But now that we have a divergence with scale, we'll see that there are three different master recipes branching off from the site recipe, each of these master recipes corresponding to the different scales involved. We have our master recipe for the large scale, for the small scale, and for the pilot scale. And each of these master recipes will itself branch down into that same structure we've seen a couple of times now. They branch down into the control recipe, the most specific recipe, defining a specific batch. And we have our recipe components describing the different steps involved. We have our procedure, our highest level recipe component of the monoclonal antibodies drug substance. This has as a part the production bioreactor unit procedure, and this has as a part the unit operation transfer step, at which point the duration of the process will be measured. We can see mirroring the hierarchy of recipe components, what you have a hierarchy of batches that are prescribed by these recipes. So we have our batch corresponding to the highest level, the procedure. This has as a part a batch corresponding to that unit procedure, the production bioreactor. And finally, we have our batch corresponding to the transfer step. Now that structure in the recipe and the batches is mirrored at both the small scale and at the pilot scale, but not only are these different recipe components linked by linking all the way back to the same site recipe, but they're also linked in being linked to the same controlled vocabulary term to describe that step involved. So in this case, we have our transfer recipe component kind term. This links across all of the different transfer steps across different recipes. That gives us a controlled way to ensure that we're talking about precisely the same step. There are recipe component kinds for all of the recipe components. I haven't pulled them out just to see the graph, make the graph easier to see. But there is likewise a controlled term linking all, linking the procedure recipe components and the unit procedure recipe components together as well. All right. Now, branching off from our batches, we have the familiar structure of the monitoring process and the measuring process with the measurement output, as well as the value associated with the measurement output. And again, we have our parameter kind, in this case duration, that's linking across not only the measurements within the different steps of a single batch, like we saw with temperature in the last use case, but also across the different measurements across the different sites. So we can see duration is also linked to the process parameter measurement that ultimately is a part of the pilot scale batch process. And similarly for the small scale, I simply haven't pulled out those batch processes to keep the graph manageable. And so here we can easily make our comparison of the process duration at the large scale and the process duration at the small scale by using this duration parameter kind to track that we're talking about the same measurement as well as the recipe component kind to track that we're comparing process steps in a controlled way. So we know we're taking those measurements at precisely the same point in the manufacturing process. I've also pulled out the recipe specification. This defines the upper and lower limits for the process parameters. We also have that limit defined for each scale for the different recipes, the large scale, the small scale, and the pilot scale. Here we have our upper and lower bound limits that we can then compare with our measurements if we wish. And those are also linked to the same parameter kind, again, giving us a controlled way to ensure that we're talking about all of the same, all of the same values, be they measurement values or recipe specification values. Now, now that we've seen this structure on the graph view, we can go back to one of our chart dashboard views and we can bring in the different scales. So now we can compare our temperature measurements, for example, across the different scales, in this case, small scale and pilot scale. And we can see that there are no temperature measurements represented at the large scale. Things are different if we change parameter kind. We can switch to viability, for example. And now we have all three batches represented, or likewise for volume. And we can see because we have a controlled way of describing the process steps and those parameter kinds that spans across the scale in all three of these batches, the chart can easily update itself and view and can make those comparisons for us. If we go back to temperature we can also see we have a small molecule representation as well. This was our cross-site view, now we have five different batches that are done at two different scales. We can see the two different sites represented here. And again, we can make that temperature comparison very easily across all five of these different batches, because again, we have a controlled way of describing our parameter kind and a controlled way of describing the different unit operations. And again, this demonstrates that our ontology provides a common data model that can span across any recipe, any process, has a uniform way of describing the process steps and the measurements that would be taken. Now, this is a cross-site comparison. On the graph view, we only saw a cross- cross scale comparison so we can end there with this quick overview Let me switch examples Here's our final view that we'll take a look at here. We're seeing a demonstration of comparing a yield measurement across two different sites. Here we're involving a different recipe now for small molecule, a synthesis recipe. So here we have our general recipe with the title of a general compound recipe to synthesize any kind of example compound. In this case, because we are cross-site, making a cross-site comparison, our general recipe will immediately branch into two different site recipes because the processes are done at two different sites. And we can represent the site being prescribed as well, linking both to that site recipe, as well as the batch at which the site is performed. And from here, from our branching site recipes, we have a familiar structure we've seen before. The site recipe will derive into a master recipe, which indicates the scale. We can see the link to the scale here. I haven't pulled it out just to keep the graph smaller. This master recipe will derive into the control recipe, which itself will have different recipe components allowing us to represent the process steps. So we have our synthesis procedure. This is linked to a recipe component kind that also links to the synthesis procedure step on the other site recipe. So again, we can see how that controlled vocabulary bridges across the two different recipes, giving us a controlled way to describe the same step in different recipes. The output step is a unit procedure beneath the procedure synthesis step. And yield is a unit operation step beneath the output unit operation step. And again, we have our three, we have recipe component kinds linking together these, the different recipe components across the, across the site recipes. Like before, we have our structure of three different batch processes that form a hierarchy that mirrors the recipe components. And like before, we have our familiar structure of a monitoring process being associated with a given batch, a measurement process being associated with that monitoring process, and finally, a measurement output. In this case case a yield measurement with the particular value associated with it and our yield parameter kind is itself linked to both both parameter measurement outputs in this case a process parameter measurement output that gives again gives us that controlled way to define the same measurement across different sites, making a comparison between these yield measurements not only possible, but easy to do. knowledge graph that was able to validate that our CMC process ontology, along with the two controlled vocabularies we've developed, is capable of responding to all three use cases, the process definition use case, monitoring and control use case, and the comparison analysis use case. Thank you for your time. | CMC Process Validation Walkthrough | 1,762 | Pistoia Alliance | 20240618 | CMC Process Validation Walkthrough by Birthe Neilson, Project Manager, Pistoia Alliance | 2024-10-11T05:58:44.474714 |
https://www.youtube.com/watch?v=avUIiqaGiBU | Okay, why don't we get started? So I'm Jim Saylor, Scientific Project Manager at NIMBL, working on the Big Data Program. So welcome everybody to the NIMBL webinar series. This one's focusing on process ontology. Next slide please, Robert. So a few announcements before we dive into the content. Please put all your, everybody on the webinar will be muted throughout the presentation, so please use the chat feature to post a question. And go ahead and post a question whenever the question comes to you. We will address the questions at the end. Another reminder about the confidentiality this webinar is open to both Nimble members as well as non-members so please don't post any Nimble confidential confidential information or proprietary information in the chat. And And so following this webinar, you will receive a survey link by email. Please fill out that survey link. We value the information that we gather from there. One thing that you will get from filling out the survey link is immediate access to the webinar recording. So I would like to remind everybody about the the big data program that's coming up on March 15th and 16th. It's open to both NIMBLE members and non-NIMBLE members. There's an open session to everybody on March 15th and a NIMBLE membersonly session on March 16th. Please, if you're interested, please go to the Nimble website. The address is right down there. And register. If at all possible, please register for in-person. That would be great to meet everybody and have everybody in the same room. But if not, please register for virtual. And then you will receive some information and have everybody in the same room. But if not, please register for virtual. And then you will receive some information shortly about the actual agenda. If you're only able to attend virtually for a session that you're an SME in, please do go ahead and register for the the uh webinar for the program uh workshop and uh just join at that session that would be great uh i'd like to remind everybody that the registration closes uh this monday march 6th so uh please go ahead and register and uh if you would like to participate in the big data program or the big data bd1 work stream you can contact me if you're a member and i could i could fill you in on the program and and how to get involved if you are a non-member please go ahead and contact me as well and i'll put you in touch with the appropriate people to fill those questions. Next slide, please. So again, welcome to the NIMBL webinar series. This one is on the NIMBL biopharmaceutical process ontology, an introduction to ontology concepts and a path to implementation. It is given by Robert Hackathorn. Robert Hackathorn joined AstraZeneca. Back one slide. Robert, thanks. Robert joined AstraZeneca in 2008 as a senior scientist on the process cell culture team after spending the previous six years at Axelon Pharmaceuticals in a similar process development role. Early in his career, he saw the opportunity to make life easier for scientists through the automation of repetitive manual tasks. At that time, Robert began learning about databases and programming and was able to bring together some crude but effective tools automating data transfer from highly used lab instruments. These small efforts to automate data transfer from highly used lab instruments. These small efforts to automate data capture from cell culture instruments has evolved into a second session of his career, focused exclusively on creating fair, integrated lab ecosystems in the biopharmaceutical development that manages lab workflows and data, but also collaborates with robotics. Robert currently leads the informatics and software engineering team in biopharmaceutical development at AstraZeneca. Robert also enjoys playing stringed instruments like guitar, mandolin, and bass guitar. Thank you, and here's Robert. Robert B. Okay, great. Thank you for the introduction. I appreciate that. And before we really get into the presentation, I just want to sort of pre-acknowledge. So, we'll put the acknowledgments at the front here. And it's really everything we're talking about today and the work that we do within the big data initiative is impossible without the financial contributions, but also the collaborations from all of these member companies. So, thank you again for that. And in terms of the presentation today, we're really going to be focusing on ontology, but we're going to take our time getting there in terms of the biopharmaceutical process ontology that we've been working on. And we're going to start by really just introducing the concept of ontology. Where does it come from? How does that kind of ancient philosophy of ontology work its way into modern information science? How do we…what kind of technology do we need to have in order to implement ontologies within our organizations, within the information ecosystems within our organization. And, you know, in terms of the ontology itself, the biopharmaceutical process ontology that we've been working on, we're going to take a 30,000 foot view of that. I think we're going to see that as you start building an ontology and adding concepts to that ontology, the complexity really starts to increase. And so, for us to really zoom into the biopharmaceutical process ontology, it would be challenging for this presentation, but we will take a high level view of it. And then we'll talk about business processes within development organizations, within manufacturing organizations that can be potentially re-engineered as processes with the nimble ontology at the core. So, so we're going to start with ontology, an ontology is, and then we're going to go through a couple of things that it is. And 1st and foremost, it's, it's a buzzword right now. I think. You probably hear maybe in the hallway or just colleagues, you know, we need an ontology. I, you know, I'm guilty of saying this without, you know, really understanding what that means. I think the problem with buzzwords is that they evolved to BYOD, which is bring your own definition. And they tend to take on the meaning of the moment rather than sort of a static meaning. Other buzzwords or phrases that we've encountered, at least I've encountered in my career, digital transformation, these are all very important things. I'm not trying to diminish the importance of these initiatives. It's just that they tend to be blank canvas terms that people can assign their own meanings to. 5S, we need 5S our labs. We need data centric systems. We need FAIR data. So, we're going to spend some time really sort of defining what ontologies mean in our context, in our industry. Also ontologies are not new. This can be, they could be traced back to Aristotle probably even further if we wanted to dig further. But really it's a philosophy. It's concerned with existence. What is existence? What is reality? What does it mean to become? And some of the concepts that this philosophy deals with are things like categories. We can look at the world around us and start to classify things. We can say this is an apple and that an apple is fruit. We can also, there are one-to-one mappings. We could say this apple is red, which would be like a particular. There are universal. This would be more like a mini-to-one, like frogs and celery are green. So we've now made a mini-to to one classification of two different unrelated things. Now, where when we get into abstract and concrete, these are concepts that really make their way into our kind of modern use of ontology in a powerful way. So abstract abstract concepts like numbers, ideas, propositions, concrete things. We can touch a tree. These are things in the material world that we can interact with. A person, the sun, although we probably don't want to interact with the sun. So this is a very deep and old philosophy that anyone who's interested can really dig as far as they want. But what about information science? What does it mean for us? And luckily, we have this really hissy, clear definition of ontology for information science it's a specification specification of a conceptualization which actually i think is so clear we could probably just end the the webinar now and everybody get like 45 minutes back um but oh but actually it's not that helpful is it i think i have to admit the first time i heard this i had idea, I had no better idea of what an ontology was after hearing this definition than I did before. But I think we can spend some time digging into the what do we mean by conceptualization, what do we mean by specification, and maybe it'll start to become a little clearer. So conceptualization, this is the meaning that we associate with a set of terms. So I was talking with a colleague yesterday and he had a great example. He's like, if you just throw out the term table, it's, you know, for me, my mind immediately jumped to database table. My kids, though, would definitely not think about a database table. They would probably be thinking about the table at school, the art table or the dinner table. But when we start to specify what we mean by table and bring in dinner table or bring in database table, it removes that ambiguity that occurs. Between individuals, because we all have our different, we all have our different set of experiences that determined when I hear the word table, it means something. It means something to me that's different than when you hear the word table. So, that's still not super clear. So we're going to continue to try to push into this a little bit. And we're going to have a webinar participation exercise here. We can use the chat, I think. It's not necessary, though. But what I'm going to do is I'm going to put a concept. On the screen, and I just want you to think about the what what's the initial meaning that that that jumps into your head. Like, what is the 1st thing you think of what what does it mean to you when I put this concept on the screen. And so are you ready? Okay, so a fast horse. We'll just take a second and think about what what that means to you. And then for a moment, I just want to pretend that you're someone who you're who you aren't. So we're going to ask you to pretend that you're Bob Baffert, who's like one of the world-renowned trainer, right? So his horses are like winning Kentucky Derbies almost every single year. And then, and so how does your, how does when you're putting yourself in his shoes, how does your, that concept of fast horse differ from what your initial concept was? And then let's imagine you're the trainer of draft horses. So you're the person who's walking next to this team of Clydesdales, but your job is training draft horses. Your concept of fast horse might be very different than a fast racing horse. Right? So the conceptualization differs between individuals because it depends on domain knowledge. And domain knowledge is just your accumulated experiences and expertise. You know, this accumulates over a lifetime or over a career. And specification helps to remove and reduce the ambiguity in concept. So we could say a fast draft horse, and we would know that that would be, in terms of fast horses, that would be a very different thing than a fast thoroughbred. Okay so we're starting to make a little more sense. I think so we could we could take our original definition and we could start thinking about ontology as really being anything that helps make conceptualizations less ambiguous. So we're bringing specificity to ideas, basically. So will I know it when I see it? When I see an ontology, the chances are there are legacy information systems in your organizations that kind of fit this definition, at least loosely. I don't know that we would look at a. A vocabulary system and say, oh, that's an ontology, but it is a shade of gray, certainly closer to white than it is where we're ultimately going to land on the right side of the spectrum. But when we take steps as an organization to standardize vocabulary, oftentimes when I've heard people asking and advocating for ontologies, the next phrase is we need controlled vocabularies. Right? So this concept of lexicon, it is definitely related to ontology, and it is a shade of gray. And as we work our way left to right across this, we start to, if we add a little bit of functionality to a lexicon or to a vocabulary, we could start putting some hierarchy into it. So maybe we have a term that's called a bioreactor, and then bioreactors may have another term associated with them that is a controller. And so controller might be a pH controller or a temperature controller or something along those lines. But our taxonomy starts building some of these basic hierarchies of our terms. And then we could take that taxonomy and we can start adding essentially synonyms. So for me, within our organization, depending on where you've worked before or what group you went to school, like where you went to grad school or undergrad, you might refer to a bioreactor as a fermenter. These are kind of interchangeable terms. And so within our thesaurus, we may want to have that kind of a relationship, that they're connected related terms. But you still have those concepts of the hierarchy. So, we could then say that a fermenter also has pH and temperature controllers. Now, getting very close to sort of where the majority of my experience has been, and probably a lot of organizations, rely heavily on relational databases. So, you have SQL Server, Oracle, you know, your LIMM system might have an Oracle database, your other lab experiment management tools might be SQL Server. Well, the entity relationship model that went into designing those databases are really starting to get closer and closer to what we would think of as an ontology, where we have the concepts that are listed, the concepts are captured in the tables and the fields that are members of the tables, and the relationships are going to be managed through primary and foreign keys that sort of say this table and this concept is related to this other table and these concepts. And then we get to a graph database or the semantic web. And this is really where we want to land and where we're going to be spending the rest of the talk, like, yes, these are shades of gray, but really anybody who's working and developing an ontology isn't going to be just working on a controlled vocabulary. They're not just going to be working on these other things. They're going to have a specific set of tools and a specific set of practices that are going to go to build that ontology and to store that ontology and then to leverage that ontology. And it's a different set of tools than any of these other systems. So, we need to talk about graph databases and that they're used for representing ontologies. So just like I mentioned with the relational databases, we're used to seeing tables, columns and tables, and then the data being loaded into rows. It's a completely different concept with representing ontologies in a graph database. So information is stored in these systems in a format called triples. So a triple has three parts. It'd be funny if it had four parts or five parts, but they're called triples for a reason. They have this subject, predicate, and object, which as a person that only cared about science and math going through school, I kind of when I first started being introduced to this, I'm like, okay, that that also didn't make a whole lot of sense. But, but when when we break it down and illustrate it with a simple example, I think it's really clear. So, an ontology is going to be defined by a set of entities and relationships between those entities. And we name those things subjects, predicates, and objects. But in reality, we're naming two things, and we're saying that there's a relationship between it. So a trainer trains a horse. the horse, and the predicate is trains. So, hopefully that is super clear. And that's our ontology. And the interesting thing about this approach is that your data is described and stored in the exact same way in your graph database. And so, we may now have some data that we want to store in our database. So we want to say Bob Baffert trains Justify. So Bob Baffert's a real person, right? And he's a trainer and he's, he, and Justify is a real horse. between these two things. And then we can take it a step further and relate these things back to our ontology. So our data now being directly bound to our ontology. We can say that, well, Bob Baffert is a trainer. So now we've got a way of linking that concept of Bob Baffert is a trainer. So now we've got a way of linking that concept of Bob Baffert into our ontology and justify as a horse. So for everybody that are ontologists that may be connected to this webinar, I feel there's great power in simple examples, but we are building towards something. So if this is too simple, please just bear with me for a minute. And we'll get into some of that now because we want to talk about—so what we've seen so far are sort of typical ways of representing triples. And we can call this on the left side, we can actually refer to this now as a graph. But the data itself is stored in the graph database as triples, and it has a specific standard that has been developed. And there are multiple standards, depending on the systems that you use. For example, Oracle, go back to the relational databases, you may have nuanced differences between your queries that you write in Oracle as compared to SQL Server, right? So, there may be nuanced differences in terms of how data is stored and transferred between systems in graph databases. So we have something called RDF, and this is called Resource Description Framework. And just think of it as like JSON or XML or some other like markup language. It's a standard way. It's a syntax for capturing the information that we need to store into our graph database. And if we're thinking about ontologies specifically, so, you know, we pointed out that there's the there's triples for defining an ontology there's triples for defining data well so there's an extension or a variation of rdf that is specific for describing an ontology and defining an ontology and that's called owl which is the ontology web language or web ontology language, but I think if you say WOL, it's not as cool as OWL. So I think it's just ontology web language, we could say. But it is RDF, essentially, and it has a very nice human readable kind of way of being written. I'm gonna say this now, and I'll say it again towards the end of the talk. You don't have to know how to read or to write this code. Okay, we're gonna get into the systems that you use to edit and design ontologies. They're gonna write this owl for you. You're gonna be dragging and dropping and establishing relationships between things in a very intuitive way. The RDF, the owl, will be written for you by the system. But that being said, it is highly readable. We have this top section here that you see the word prefix. Actually, the prefix is actually, you know, we can say EX. Instead of writing this whole string of characters, any place below this, I can just use EX. And then that's so that essentially is defining the prefix. This is where you can import your ontologies that are sort of standard ontologies, or ones that you want to bring in and connect to. And then in this example, we're just going to have EX be the thing that we're defining. So we could walk through this code a little bit and say that we're going to define a class. So that's this class. I've been pointing to the wrong thing. So we're going to define a class so that's that's this class um oops i've been pointing to the wrong thing so we're going to define a class um which is called trainer and we're going to define another class which is called horse so that's these guys and we are then going to define the relationship between them which is called a property. And that's trains. And so it's really that simple. But you can imagine as this grows, your OWL document is going to contain potentially hundreds of these class definitions and property definitions. The data is exactly the same. It's RDF. And so now in order to describe Bob Baffert as a trainer and that he trains justify, we have two rows for that. And then we have justify as a horse. And so that it's that simple. Again, you don't have to learn how to write this. I find it useful to be able to look at it and extract some meaning from, oh yeah, read through it and it kind of makes sense. Now, if we wanted to, we're gonna talk about ontologies being designed, I mean, the great power in ontologies that I see, there's many, but is that they're able to be extended and they're really designed to be extended and to bring in other ontologies. So, you know, NIMBLE has, as an organization, has put a lot of effort into this, into the biopharmaceutical process ontology over the last year. At some point, that's going to be available to the outside world. And, you know, that'll be accessed by linking it into whatever their ontology system is that they have in their organization. I'm going to show you how easy it is to link in another ontology. So there's something called FOF, Friend of a Friend. And this is a really important ontology, actually, because a lot of your social media tools that you have on your phone are actually driven by FOF on the back end. And so it's an ontology for defining people and relationships and properties around people like their name and stuff like that. So we can make an import of FOF into our graph database, and we can then basically say Bob Baffert is a person, and we can capture that in our data. Bob Baffert has a name, and it's Bob Baffert. So, you know, so, but really extending the ontology, bringing in additional ontologies is that straightforward. In the relational database world, we would need to sort of re-architect, add fields, add tables, add other maybe foreign keys to do that same thing. So, this is the 30,000 foot view of the biopharmaceutical process ontology that Nimble's been working on that I alluded to earlier. So just as that previous slide was starting to get a little bit complex, you can imagine hundreds of concepts and relationships being built into the ontology and how complex. This is not the entire ontology. This was like a kind of a cool looking subsection of it. So, but I'll say that the, you know, when the work was initiated, there were certain use cases that were in mind. Things like process monitoring, you know, these are use cases that it would be great to be able to enable using the ontology. Process monitoring, cross domain project aggregation, you know, so if you've got you know maybe it's just creating line of sight across unit operations, you know, so that we can navigate the data from saw to drug product vial, for example. So raw material impact, so what's the impact of raw material inputs on the product quality would be an example of this type of a use case. And integration of results across process steps. So that's the line of sight that I was talking about, but also more than that, material genealogy, sample tracking, et cetera. So I'll say that for people that have worked in this industry for any time at all, probably ISA 88 is not a new term. It is a standard and it was a key component to informing this ontology. And just to reiterate, ontologies are always extendable. They are extendable in two important ways. Integrating with other existing ontologies, like that FOAF example that I just mentioned, or building new classes and properties to meet your organization's needs. So, you know, a antibody producing fed batch process is kind of what was in mind when this ontology was built. And there may be a lot of overlapping classes and and and properties that that for organizations that are solely focused on stem cells or car T. But that extension that's okay use the, you know, use the part of it that is relevant and then extend it to meet. The, the, the processes that your organization is running. So to get started, you're gonna need some tools, and a lot of them are not gonna cost anything. And you can certainly get started very, very, as a proof of concept, you probably could get started with ontologies in the, tomorrow, for example. Not with a nimble ontology. We'll talk about that in just a second. But the so open source, free license, a lot of these tools are Mobi, Protégé. These are environments that allow you to define classes, establish relationships between classes, and really just build up your ontology and ultimately export that OWL file that will be what your database needs for ingesting that ontology. So again, you don't have to write the code directly. You're going to need a database. So databases – some graph databases are also called triple stores. Neo4j is not a triple store. It is a labeled property graph, and it just has some additional features that may not be present in some of the other graph databases. ANZO is the tool that we've been using at Nimble. There are others that are quite popular, AllegroGraph and Neptune. Neptune is the AWS managed service for graph database. So in the end, what will that look like? What will sort of a cartoon schematic of your graph database look like? This is the Neptune icon. It just has a nice color, and I'm not trying to sell Neptune to anybody. But, you know, your nimble ontology OWL file will live there, any other ontologies that you have. There's a concept called shackle within most systems. This is just about data validation. So, making sure if you're ingesting a certain type of triple that you're getting, you know, and you're expecting a date-time that you're not getting an alphanumeric kind of formatted input. So it's just about the data validation. The graph data will live in your graph database, obviously, and then just like with relational databases there is a really nice easy to use intuitive query language called Sparkle and that's kind of what the tech stack regardless of which database you choose this is kind of what it's going to look like. There are different types of capabilities that can be constructed around not just the nimble ontology, but really any ontology. But we'll go through the thought exercise of saying, okay, let's say we have the nimble ontology right now. One of these is called, I would just kind of tag it as being capabilities that are all about insight, gaining more insight. These would be typical ML, AI kind of applications where you start with operational data sources. This could be well-structured, it could be unstructured or semi-structured data, and that's why you need a data engineer. This could be well-structured, it could be unstructured or semi-structured data, and that's why you need a data engineer. The data engineer is going to have skills like Python or, you know, so they could write some code to do ETL. ETL stands for Extract Transform Load. So basically, you could write code to pull large volumes of data out of your operational sources, curate that, clean that, normalize it if that's what's needed, and then bring it into your graph database. The other, this is probably the more common approach. You know, Nike automation is what we call it in our group. And this is just kind of brute force. Nike automation is essentially no automation other than the person. So I've seen examples of data set construction where having to go through and pull data out of PowerPoint files. If that's where the valuable information is, that's where you go for it. And sometimes you have to use the Nike automation. But in the end, you get a well curated data set that is a resource for BI, business intelligence, machine learning, AI, or advanced analytics. So the insights, maybe there's a prediction, you know, or, you know, there's a model that is developed based on the data set and the predictions can be made from that. Or maybe there's some kind of business intelligence visualizations that are stood up in Power BI or Spotfire or something. So some of the – to go back to the original use cases, you know, that cross-domain project aggregation, that line of sight across projects or across unit operations, raw material impacts on product – so inputs affecting product quality, you know, and then line of sight across uh and establishing material uh genealogy so there's another type of um implementation and this is this is where astrazeneca or at least our team is really going to be um focusing when we when when we start the the nimble ontology implementation we're going to be focusing on on building ontology driven operational tools. And so, and we'll go into this a little bit about what exactly that means, but instead of having that, you know, all of your systems, all of your operational systems sort of being in that legacy, either relational data based historians or others, having the operational tools generating the graph data from the beginning. The clients which are depicted on the right, so this would be the user interfaces that our scientists already have, they're not going to change. The APIs and business logic are going to operate on, it's just going to have a different back end. And, you know, this kind of retooling, at least as far as the operational tools themselves, it's going to be kind of a transparent change to our end users, but the shape of the data and its readiness for the data engineer is going to be a completely different, potentially game-changing scenario. So, we go back to this insight, building large curated data sets for BI, ML, and AI. Well, in the future, we're going to have fewer and fewer of these systems, potentially, hopefully, and more ontology-driven operational tools so that this Nike automation, this code writing for ETL is maybe less necessary, and it's more of a sparkle query to get the data that you need, and then it's more ready. I don't think we're ever going to make the data engineer's job easy, but we can certainly make it easier. And so, you know, ontology-driven operational tools streamline data engineering when building these multi-domain data sets. And so, okay, so this is a – this came about from me trying to find and identify, like, business processes in our organization that are – that don't depend. So, I work in a development organization organization and we develop and design the processes and. And then, you know, are transferring them to either clinical or commercial manufacturing, but a lot of our require knowledge that would be contained in, you know, or potentially driven by ontology based operational systems. So, this is just an exercise to show, okay, well, here's some of the basic business processes that we have that, you know, would involve equipment knowledge. Things like equipment utilization, instrument data acquisition. I kind of clicked the button too fast here, but equipment scheduling, metrology, all of these things are key things that we do as a development organization that depends on knowledge about the equipment. Knowledge about the process recipe. I have an experiment design. What is the process that I'm running that I'm basing that experiment around? I'll point out two more of these. Bringing context to the instrument data. So, okay, I have some data from a VICEL or from a cell counting instrument or from some other, you know, process monitoring instrument. What's the context around that? What was the process that was being run? At what point in that process was the sample removed? Likewise, bringing that same context to the product quality data. Was this a, process intermediate was this drug substance. And then finally, um, you know, more complex, so equipment recipe and maybe some business logic or some other integrations with other systems. Things like capacity modeling capability modeling tech transfer facility fit. These are all, you know, processes, things that we do as organizations that that depend on a intimate knowledge of. Of the process or the equipment, which is equipment is part of the process. So, so there is a future where where the nimble process ontology could be at the core of many of these operational data systems, which again, would make that data. Just, you know, we want to go back to fair data. It makes the data more fair, but it, but, but from a practical perspective, it makes the data more ready for for machine learning and AI and the more high value things that we do with our, with our data. And I'll, I'll plug document automation into that too. So since I've been in the industry, there's been talk about the holy grail being automated regulatory filings. So maybe that's something that we could inch closer towards as well. So in summary, ontology is a buzzword. It's also a powerful tool for information management. To get started, you're going to need an editor. You're going to need a database. You don't need to know how to write RDF code, which is a good thing. And I'll just also say that Nimble will be progressing the biopharmaceutical process ontology into production this year, so stay tuned. Just want to, by way of acknowledgments, just highlighting the folks at the bottom that really just keep everything moving forward so roger jim sandeep and taylor um special thanks to to you um and then uh i think with any luck at all i've i've ended with enough time for some questions yes thank you thank you robert yes we have a few minutes for questions um so if you haven't posted your question already, go ahead and post it in the chat. We do have a few that are ready to go right now. If you go back one slide, Robert. Not sure why that happened, but sure. I think I set my coffee down in the mouth. Oh, there you go. Okay, so first question we have here is, for what scale of an organization, like how many related products do you need, will the benefits of an ontology-based data system justify the part of that again? It's for what scale organization will the benefits of an ontology-based data system justify the cost of implementation? And they kind of asked in that question, like, how many related products do you need before you can say that this would be fit for an ontology? Yeah, that's a really interesting, interesting question. I think you can kind of bite off as much as you want, to be completely honest, because we could think huge about having a dramatic impact on the industry and having standards for tech transfer. So So if I develop a process, having an immediately compatible way of transferring that process to a CMO or to one of our internal manufacturing sites and just having that consistency of and compatibility across the industry, I think that is a much higher. Activation energy barrier than looking within your organization and saying, well, we have a lot of manual. Processes relating to our information, whether it's the data acquisition itself, or whether it's aggregating it together for things like the LA filings or for, you know, which would be some of the analytics that go along with that, and having some progress towards improving that. And, you know, having ontology-driven operational tools would definitely not be, you know, nothing we do is necessarily cheap, but certainly you can dip your toe in and sort of invest as value as you're starting to see return on the investment. So, you know, it's kind of a wishy-washy answer, I know, but, you know, I don't have a good, good number for that. Yeah. Okay, thank you. You might have covered this already and answering this previous question is, what are the business benefits associated with creation and use of a. By manufacturing process ontology by nimble members and the bio manufacturing community in general. Yeah, I mean, I think you could see the long game of being, hey, let's speak the same language and have compatibility across so that we can just streamline things like tech transfer from organization to organization. I think the sort of more intermediate term kind of practicality is that we can have internal streamlining of things like tech transfer of facility fit of these types of things without having, I think it is going to be a more challenging thing to impact the industry, but I think that is our objective long term is to have that kind of impact on the industry so that we at least have a way of translating from organization to organization. It may not be a one-to-one, but it may be that, okay, AstraZeneca is going to publish a process into the Nimble ontology structure so that it's then ready have to integrate directly to CMO1, but we're, you know, CMO1 is able to ingest recipes that are published into that structure. Great. So another question here is, has the ontologies for the various business processes highlighted been developed created already yeah i would say um the it's a it's a work in progress our our um our intent in terms of as we launch and go into production will be to have a planned roadmap for when. So if we think about the core ontology that has been built that is fairly comprehensive, there will be modules over time that are sort of maybe more domain specific. You know, that that maybe there's there's there needs to be a module that is specific around cell banking, for example, or maybe there's there's a module that needs to be built out that that that makes the ontology extended extended into into CAR T processes. Yeah. Great. And then there's another one. So do you align the ISA 88 process model, procedural control model, some legacy models like the chemical engineering's unit operation or some custom concepts terminology? For the – yeah, we did our best to preserve the ISA 88 standard. That being said, I think there were, there's obviously things that, you know, were extended beyond ISA 88. And, you know, one that I can throw out is the concept of units of measure standardization, you know, being able to bring in an existing ontology that already has solved that problem that, you know, that we can just tap into and so that that is um maybe outside the scope of is88 but it's something that still that that there's value in being able to have control of that aspect of of how you describe processes great thank you uh maybe just one more question um and if we don't get to your question i'm sorry there are a number of questions actually that came in through the through the chat. So we just don't have time to hit them all. But if you, if you really want to. You know, find that find an answer to your question, please reach out to me. My emails in the 1 of the 1st slides and I can get that answer for you. But I have one more quick question. Let's see. So bioprocess ontology is a big creation for standardizing nomenclature. Has Nimble looked into what other ontologies have been created and if and why they were adopted or not adopted? First of all, I would like this person to join our team in terms of the ontology work stream. But this is a challenge, right? So what makes you different than other initiatives, right? And, you know, we can look at IS-88. That's a standard, not an ontology. It tells you basically everything you need to know, there are other examples in our industry, like I think BatchML has had, you know, there was a lot that went into that. And why some things take off and don't, I think, is about the approach. How accessible is it? I think there's a reason we want to walk with members and others that are going to be adopting the ontology. We don't want to just throw it over the wall and say, here's this really complex thing. Just go run with it and try to figure it out on your own. I think our plan is to have implementation case studies that will be available to organizations that are adopting the ontology to not just sort of say, oh, we're done, we're going to publish it and we're walking away from it. We really want to create partnerships and kind of walk with those organizations that are implementing it so that they're not going to just sort of feel like, hmm, this is really complicated. I think I'm just going to stick with my relational database, right? I think that I can attest to that. I've been in that situation where I'm like the amount of education that I need to adopt this new thing is way greater than sort of maybe some of the shortcomings that I can work around in doing things the old way. Right? So, I think that's why some of these initiatives fail. You know, many of them have great initial concepts and ideas and intent. But, you know, we're working hard to not become that. Great. Thank you, Robert. And thank you everybody for sending in the questions. Sorry if we did not get through your question. Like I said, we got a large number of questions actually through the chat. So Robert, maybe go to the next slide. So if you want to find out any information on all things Nimble, including the Big Data program and ontology and the Big Data program please visit our website there's information on there you can also follow things that you you might want to get updates on and you won't get an overwhelming number of updates i think it goes out like once a week or something so thank you next slide robert again i just want to remind everybody that that the big data program workshop is coming up in the next couple of weeks. Please do go and register at nimble.org. And if you can, please join us in person. But if not, join us virtually. And even if it's just for the session that you're an SME in, we would really appreciate your input into the workshop. Next slide. So again, there will be a survey sent out after this after the webinar. Please fill out the survey. We value the information we collect from the surveys. And one carrot for filling out the survey is that you will get immediate access to this webinar recording. I would also like to remind everybody that there's another nimble webinar coming up on vaccine manufacturing initiative. It's next Wednesday, March 8th at the same time. And then next. So again, thank you to everybody. Thanks for attending. And again, if you need any, if you need any more information or need anything from the big data program, please reach out to me. My email is in this presentation that you'll have access to. Okay, thank you and thank you Robert for doing a great job. Take care everybody. | NIIMBL Big Data Biomanufacturing Process Ontology Webinar (March 1, 2023) | 3,411 | NIIMBL | 20230314 | Create “standard ontologies” and drive adoption within Biopharmaceutical community to enable sharing within and between different enterprises to reduce time spent in tacit knowledge transfer and support digitization initiatives. Develop reusable and extendable modular ontologies to enable future use cases with version control and support stable enterprise system lifecycle management. | 2024-10-11T11:13:03.547710 |
https://www.youtube.com/watch?v=m3EPg3FOaEc | So, let's get started here. So again, welcome to the NIPL webinar series. This webinar is going to be on open sourcing biopharmaceutical manufacturing ontology. Next slide please. So, let's take care of a couple housekeeping items here. Some webinar details are the. All those who are participating in the webinar will be muted throughout the presentation. So please use the chat function. To ask any questions or post a comment, this webinar will be recorded. And then, as a confidentiality. Reminder, the webinar is open to members and members. So, please do not post a nimble confidential or proprietary information in the chat or the. The Q and a next slide. So, now I would like to introduce Dr. Roger Hart. So Dr. Hart joins Nimble with 30 years of experience in the biopharmaceutical field, having focused on the process development spanning preclinical molecular or molecule selection through commercial. Go back one, 1 slide Roger. So, um. It's the through commercial process implementation and support for protein therapeutics prior to joining nimble. Dr. Hart was the scientific director process development at amgen. Responsible for responsible for external scientific collaborations within the academic interface and external outreach programs to universities and technology incubators. Earlier in his career, he advanced numerous products such as Mavs, peptide bodies, and proteins, and small molecules, as well as technologies. As part of the management and scientific staff at. Amgen and Genentech that the heart received his bachelor's degree in biomedical engineering with a specialty in chemical engineering from University of New Mexico. In 1985, and his masters and PhD in chemical engineering with a specialty in biochemical engineering from Caltech in 1991. With that, welcome again, and I'll hand it over to Roger Hart. Thank you, Jim, for that kind introduction, and welcome to all of you to the open source biopharmaceutical manufacturing ontology. We're here to share that on behalf of the big data program within NIMBLE. We have a vision within the big data program that biopharmaceutical manufacturing is transformed by data-driven technologies which optimize the productivity of biopharmaceutical manufacturing and accelerate the delivery of high-quality medicines to patients. We're joined by numerous organizations representing biopharmaceutical manufacturing, supplying their suppliers, technology development companies, small and medium manufacturers, government entities and the like in order to realize our vision. We're composed of five different work streams, which are aligned with the data lifecycle, which begins with data creation. The work stream for data creation utilizes high-throughput technologies in order to create data, and we're advancing privacy-preserving computing approaches in order to realize benefit from even larger data sets. line sampling and advanced multivaried analytics for a collection of data and transformation of complex signals to extract meaningful attributes. Data storage, which involves establishing data storage standards and analysis standards contextualized for process, raw materials, and quality. Predictions from data, which involves next generation, combined mechanistic, statistical, and hybrid models for biopharmaceutical manufacturing, process development, and other valuable needs. And then ultimately control from data, making difference in the real world. And we have projects there which are utilizing advanced control strategies in order to control critical quality attributes. We'll be talking today specifically about data storage with respect to ontologies. And I mentioned the objectives previously, but the big data, one team is really focused on standardization, contextualization of data to realize both its storage and acquisition and retrieval for valuable purposes. The 2022 goals were focused on developing ontologies for bioreactor harvest and protein A capture, a limited extent of the process. And as a reminder for many, ontologies are semantic information models, which are tailored for domain experts with clear, formal, and natural language definitions and labels. It represents the data domain through entities, their relations to one another, and literal values. The diagram you can see underneath that definition is a data graph, which is actually connecting the different entities by their relationships across the chosen domain. This is an image that's shown from an ontology editor app. And you can see on the left panel the collection of the different domains and classes with which the entities and the representative graph are shown from. So it's a very accessible and user-friendly means to understand a domain using natural language and semantics. Our 2023 goals were to continue to mature and self-develop additional ontologies, and then to prepare to mature using open source software approaches. And for those that are not familiar, effectively in simple terms, open sourcing development entails an originating software which has limited scope and uses as created by an entity, us specifically in Nimble, and made available to a partner through an open license so that the source code is now available for community-guided maturation, utilizing trusted developers with a growing scope of use cases, and utilizing a community-funded, secure platform, which entails a trusted repository with verified functionality to produce software assets that the community can adopt, including the industry, academia, government, and the like, and ultimately customize that software versions for their own uses. And then to mature the resulting software assets using controlled means in order to assure reliability and performance. We've been developing the Nimble Ontology using good ontology development practices, specifically design and develop should happen in the context of real use cases. This is real world values. And the example that we started with was to examine the impact of raw materials on process behaviors. That is to connect the raw material process intermediate lineage across runs to analyze the effect of the material on the process performance. We did that using comparative analysis wherein we could aggregate and analyze data across many runs within a process or across scales to support trend analysis of process parameters high value cases, and we anticipate advancing those within the open source development format. Additional ontology development principles that we embodied utilize maximizing the extensibility and interoperability to layered ontology architecture, making use of upper-level, high-value, interoperable, domain-independent, and domain-and-source ontologies, which exists to ensure interoperability among our domain with other existing ontologies, and to utilize focused ontology development practices with real data schemas and real competency questions. That would be an example of, for instance, an instance within a site or a plant, and to align with upper-level ontology frameworks which ensure interoperability among ontologies. This is exemplified within the nimble ontology, which is organized around IS-88 and 95 concepts, contains things such as the recipe components, critical process parameters and attributes, equipment control, strategy materials, and the like. And that this then supports additional sub domain or other use case extensions to allow very deep articulation of the semantics for particular additional use cases. This nimble ontology then provides a core modeling framework for cell line development, upstream process, and downstream process. It provides initial terms for end-to-end processes and a framework for future use case extensions I mentioned for additional detailed use cases. And in terms of the alignment with upper ontologies, we're connected to the industrial ontology's core foundation ontology, which contains concepts such as plan specifications and manufacturing processes. Additionally, to a highly utilized domain ontology for quantities, units, dimensions, and types so that the articulation and descriptions of the measures of things is aligned across many different ontologies. And then the whole of these ontologies are aligned with the basic formal ontology, which represents the key for interoperability among ontologies as it contains concepts such as continuant, which are things that are continuously present. They're not time dependent. They're independent concepts such as materials or dependent such as quality, but they're not time dependent. And then a current, which do relate to time such as process temporal or spatiotemporal concepts. So all of these ontologies are then are connected to one another using good domain practices in order to ensure that we have coverage and interoperability. Additional development principles were ontology development requires coordination and of design and application. In order to really ensure maturation for the value of the whole of the community, we've been preparing ourselves to launch open source ontology, which you'll hear more about, by taking our ontology and applying a variety of different practices in order to prepare them, including, for instance, modularization. Modulization is needed for ontologies in terms of publication, release, validation, maintenance, reuse, all the kinds of things that one would think of in terms of lifecycle management. And within that modularization, we applied, again, the design space concepts related to IS-88 and 95 specifications, and modules designed and scoped in order to achieve encapsulation, independence from one another, and completeness. The Nimble ontology was partitioned into those designed modules, and then relevant concepts were extracted from other existing ontologies in order to complete the modularization. A maturity rubric was developed in order to guide the release of completed modules to ensure achievement of needed attributes for users. Examples of these maturity include, for instance, annotation consistency, quality with respect to cohesion or richness of the representation in the domain coverage, and to assure interoperability through intermodule connectedness and lack of term redundancy within the portfolio of ontologies, ultimately to evaluate the performance for the ontologies for the specified use cases with competency questions. And then finally, for the specified use cases with competency questions. And then finally, to identify new use cases so that we have a good understanding of what was the ultimate scope and content associated with the biomanufacturing ontology and collecting these use cases from the community in order to ensure a common understanding of the domains and how are they are interoperated within companies. So we've defined and prioritized anticipated use cases and uses for the ontology from a portfolio perspective, and we've defined the expected level of user versus system interoperability and the frequency of module and module cluster uses within the various use cases. All of this preparation has really been focused on developing us an ontology which can be matured very reliably in open source contact. What you're seeing here is a depiction of the modularization scheme associated with the nimble biopharmaceutical manufacturing ontology. The various concepts are modules shown by symbols and each of course contains many different classes and many different terms and relationships in order to describe the contents. Specific examples include, for instance, the recipe module, which represents the hierarchical and tempor mature with time during the process of development as it's transformed from a general recipe to a specific instance recipe, which is associated with a specific manufacturing batch. And then the batch production record, which represents all of the records associated with the production to support the quality and analysis of batches, identify exceptional batches, determine root cause and analysis of deviations, and detect future quality trends. Additional modules include the physical module, which represents the physical spaces where processing and storage occurs. Spaces are specific and bounded geographic locations which can be hierarchical and embedded. For instance, I'm speaking to you from my office, which is located within my house, which is located within my city. An agent module represents various types of agents, actors, and biopharmaceutical manufacturing. These actors are, correspond to both human actors, operators, analysts, and the like, as well as engineered systems and organizations, because biopharmaceutical manufacturing can, for instance, involve contracted third parties. Additionally, there's a material module which represents material types and their associated properties. Terms for characterizing the material usage in internal operating systems are also included. Examples being, for instance, consumables or process intermediates or final products. And then the material procurement and storage module represents the material tracking and tracing in connection with the process implementation and control. Additionally, equipment module, which represents equipment types and their properties, which exemplified by, for instance, their capabilities and their functions, their types, the stages of their validation and completeness. Then the manufacturing execution module, which represents the actual occur execution of monitoring and or controlling for both laboratory and manufacturing scale. And ultimately, the parameter module, which represents the process intermediate and product quality attributes, process indicators, and process parameters, which, of course, no surprise, are embodied within the recipe module. That illustrates the degree of interconnectedness between these modules, exemplified by process parameters, which were embodied within equipment specification, which is located in specific locations and called out in specific recipes, which utilize, for instance, specific materials such as single-use consumables. There's benefits associated with open source in the ontology. I'd like to take a moment to share those with you. The ecosystem associated with production of biopharmaceutical manufacturers is very complex. We have suppliers, which are composed of business units, which make very many products, which provide then to the manufacturers of biopharmaceuticals, which themselves are composed of many business units and make many products, which is supplied then to customers, which have many business units and distribute and provide many products. We have a many-to-many-to-many relationship. And not surprisingly, the work associated with developing and supporting a many-to-many connections of data scores scales very poorly. The cost associated with those connections tends to scale as N times N minus one. Let's just call it N squared. It's very explosive in terms of the number of connections associated with this type of ecosystem, and well as within a manufacturer as well as within a customer. That makes it very difficult, of course, in order to go about scaling the connections of all of the different concepts associated with the data for use cases within any one of these individual domains. And that hinders, of course, efforts to realize benefits from data, because you need to be able to connect the data in order to address meaningful use cases. The cost associated with one-to-many connections alternatively scales very well. The one-to-many concept is exemplified by this diagram, wherein, for instance, the supplier, the manufacturer, and the customer are all mapped to consensus semantics for integration and translation, exemplified by an ontology. The cost for this type of architecture is dramatically less and scales as two times N, where N is the number of connections, basically mapping to and from a particular concept. Each enterprise benefits individually from having an internal integration translation data model, their own data model. However, each enterprise additionally benefits more if they share a common canonical data model for interoperability among other partners within the community. So the ontology can serve as a high value, as a translator among different enterprises. Development of a consensus ontology requires community collaboration in order to create an asset which everyone calls ours. Open source development lowers the barriers for engagement. It also provides permissive licensing, which also lowers the barriers for adoption of the ontology for use, and for technology providers to create derivative offerings of an added value without associated royalties and impediments. So I think you can see that there's dramatic explosive use of open source derived software such as Python and the like, which now dominates many applications. So our approach for open sourcing first is aligned with our vision, our vision is to have a publicly accessible ontology for the biopharmaceutical manufacturing domain, which is developed using open exchange collaborative participation and community development. The source codes is shared with permissive licenses to enable users to modify it for their specialized uses and to be matured by managed governance to ensure security and change control. Industry, government, and academic SME use the code and contributes to its improvement to ensure the project adapts to the needs of the users according to lifecycle management. We, Nimble, are the developer for the initial source code. Ontology source code developed by NIMBLE Big Data Program is a source code or transferred to an open source partner via licensing, permissive licensing, to create the initial product offering. Developers with collaboration control, a semi-centralized balanced operating models used whereby governance committee composed of funding source representatives and invited experts authorize new developments and approve software releases according to maturity rubrics that I mentioned before. And developers from the community can be part of the core developer team after completing training or proving themselves to be reliable, valuable contributors. All of that development occurs on a trusted repository platform components so that the module ontology files are the core components of the open source project. They are collaboratively developed by the network using other open source programs to lower the cost and ensure easy access platforms such as Protege and GitHub with governance oversight. Extensible programs and platforms will provide core functions and interfaces are used to enable third-party applications and developers to interact with the platform. That provides the opportunity for third parties to create derivative offerings of added value without the impediment of royalties. Then the ontologies are made available to and distributed. The open source project is a non-commercial and non-profit. All revenues are used to sustain the project. Permissive licenses for the open source code hold minimal restrictions on how the software can be used, modified, or redistributed, specifically, for instance, an MIT license. Then that offers the availability of the ontology and the software to the primary users for profit businesses, for nonprofit businesses, for government organizations, and the like. Academic customers are considered key contributors to the development and maturation of the free offering. Primary value derives from functional use of the ontologies by enterprises, and key users of the ontology include software developers, software system integrators, and knowledge engineers who create the informatics tools to support end users within their enterprises. And, of course, this middle section is a managed collaborative effort, which is hosted on an external platform with a low cost profile from nonprofit. It's funded by a sponsorship model with governance for quality oversight to ensure that there's also means for lifecycle management improvement to address and resolve errors. Our approach for open sourcing is to utilize the Biomanufacturing Industrial Council, which was created for open reference ontologies for biopharmaceutical manufacturing. It's composed of the nimble seeded biopharmaceutical manufacturing ontologies. It has a sponsorship model, which I'll describe more in a little bit, and it has governance teams associated with it, a governance model, a technical board, a technical oversight board, and working groups in order to create more biopharmaceutical manufacturing domain-specific ontologies, which are all interoperable with the original seeded biomanufacturing ontologies. The Biopharmaceutical Manufacturing Council is embedded within the Industrial Ontologies Foundry, which works with government, industry, academic, and standards organizations to create open reference ontologies that span the domain of digital manufacturing. The IOF has ontology development policies and practices. It maintains a digital platform for the maturation and automated management of ontologies, and itself is composed of a governance board, a technical oversight board, and working groups, all of which is then located within the Open Access Group, which is a U.S.-based nonprofit standards organization established in 1995 for the purpose of addressing international standards for data, and it maintains permissive licenses and is a nonprofit organization, which maintains nonprofit status and provides revenue or provides monetary resources for the maintenance of and management of the IOF and the BMIC. In terms of the operating model, as I mentioned, the IOF is composed of a governance board, technical oversight board, and the working group. Well, similarly, the Biopharmaceutical Manufacturing Industry Council also has a governance board, a technical oversight board, and working groups, so that this is a semi-autonomous operating model, which utilizes collaborative principles in order to assure good development practices. Specifically, the biopharmaceutical manufacturing ontology will be developed within the Biopharmaceutical Manufacturing Industrial Council. And relationships between ontologies developed within IOF and those within the BMIC will exist and they will be managed to ensure that there's no redundancy and all ontologies are interoperable. The demarcation between the reference ontologies and the application-specific ontologies, extensions for specific applications, will be collectively established and maintained and managed. It's important to understand that there is domains of overlap between ontologies, and it's necessary to manage them to ensure interoperability and avoid redundancy. An illustration of that is simply shown below. The IOF ontologies currently contain ontologies in relation to material science and systems engineering, which clearly have importance of biopharmaceutical manufacturing industry council extensions in relation to materials and equipment, which contain concepts and semantics, which are very specific for biopharmaceutical manufacturing. This shows the instance of hierarchy. It also shows that there would be a high degree of interoperability alignment that occurs owing to the collaborative nature of the operating model devised. So, NIMBLE will continue to operate in partnership with IOF using a sponsorship model. Ontologies will be created, released, distributed, and matured within IOF. And there's benefits associated with membership of IOF. I'm joining IOF. I hope you do too as well. By being a member of the IOF, any biopharmaceutical industry council member has the right to join any IOF working group. There's a lot of different working groups working in a lot of different ontologies of high value for your enterprise. Things as, for instance, maintenance, things as, for instance, procurement and purchasing, all which are endemic and extensible across all industries. Those types of ontologies are being developed and maintained within IOF as we speak. Similarly, any IOF expert has the right to join any BIC working group. Well, that's highly valuable because there's a high degree of expertise presently in IOF, and this cross-fertilization of teams helps to ensure that there's a high degree of alignment to ensure interoperability and avoid redundancy. The Governance Board and the Technical Oversight Board within the biopharmaceutical manufacturing can form any working group or technical group that they deem necessary. And the working groups develop the ontologies. So you can be part of that development process by being part of the working groups by being a member of IOF. The technical oversight groups develop technical contents other than ontologies, for instance, software that can help with the ingestion and use of ontologies. Nimble will be working with IOF specifically through the sponsorship model. We will identify use cases and identify how those interoperate within the BMIC modules and the IOF ontologies that are existing. We will develop those use cases with competency questions and ensure the high quality delivery of that ontology asset, including, for instance, including a member assessment of the ontologies to ensure that it's fit for the intended purpose. That sponsorship model is available because when we created the Biopharmaceutical Manufacturing Institute Council, we created it with a sponsorship model. As a result of that, any IOF member has the right to finance a project which will be beneficial to the Biopharmaceutical Manufacturing Internal Council or the IOF. The party member providing the sponsorship has the right to set the scope of the utilization of the given funds. It's not just transfer funds, it's specification of how the support will be used by the funds. Upon determining the objectives, deliverables, and the targeted project timeline of the IOF member should present the plan to the BIC Governance Board to ensure that there's good understanding of the project. And then the IOF or the Governance Board or the BIC Governance Board would then take the responsibility to communicate the information to the appropriate stakeholders to ensure there's no overlap of conflicting work and approve the work if deemed necessary. In the case of a deliverable that's an ontology, it would be subject to the final release procedures and approval by the technical oversight board and the governance boards. In the case of a deliverable that's a report or some type of assessment, then similarly there will be a presentation, the report would be made available to IOF. So this is a depiction then of the opportunity to be able to describe valuable use cases, provide them by a sponsorship model, which contains, for instance, support by data scientists of academics in order to accelerate the creation and support through member, nimble member assessment to ensure that the ontology once developed is fit for purpose against its intended competency questions. And then for those ontologies to be developed within IOF and ultimately released to the community via the open source model. Thank you so much for your attention in this seminar. I hope that you're very interested like I in joining IOF in order to mature these ontologies. Happy to answer any questions that you might have. Yes, Roger, we have a few questions, but I'd like to remind everybody that you can ask a question through the through the chat feature. And I'll read them out to Roger for for the answers. So okay so why don't I get started with a couple questions here. There was a question early on which ontology editor is the team using? Oh well that's good. Well, we have an ANZO derivative that we're using now that we've been using within Nimble, but all of the ontologies have been organized within their modules with RDF files, which can be ingested by Protégé or any number of open source ontology editors. So, if you have one, I'm quite certain that the Nimble ontologies can be ingested and used with your editor. Great. Okay, we have another here. Can you provide a real-world example of how biomanufacturing floor operator or entry-level analyst can help develop these ontologies? And they have a couple of examples like interviews, actual deviations, errors with current processes, et cetera, bottoms up? Well, you answered your own question. Remember, an ontology is a semantic model. Semantics are words. We seek to encapsulate the meaning using the right words and the right relationships associated with different uses and different concepts. Well, the people closest to the work have the best knowledge of the words and their relationships. They're using the words and communicating the concepts all the time. So there are subject matter experts for various domains within your companies that actually have high value owing to their expertise within subject matter expertise within a domain. So for instance, you have quality analysts that are very familiar with the vocabulary associated with quality analysis, the various assays that are used, the nature of the deviation and investigation process. Similarly, you have experts within the material procurement area, which fully understand everything about quarantine of untested materials, the assurance that materials apply within or align with the specifications associated with their use, and all of the policies associated with distributing raw materials for use within the plant. The list goes on and on. The people within your companies, they really understand their domains well and the associated semantics to clearly articulate the work and the concepts involved. I hope that answered your question. Thanks, Roger. We had a couple more come in. So, it sounds like this is a question from somebody. It sounds like the. They sounds like the IOS approaches how you intend to proceed. How do you deal with interoperability with BFO ontologies that says BFO has a lot of standards in order to be accepted and published? Are you planning to meet those? Oh, yes. We're absolutely aligned with BFO standards now, as is all of the IOF. Everything that we build will be aligned with the BFO standards. And for those that don't know, the BFO standards are ones that were created really to help to organize many of the original ontologies that were developed in relation to biology and genomics. Because our space of biology and genomics is actually among the leaders in the creation and adoption of all ontology concepts. An example, for instance, would be the Oboe Foundry, where many of the ontologies related to biology concepts are stored. The Oboe ontologies in their original creation really lacked a tool to be able to help ensure that interoperability, and it is the BFO ontology, which has really closed that gap. So, the BFO ontology is really the key for interoperability among many ontologies. And it's really, to be honest, you know, among the key issues associated with ontology creation is ensure that you're aligned with standards and policies for interoperability. There's a real danger associated with creating isolated silo ontologies. They can work well within their intended domain, but they don't operate well with others. And that is a problem associated with the design. So we've been intentional in our development in a hierarchical manner to ensure that we were aligned with top level ontologies that have maximum interoperability with others. Thank you for the question. Thanks again. So there's been a couple related questions. So I'm just going to kind of compile it into one question. Does a company need to join Nimble or OAGI IOF to have access to biopharmaceutical manufacturing ontology? Oh, thank you for that question. We're pursuing open source principles, so you don't need to be a member of anything to have access to the ontologies that are being created. You would just need to download it from the IOF website and accept the permissive license associated with that download. That's the essence of the open source approach. Lower the barriers for adoption. Make the ontologies available for everybody in order to ensure ingestion and use of an aligned consensus standard. Are there benefits associated with joining IOWF? Well, of course, you can be part of creating future ontology modules. You can influence the type of modules that are being created if you're participating on the governance board, which makes decisions about priorities for different added work. And, of course, you can then bring your subject matter experts so that you can ensure that the ontology that's being created, the consensus standard, aligns well with your own internal practices. Of course, by being a member of NIMBLE, then you can benefit yet further because you can contribute to definition of the sponsor projects, they're sponsored both to incentivize their being done within IOF, but more importantly, they're sponsored with assets to help to ensure that they are created quickly. And that they are created according to the very best practices address all the questions that were asked in terms of alignment associated with the BFO ontology and the like. And their dedicated presence on the project owing to their stipends being paid by through the sponsorship means that there's a greater ability to be able to accelerate the creation of ontology. And then additionally, NIMBLE with its members, we will be evaluating the ontology by applying it in different contexts against the data systems that we have within our enterprises. So we will be testing it to ensure that it is meeting the quality standards that are necessary for us being fit for purpose. So clearly there's a benefit associated with being with Nimble because you can access that sponsorship model with others in order to accelerate the creation of ontologies of value to your company. So I hope that answered the question. Everybody benefits, whether you're a member of anything or not, by simply downloading because we pursued open source. But there's added benefits associated with being either or a member of IOF or NIMBLE. Thank you for the question. Okay, there's a few questions related to this same topic also. And you may have touched on this a little bit already. But with the with the ontology be applicable across different companies across different biopharmaceutical classes of products such as. Biologics, small molecules, those type of things. Oh, that's a very good question. Thank you. I've been intentional to use the term biopharmaceutical. Biopharmaceutical. In our own internal developments, we have been focusing on biologically derived products. But clearly there is a overlap, right? The types of practices, equipment, recipes, et cetera, involved in the formation of synthetic pharmaceuticals is a close cousin to biopharmaceutical products, such as fusion products, which actually contain combinations of small molecules that are conjugated with biologic molecules. So there's actually overlap between the two. So again, it's our intention to develop ontologies within a common environment in order to cover all. Well, we could conceivably develop pharmaceuticals within the biopharmaceutical manufacturing, or if deemed necessary, we could create a pharmaceutical industry council within IOF if it was necessary for there to be independence. I personally think there's a high degree of overlap. I think we could probably incorporate it within biopharmaceutical manufacturing. But having created terms of the ontology concepts that it will embrace. The best way to answer that question or influence that question is to join IOF. I hope that answered the question. Okay, so then another one is, is this ontology available for download now? And let's see. Well, let's start there. Oh, that's a good place to start. We have completed our modularization and transferred all of the ontology data files to IOS. There are ontology experts within IOF that are currently placing those ontology modules onto their GitHub and applying the hygiene principles, et cetera, in order to ensure compliance with the policies of IOF ontologies. And they are doing all that to ensure that they're really compliant with IOF needs. They will place those ontologies in a, I'm trying to remember the right term, a, we'll call it, you know, in process state, and that will be available on the open GitHub for download. It is not in a release state because it hasn the maturation of the ontologies will be within IOF to really take use cases and identify what's the full scope and expanse of the associated modules and complete them to bring them to a state of completeness for release and make them available on the master branch of the IOF GitHub as a released ontology. Thank you. Okay. Let's see. So, if my company already has an ontology, how can I use the Nimble ontology, the Nimble biophineer manufacturing ontology? Oh, well, that's a fine question, too. So, if you already have an ontology, you're already benefiting from ontologies, and you know what the value is. Good for you. Well, you can continue to utilize your ontology within your company. Your company, you know, has invested considerable assets for using it. I would argue that you probably ought to stay the course with your ontology. However, other enterprises don't have your ontology. And so the ability for you to be able to easily query or transfer data from other enterprises, say within the context of procurement of materials or supplies from suppliers, or within the context of transfer of data or recipes for use with contract manufacturing, right? Well other entities don't necessarily have your ontology unless, you know, you did it and created it in collaboration with them. So the consensus ontology that we're creating within the IOF has the opportunity to be the intermediating ontology. That's a way of thinking maybe as a translator. You get two different persons in a room and they don't speak the same language and you have a translator. And suddenly now you have good communication between the two persons. That's a simple metaphor for how the IOF ontology can be used by your company in order to support your data interoperability with other enterprises, wherein you effectively would map your ontology to the IOF ontology, and the partner that you want to translate data with would do the same mapping according to the diagram that I showed earlier, and now you would be using the IOF biopharmaceutical manufacturing ontology as a translator to be able to share and communicate data across your enterprises. Thank you for the question. Great, so maybe just one more quick question, and then we move on to the closeout. So, AFO or Allotrope is a very important ontology in this space. What is the team's approach to working with that and other ontologies that overlapped with or closely associated with this space, what is the team's approach to working with that and other ontologies that overlap with it, with or closely associated with this space? You may have touched a bit on this already, but this is a question that came up. Oh, no, it's an outstanding question. So, you know, we're urging portions of the Allotrope Foundation ontology that were specifically made open source and available to the community, right? We're utilizing what available open source ontology assets are available. Allotrope has a large amount of ontologies in my past history. I've worked with them and others, right? In order to help create ontologies. And so there's a lot of valuable ontology concepts that are out there. It's my hope that we can all work together because for instance, the Nimble ontologies were transferred to IOF via a data transfer mechanism that could be used by others. We have a sponsorship model that's not limited to Nimble. As you heard from what I read, it's available to everybody. IOF could sponsor creation of ontologies. Excuse me. Allotrope could sponsor creation of ontologies within IOF. Similarly, Nimble transferred its ontology assets to IOF by a permissive license. Well, Allotrope could do the same thing so that the valuable Allotrope ontologies that are available could now be joined with the other ontologies and we could be using IOF as a central repository to ensure interoperability, reliability, and make them all available as a portfolio of ontologies for download and use by the community. So in short answer, there's, you know, certain concepts that are mapped that were available because they're open source, as are other ontologies not created by Nimble or the, you know, BFO related. So, yes, I highly encourage collaboration among all the different parties that are creating ontologies. I believe IOF is the way to do that. And I would encourage people with questions to speak with their partners that they're working with to develop ontologies and start a conversation to see how it is that we can provide opportunities to really link and join these valuable ontologies. Thank you very much for that question. Great. Thank you, Roger. So we had a couple people ask about the availability of this recording. So I'll field that one um and then we'll move on to the close out uh so the recording will be available and uh it'll be available in a week or two uh on the nimble uh portal so please please go over to there to access this recording in the in the very short term or near future. So, okay, thank you. Next slide, please, Roger. So, I'd like to remind everybody that you could follow the Big Data program, which is where the zontologies were developed or any other projects within nimble by going into the portal and and then navigating to the program or project you're interested in and and clicking on the follow button right there whenever updates will be added to the to the portal you'll get an email once only once a week um that there's been an update and you can navigate to the portal with a with a link from that email so uh next slide please roger so like that let everybody know that the next webinar is march 27th at noon Eastern, and it's going to be related to evaluating next-gen sequencing as an alternative method for replacing the in vivo and in vitro adventitious virus detection assays. And it's being presented by Dr. Khan from FDA. And you can go to nimble. Events to get more information about that. Next, and as a reminder, the, the national meeting for nimble is fast approaching. It's coming up in June 25th through the 27th in Washington, DC. And this, this national meeting is open to members of NIMBL as well as federal stakeholders. Registration will be opening in a couple weeks in April. Again, you can go to the NIMBL portal to get more information or follow this nimblenationalmeeting.org. Next slide. And for those that are interested, there's still time to register to attend the U.S. Biopharma Workforce Partnership Conference, which is going to be taking place in Boston April 17th through the 18th. Again, navigate to the Nimble portal and go to the events area. So, to register next slide. And so we'd like to thank everybody for for attending the webinar as well as asking such such great questions. Thank you. Thank you all. | Open-source Biopharmaceutical Manufacturing Ontology Webinar (March 20, 2024) | 3,550 | NIIMBL | 20240403 | NIIMBL's Roger Hart will provide an update on the development of the Biopharmaceutical Manufacturing Ontology (BPMO) platform. The BPMO will define the properties and relationships of concepts, enabling data engineers to efficiently capture and more readily access and transfer relevant data about the process. The opensource platform will ensure broad access across the industry and allow for continued modifications and expansion. The BPMO promises to be a game-changer as our industry looks to use data to gain deeper insights into manufacturing processes. | 2024-10-11T11:15:33.632664 |
https://www.youtube.com/watch?v=rde3ak_H70Y | here we go hello everybody good good morning good evening good afternoon and thank you for joining the going meta episode two of season two with uh jesus how is it how's it going it's going very well but the question is how's it going for you because you're the one that's away from home today how is i'm away from home yes uh it's uh it's devox in marocco in marrakesh should be precise happening this week so we have a little presence there we have three sessions in the program we have a little stand um so i don't know it's probably very late but if you are happen to listen to this now and are in and around marrakesh come by the uh the event and then say hi um i'll be there uh obviously uh from from tomorrow on but yeah i arrived here it's very warm not to say hot uh it's very nice so it's good very different very different from what we're experiencing here in the in the uk i don't know if you can hear the rain hitting the hitting. No, I cannot hear the wind. It's a wet afternoon and pretty cold. So yeah, different experience, I suppose. But yeah, definitely. Exciting over there. Well, we hope that your Wi-Fi connection, so far, you know, you come great, loud, clear, and great video. So looking good. That's good. That's good. Yeah, I hope it holds so far it's been stable uh since uh since i'm arriving here so i think it's it's it's going going well but should i disappear all of a sudden you know why i'll either continue or go with some dad jokes like we've done in the past right yeah exactly Yeah, exactly. Yeah. So, yeah, I'm looking forward to the second episode. So we kicked it off last month with, you know, using, it's a mouthful. Our title is getting longer and longer. I know. So I thought maybe just to make it a little bit more readable and probably the season two, episode two can go at some point. But still, it is a long title. And we kicked it off last time with the trip into how to use ontologies, why they make sense and why it is a good idea to add ontology to guide knowledge graph creation from unstructured data. And we had an early peak, I'd say, into the Knowledge Graph Builder app that has been released a couple months ago. Yeah. And today, the idea is to take this in the next step because it's a lot and we couldn't really cover everything in the last episode. So we look at it some more, but this time with more code. So this is more or less a part two. If you haven't watched episode one last time, it's fine. I think you get up to speed quickly, and you can always recap on episode one later on. So as usual, obviously these are stringed together as to some greater idea, but it's not really necessary to have watched uh the previous episode i'd say that's right yeah and we'll do a quick refresh here on what we did last last month and and connect it with that because this is um you know kind of a different approach to the same same type of problem so yeah that's the plan good any announcements should we get started any thing we should know anything coming up anything coming so i can i can do this now yes um we can we can talk about this now um so the um the next session after this one is october 8th so that's uh next week 8th of october we talk about data-driven career guidance with GraphRack so I'll have a new guest on the show next week to talk about how to you know find the right people for the right job and how to develop yourself in your skills and your knowledge basically further on then Road to Notes is happening so that's another one happening also this week on Thursday so in two days time we do app building on Aura. So that's another one happening also this week on Thursday. So in two days time, we do app building on Aura. So if that's something that you're interested in, check out the Road to Nodes website. I'll share it in the chat as well, where you can register and can subscribe to this upcoming session. We have the others upcoming are Mastering Graph Rack and Learn Neo4j Bloom, so graph data visualization course. So these are coming up. But obviously, this is all leading towards the big one, which is nodes in 7th of November. So in a couple of weeks, and obviously today, this is as early as a going meta episode can be on the first of the month. But on the seventh, so it's a couple more weeks. So we have NOLES 2024. The agenda is now live as well. So if you check out the website, if you go to the NOLES website, you can see the agenda, see all the interesting sessions on the day. So yeah. I think that's what I want to get out to you ahead of or the episode depending on time I can always repeat that but I put the links in the chat for now perfect so I'm probably gonna share my screen and get started let me know if you can see it well first of course i'm gonna have to share it otherwise if i share this window will it be happy with him yeah okay perfect so here's the super long title extended even more but yeah the idea like you said is to continue what we left last month. So we were looking at building a knowledge graph. And we tried before in towards the end of season one, doing that from structured data. When data comes from tables, from databases, we looked at how to get the help of LLMs to do that, and even without LLMs. So we've done that in many different ways over the episodes. But since last month, what we've tried to do is unstructured data. So we're looking at data that comes from web pages, from documents, from whatever, but it's kind of a more heuristic approach. We're reading text and we're trying to structure it in a way that we can ingest it into our Knowledge Graph. And in episode one last month, we looked at the Knowledge Graph Builder. That was the kind of no code approach. It's a great app that you can go on a browser and feel free to review the episode, the recording and the assets. But all you have to do is point at your data sources. It can be, like I said, PDF documents, YouTube videos that you want to consume the transcript of, or like we did, web pages. So you can point at web pages, and the app will read them, will generate the knowledge graph, and will give you a... And thinking about it, I don't know if we even tested, but once you build a knowledge graph, not only you can query it in Neo4j, of course, but you have a conversational kind of a chatbot. I don't know if we tested, did we, Alex? So... No, I think we didn't go that far as far as I remember. Yeah, but the great thing about the app is in a single kind of environment, without having to write any code, you point at the sources, the knowledge graph is populated, and you have a chatbot that you can interact with, and that consumes the graph that you created. Now, we saw that there were some, and that was the core of last episode, right? So we actually, let me move to the next slide, because this is kind of the refresher of what we did, right? So we saw that we can do that without any additional information, but sometimes we want to take a little bit of control over the knowledge graph that was built. And the way to do that, it was through the app to provide, it's not so much an ontology, but a list of the labels, the types of nodes that we want to create, the types of relationships. And in our example, we saw how to do that by picking them manually from schema.org, from a standard ontology like that one. So we saw how we could pick concepts, classes, and relationships, like they call it properties in the RDF world, and we could just feed them to the app, and it would create a graph that's a bit more predictable in terms of the schema. But we saw that that was a limited approach, because we, for example, could not set properties or have control over the source and target of relationships. We could say there is a relationship of this nature, but we couldn't be more specific about what these relationships should be connecting. And you probably remember this funny situation where it was connecting Mozart to a location through a composed, right? So there was a bit of uncontrolled generations, which is to be expected when you throw an LLM into the game. But that was the idea, right? So we kind of show how we could, to some extent through the app, use an ontology to kind of put some guardrails on the graph that's generated, And ultimately, we did some validations using Shackle. But the idea, as we see in this diagram, is we have some unstructured data in different sources. We use that through some programmatic API, some form that involves LLMs, to populate our graph in Neo4j. And once it's in Neo4j, we can interact with it and do loads of things. So last episode, using the Knowledge Graph Builder. Today's episode, we're going to look at that using code, using Python. We're going to do exactly the same. So, that's the idea. We're going to get some unstructured data and build a Knowledge Graph, and we're going to get the help of LLMs. But this time, I'm going to do it in Python, and I'm going to show how we can use an ontology in a bit more detail so how we can be more precise and more strict about the the structure of the graph does that make sense yeah sounds good and we kind of can answer the question that came in chat just now from create the imaginable who missed part one um and if we were using an open source AI ontology. So I don't know if you're going to do this at some point, but in the past episode, we did kind of like crudely created our own ontology just to make the point. I mean, I can probably click back, and we still have the slides from last time. But we, I don't know if it's mentioned here. No, it's not in the slides. But we use schema.org, and we, I don't know if it's mentioned here. No, it's not in the slides, but we use schema.org and we extend it manually. So schema.org is a standard, is a public one, and we added some elements that were convenient for the, and we're going to do something similar today. And yes, but in any case, whether they are public standards or something that we create, it's going to end up in our GitHub repository. So you're going to be able to take it and use it and modify it. And yeah, yeah, absolutely. So yeah, I wanted to kind of do a more detailed description of the elements involving this. But that's, you know, I run out of time, so I'll fix that and when we upload it to GitHub, you'll have a better described diagram of the elements. But basically, the data that we're going to be using, it's one that we've played with in the past. I mean, we love the Tate Modern catalog, so... Yeah, that's great. And it's a great website, and, you know, it contains great textual description of some of the artworks so I thought why not you know doing a search for David Hockney for example and we picked a couple of two or three of the of his works and that's what we're going to be using so you can you have the url there and it looks something like this so if you click on some of the artworks, you will get the text. And what I've done is I've extracted the text and put it on text files. But you can do that in many ways, depending on what stack you use. But if you use Langchain, you know that it can go in pages and extract the text. But I'm going to focus on the ontology part and the guidance to the creation of the knowledge graph, rather than on the extraction of the text. That's why I literally collected that in a file, as we're going to see, and we're going to consume the text from it. And we did that a couple of times also. Looking better in season one, the Tate Modern data, I think it was also the New York Met Museum at some um museum at some point so if you if you want to know how this works check out the the previous episodes and uh you'll see it a little bit more i say long form yeah so yeah the the the approach that i've taken and we're going to go in details into the the text uh and the code is to um remember that the the and actually we mentioned that we might bring morgan or neil we still have to do that to to discuss you know maybe on the knowledge graph builder right because you know it's open course and some of the prompts that i'm going to use today have been inspired from the ones that exist there. So you feel free to explore the coding GitHub of the Knowledge Graph Builder we shared last week. And that's basically what we're going to be using today in my own kind of custom version, simplified version for these exercise. So I had asked, but unfortunately they are both out this week so it didn't work out but i'll ask them again for um for a couple of minutes uh next episode maybe we can do that next time yeah so uh and and you probably remember that uh the one of the things interesting things about the knowledge graph builder that we saw last week was that it created two different graphs. One that was a document graph that was based on chunking the content, the text, and creating basically a linked list of chunks. And then linked to the chunks, we had the actual domain graph that talked about the entities that were mentioned in the text. So in this case, whoops, there's an animation. In this case, I'm going to focus exclusively on the domain graph. So again, the chunking is something that we've looked at before that you can do with Lank, Chain, and many of these libraries. It's just about defining the size of the chunks and build. But I thought it was more interesting, the domain graph, which is basically the entity extraction part which is the one that's more driven by the ontology so we won't generate the document structure graph we will focus on them what we call the domain graph basically the entities that are mentioned in these in these articles okay so that's uh that's the introduction and with that you know let's get our hands on to the experiment so good yeah so i don't know if this dark background or is better than the other one hope that's readable i think dark is fine i think you need to zoom in a little bit yeah the problem is uh maybe it's more if uh yeah if uh i can try but it might get off the screen but anyway that's well that should be should be yeah right let's try this and uh yeah if if it's really not working let us know in uh in chat please perfect so uh you know if if we um let me hide this the idea as we said is we're going to ask an llm to do the work for us and and extract a knowledge graph out of some unstructured data. And to do that, we're going to create, well, we're going to use the system setting, and that's kind of a slightly modified version of the one I mentioned you can find in the knowledge graph builder, right? So I'm telling the LLM that you're an expert in extracting structural information out of natural language text. And you can extract entities with their attributes and relationships, et cetera, et cetera. And you can produce the output as RDF triples or as Cypher. In this case, we're going to use Cypher, but we could generate RDF triples as well, depending on the request. And basically, the prompt looks something like this. It's pretty, I'll go into the details in a minute, but you see that, right? So given the ontology below, and I'm going to show how to pass the ontology, run your best entity extraction over the content, and the extracted entities and relationships should be described using exclusively, let's say must be described, to be more, must be described using exclusively the terms in the ontology. So I'm being very explicit about the fact that I want the LLM to just describe the extracted entities using this vocabulary, the ontology that I'm going to provide, and in the way they're defined. And by that I mean, explicit about it, this means that for attributes and relationships, you have to respect the domain and range constraints and domain and range if you remember is for our relationships where does it start and where does it end right that's the domain in the range for example the author relationship connects a book with the author right right? And that's the direction. So the domain would be book and the range would be the author. So that's what we specify in an ontology. We're gonna see that in a minute. We've seen it in many episodes before, but that's what we specify. And in this case, I'm asking it to return the output of a cipher using merge. And the merge, of course, uh for linkage of nodes of course if i do multiple passes i expect entities to be uh to be mentioned in multiple documents and multiple text sources and i would like that to build over time as i run the process uh to build a a single graph and merge merge is the perfect way to do that. I understand that this is not very rigorous, and we'll leave that for iteration three or part three of this because that's. So but this is kind of the entity, the duplication, right? So we're going to see that David Hockney is going to be mentioned in multiple sources. We, for this case, we're going to merge them on the name, which might not be perfect. Probably there are not many artists called David Hockney, but you can think of things that are on many locations that are called Boston, right? There's one in the UK, one in the US. So you have to be more precise and we'll go into the space of unique identifiers and that's that's a separate topic that I would like to leave out for today let's assume that we simplify the problem in that way we're going to focus on the structure of the graph right and I say absolutely no comments on the output sometimes the LLMsMs, you know, they add something like, this is the model that I've come up with, boom, and the output. I want it to produce something that I can use to populate my graph. Yeah, that's important. And I think here, another comment here from Saha makes a lot of sense just to remember, be as precise as possible with these prompts and be as narrow as you possibly can be. So here, Saha says, must is really important, otherwise it sounds it does not listen. And that's probably true. If you give it a range or some vague description, it'll vary a lot and you don't really know what you get. But if you say, okay, this is what you have to do this is what i want and then this is you know still probably in some way but uh you know on things like the temperature right and how yeah exactly for creativity you leave and maybe you run the same the same code on the same source and you get slightly different things. But with these, at least in the test so far, and I get pretty consistent results. And that's the reason why the ontology. So what I pass is with this prompt, I concatenate the ontology. I'm going to show in a minute how do we get it. And the actual content, which is the natural language text that we get from the Tate Modern catalog pages. So that's pretty much it. Then I pass the system information in one single call. In this case, I'm using directly the Python library for OpenAI. I didn't use a Langchain. If you want to see Langchain, we'll use it in other episodes. But in this case, I go with this simple approach. So just the system information that I show at the top and then the prompt and I'm using GPT4O in this case, right? So that's what I'm doing. Now let's go through the different parts. So we are passing the ontology, right? The ontology that I want my extraction to be aligned with. So if we look at the code, we see that... let me do one thing at a time. I'm going to comment this. So the ontology is read from this file. So it's called... I think it's called art.notfinal. Let me fix that. That's the name of the file, as you can see. I'm going to show it in a minute and for that i'm reading it using uh rdf lib which is the more popular python uh um parser well it's a it's a python package that deals with rdf and a number of things so we um we parse the the file that i'm going to show in a minute and we just serialize it as as ttl i mean literally we read it we could have just read it and pass it through, but I wanted to show this library because we're going to use it in a minute for some more sophisticated things. So that's how I get the ontology. It's simple. And then the content, I read it from the files, and you will see that we have some files here, and they're pretty straightforward. So what I've done is I've copied the text text and we have a text uh description and just the text information that comes with each of the of the uh artworks right so that's basically copied from the the website uh about the description yeah to simplify the code i mean i'm just reading from this file but you could go access to http paint and extract it blah blah but you know the interesting part is in the ontology. So we're using this art ontology. For those of you who are familiar with Turtle language with RDF, you will see that there is a bunch of property definitions. But this has been edited. And instead of showing the code, I'm going to show. I must have it somewhere here. I've edited it with... Of course, I logged out, so let me log back in. So I used Web Protégé. Web Protégé is an ontology editor, one of the free one that you can access online. And in it, you will find a number of things. What I've done is define a very simple ontology. And the reason why you see so many items here is because I've linked it to, let me see where it is. Here we go. To the schema.orgontology, for example. And I've done it intentionally because I want to pass myorg ontology, for example. And I've done it intentionally because I want to pass my own ontology. It's true that to some large language models, they might be, because they're trained on public data, they might be aware of schema.org, right? So if you tell chatGPT, well, GPT-4 in this case, produce the output based on schema.org, maybe you don't even need to pass it. I mean, I wouldn't trust that because I've seen it generate some, you know, something accurate, but something generated. But in this case, I wanted to show how we can pass our own ontology. And in this case, just good practice is to, rather than creating a random one, what I've done is I've created it extending schema.org. And there's a bunch of of uh of classes that i've created not many but um let me find it here on the thing there was uh person and under person i've defined artist right so this is uh it's a bit small we can be there can i make this bigger yeah i can oh yeah this is better yes this is good yeah so it shows it in in the whole hierarchy so person as you can see is a schema.org one so that's not me that's i i'm importing that ontology and what i've defined is artist by the notion of an artist and you see that it's my own because it's it my own namespace. I created neo4j.com slash vocabulary slash art which is my art ontology. So I've defined the notion of an artist and I say that an artist is a subcategory of person. So I define artist in the same way I've defined, I believe I've defined artwork under creative work. So there's plenty of them, but I've created artwork which was not there as a subcategory of creative work. And you can see that I've added a description which also helps to the LLM. That's an artistic creation. In the same way with the person, the artist, I said is the creator of an artwork. What else was there? I think there's relationships that I've created. For example, there's object properties that we call relationships in Neo4j, like was created by. So was created by, as you can see, is a relationship that connects an artwork to an artist, right? So he's kind of the the creator but specifically it's kind of a refined relationship that's specific for artworks and artists i've also created a represents a represents is kind of a relationship as you can see that connects an artwork with something that's in it so this is a portrait of alex so i would create a note representing alex uh an artwork and I would create a note representing Alex, an artwork, and I would say, Alex is depicted in this, and the name is represent. So that's how I've named it. And importantly here, that's what I was talking about, domain and range. So the domain is where the relationship starts. It's an artwork that represents a person. In this particular case, I focus on persons, not something in general, but persons. What else? There's the notion of an influence. So an artist influences another artist. So that connects an artist to another artist, indicating that the former is an influence to the latter. And this is, of course, a reflexive, not reflexive, but it connects an artist to another artist, indicating that this has been influential to these other ones. So anyway, there's a bunch of relationships there. There's represents, there's created by, and there's some data properties, so like the attributes, and I've created my specific one. For example, instead of using the generic name in schema.org, I've refined it to create artwork known as, right? So it represents the title or an identifying description of an artwork. Sometimes the things don't have a title. I mean, paintings tend to have titles, but maybe displays or other types of artworks maybe don't have such a precise description. So it's kind of a more general term. So I call it artwork known as. And its domain is it applies to artworks and the range is a string so it's a value that indicates that it's not a relationship but a property of an artwork and the same for artists. I've defined professional name as an extension of the name so it's the name used to refer to an artist so it may not necessarily match the real name but maybe it can be a pseudonym or whatever. But it's a name for the artist and the intention was to show the case where we potentially are extending existing ontologies like I've done here but I'm using my own which is specific of the art domain. Well this has has been defined in schema.org. In Prodge, it can be exported, and that's what I was showing. But just quickly, Jesus, it makes a lot of sense because, I mean, obviously, if you're doing this in a business environment, you probably are not importing artists, or maybe you are, but if you're not, if you don't, then think about it. I mean, you could talk about product names and product could be the product name. Once it's final and released and ready for people to buy, it might have or probably very likely has a different name than when it was still in development. So, but still it would refer to that. So internal documents during development would refer to the code name and then afterwards the code name refers to the final release product. And maybe there's a version 1, or version 2, or version 3. And they would reference it. This is in no way available in any ontology, depending on your own business. Absolutely. Absolutely. And that's the idea. I know some people might say, hey, there's some, and there are some museum ontologies out there. I could have literally just picked one, how we refer to things. And I still may map it to existing ontologies, but I want to use my own vocabulary, my own ontology. So that's what I'm showing in these examples. So that's the idea. And the great thing about ontologies is they have formal ways of have them serialized. And I'm using Turtle here. So this is just one possible serialization of what I've just shown you in the application. So you see here the artwork known as, where you have data property, the comment that I showed, the domain, the range, the fact that it's a property of name, et cetera, et cetera. So all the things that I've shown you through the application, you have it here in a machine-readable format in a kind of a well-structured form. So that's the ontology that I'm going to pass my LLM as a way of saying align whatever you extract to this ontology. And that's exactly what we saw briefly in the prompt. So we're passing the ontology as a lump of text and actually i'm going to print it so that it's more more readable not more readable but more more more visible what's going on and then the rest is just the the content itself the text so that's what's happening and the text like i said is just reading this this text file that I showed. And ultimately, what we do, and let me, I'm going to, maybe I'm going to print the prompt just to make it more, print the prompt. Print the prompt. There you go. And I'm going to run it because, and well, this is going to generate some cipher. And the cipher, I could just copy it and manipulate it. Or I can just have it run directly on my database. And that's what I've done. So I have my local database running here on my laptop. There's an empty database. So you have my user and my password, triple Neo there. And super high security. So this is just a connection. I have a helper function. But this is kind of boilerplate code. So you have it here. But just does the connection. It instantiates the driver, creates a session, and then runs the Cypher that I pass it. So there's nothing special there. That's what I'm calling here when I say run Cypher. So there's nothing more than that. I'll share all that, like I said. And this is hopefully going to run on my local instance, which I have running here, and at the moment is an empty database. Right? So that's what we have. As you can see, this is my local Neo4j instance running. So I'm going to run this code, right? And I'm going to do it directly here. Boom. And you should see, let me run. And first of all, we see, oh, there's a bunch of, let me see, the last relationships. No, this is here. Oh, no such file. It's, I must have, Hockney, Mr. and Mrs. Did I not write the name correctly? So it's not finding the, it's reading the file from, oh name correctly so it's not finding them it's reading the file from oh of course it needs the prefix because it's in the content directory right so content save and rerun and here we go that looks better so let's there's a lot of uh output let me show that to you because I remember that I printed the whole prompt, right? So you see that this is the text that I show you, given the ontology below, blah, blah, blah. I said that that's hard-coded and this is the ontology. So I pass the ontology and this is literally the text in the file that I showed you a minute ago, the art ontology. All this to say that I'm passing a minute ago, the art ontology. All this to say that I'm passing the ontology serialized as RDF. So I hope, I expect, and that is the case, that the LLM is going to be able to decipher that and make sense of this total serialization of my ontology. And then I pass the content, and the content is just the text in this file that I showed you a minute ago. Okay, now that's the output so this is the cipher that it's generated so it's extracted an artist an artwork and and the first thing that we realize and i'm going to show that to you in the graph because there you go we have the instances now created here so i can uh show it and we're going to explore it visually in a minute. But if we look at the cipher first, before I get into this nice representation, what we see is that, first of all, it seems to be following the terminology that I described, right? So it's calling things artist, it's using the professional name, remember? So it's not using name or any other term. So it's calling things artist. It's using the professional name, remember? So it's not using name or any other term. So it's being pretty strict at that. It's calling this artwork, artwork known as. So that's pretty good. Then the relationships are also pretty strict. There's was created by that connects. We're going to see in a minute, but it's connecting artist to art, artwork. So that's looking good. Then there's a representation, represents relationship that connects. So it looks pretty correct, right, I would say, because all the terms are the ones defined in my ontology, and if I explore the graph visually, we see that it looks exactly like it should, right? So there's David Hockney at the center and we see that he's... Oh, that's very interesting. So that should go the other way around. That's not being that strict, is it? The direction is... Yeah, I see. The direction is not being treated properly. So that's a negative point. That doesn't mean it's going to happen like that, but let's continue the experiment so but the represents looks good so mr clark works with david hofney that's good he's represented in the picture uh represented and they work together well anyway that that looks pretty good with the exception of the um there's the influence. It would be... so it looks good from the terminology, from the vocabulary point of view. I don't like this was created by because the relationship should be the other one. I mean we saw in the ontology that was created by... let me show it to you here... was created by connects an artwork to the artist and not the other way around. So, the domain is artwork and the range is artist. So, that's inverted. Anyway, that's an error. But basically, what we would do is now maybe we can... So, that shows that we have achieved what we wanted to do. We have controlled the generation of the LLM by passing an ontology with some imprecisions. But we can now take, for example, the other file, the image of Gregory, which is another of the files. I'm going to just replace. I'm doing this a bit manually but you see that we could iterate over the files in the in the directory and that would work as well but if I have rerun this it's going to and actually I can remove the print of the prompt because that's not that useful anymore but uh the next iteration is has generated something similar. I mean, the first thing, and here we see the problem, right? Before, remember how the previous generation created an artist with a professional name David Hockney. Now, this second iteration has created an artist with a professional name Hockney. So they're, of course, not going to link. So that's a manifestation of the problem that we were mentioning before. We don't have global unique identifiers. And that's what we're going to address next episode. But, you know, that's an unsolved problem, let's say here. But the rest, it's all looking pretty good, so good structure so good um uh labels good relationship we could have so sorry we could have maybe solved this by using by not using merge yeah we i mean we could use create for the but but uh but then we would need to do a post process indicating hey hokney and david hokney are the same so we need to merge them together or or we could do a post process indicating hey Hockney and David Hockney are the same so we need to merge them together or we could do a merge on some really unique identifier like some global URI like let's say for example the wikidata URI or whatever we want to call it and again let's translate that to our our our enterprise environment maybe we have a global unique identifier for things maybe we don't so in some cases we will not be able to do it in this first pass and we have to inevitably do some post-processing but first thing i notice look at that now the was created by is now used correctly right so there's a bit a bit of unpredictability yeah you're right but here it goes from p um no it's still wrong it's still wrong. It's still wrong. So it's artwork. It goes from P3's artwork. Artwork to artist, right? So that's good. Oh, yeah, you're right. Yes, yeah. So if we look at the graph now and we extend it, so normally this would have, if we can, so we have two subgroups of these not merging properly on David Hockney and Hockney but uh the second was created by is in the good direction so it goes from the artwork to the artist whereas here uh it went in the wrong direction so still we see that it's a bit of a rebel right so we're telling it to strictly follow our instructions and still. So this is the kind of thing that we would detect using the shapes that we introduced in the last episode. So you can still add the constraints in the form of shapes and have a validation that would tell you, hey, this has been generated in the wrong direction, right? So the domain and the range, the direction of the arrow should be a different one, right? So there you go. So that is, I mean, one thing that this, well, we've not identified the problem of the unique identifier, right? That's a known thing. But in terms of the consistency of the graph, that's very, very good, I would say. So if we look at that, we see that we have a very consistent set of labels. So we have artist, artwork, person, and we have influencers represent exactly the elements that I define in my ontology. So that's perfect, I would say. So the only unsolved problem is the identifier, the merging of the nodes that represent the same entities. Now, there's another thing that some people might be asking themselves, OK, that looks good. But what if I cannot or don't know how? Or is there a way to pass this information that's captured by the ontology in another way because maybe i've not been uh trained on on on on on owl the ontology languages etc etc or to use a tool and the answer is yes and we get we're gonna let me empty the graph and we're going to show what that looks like so now we have another empty graph and we go back to the code and and um and this is going to show what that looks like. So now we have another empty graph, and we go back to the code. And this is going to show us the following. So you might have noticed that I commented this line, where I said, OK, I can get a natural language representation of the ontology. I mean, here I'm going to generate a natural language representation of the elementsology or you know i mean i'm here i'm going to generate a a natural language representation of the elements in my ontology but this is something that you could type yourself let me show what i mean by that so if i if i do that and i reprint the prompt uh let's save that and run it and see what this i mean this is calling get natural representation of the ontology is in the utils package and it's just I'll briefly comment on it but it uses the rdf lived to iterate over the content in the ontology in order to present it in the following way let me run it and you'll see what I mean so if I run again on the same file with the same ontology but now I'm passing the LLM and ontology in a slightly different way. And let me show that. Here we go. So that's the ontology. So now it's not a total representation of the ontology, but it's a description. It's something that I could type manually. I'm not an expert, but I can tell, well, in my ontology, there's a bunch of categories. There's an artist, which is the creator of an artwork. There's an artwork that describes that. There are some attributes, like artwork known as. This is an attribute that applies to entities of this type, and it represents the da da da da da. So it's an entirely textual representation of exactly the same thing. Yeah. So in this case, I've generated from the ontology. So that's programmatically generated. But let's say you're not comfortable taking Web Protégé or an ontology modeling tool, but you could describe that because anyone can do that, right? So you're kind of using natural language to describe the structure of your graph. And the same with relationships. Influences. Influences is a relationship that connects entities of type artists to entities of type artists. It connects to artists, blah, blah, blah. So entirely text. And what's interesting is that the cipher generated is perfectly valid and perfectly well aligned with the ontology that we've passed. So it can be equally strict. And if we look at the graph, now we have something, if not identical, very, very similar. Very similar, yes. And in this case, look at that. The direction was correct. I'm not saying that this is a better approach. There seems to be an element of randomness there. Yeah, yeah. But we see that the result is identical, right? So first thing is, yes, large language models seem to understand decently RDF languages like OWL that we're using, but don't let that be a blocker. I mean, you can define your ontology in many ways. I mean, it's true that there's advantages to the definition. It's because you can do a number of things. They're machine readable and everything. But if you can't, I mean, don't true that there's advantages to the definition. It's because you can do a number of things. They're machine readable and everything. But if you can't, I mean, don't let that stop you. I mean, you can pass this definition in a perfectly, as long as you are not contradictory and pretty clear, the LLM will do a good job of making sense of this ontology definition. And I could, just like we did before, maybe we can try our luck and and say, well, was this file that we were looking at? So, maybe this time in the second pass, he will name Hockney in the same way, maybe not. So, we cannot rely on that, right? So, that's the kind of thing that we need to build a more solid solution for. And let's see, I mean, now we can look at that. No, still not making things. Same thing. So we see that we still have the problem, Hogni, Hogni. So if these two are the same entity, we have to introduce some way of either teaching, training the LLM to detect that, which might be a relatively complicated thing. We will try, I mean, don't get me wrong. You can also do some post-processing, right? But yeah and and what i want to explore and and to what extent that can be that's not the kind of thing that you can easily pass in the context and and and that's before i ask this question let me take a pause because i see there's some conversation in the chat anything that we should address before i kind of get into the conclusion or where we're going to go next? So I see one question that hints to a little bit of what we talked about in the problem with the duplication here in the entry is if entity resolution could not solve that problem. That's a suggestion from Darwin. I think that's something you already mentioned that we might look at in another episode. So yeah. And then another question more generally from Eshwar. If we merged a graph of any kind, really, I guess, with the direction being wrong of relationships, is there a way to change that? Or what would you recommend if you'd say i imported a lot of data and then i realized all my direction of of of this relationship is is going in the wrong direction yeah well that that i think we in that case i would recommend because that part we covered in the in the previous episode so i would not try to detect it visually because let's i mean i'm running this in a simple example with two or three files but the idea is that you're going to run these at scale and there's no chance that you're going to be able to pick up all the violations for that you have ways of encoding what you know the expected shape of your graph and and there's a way of formalizing I expect that relationship to go from an artwork to an artist and not the other way around. And we show that using this idea of shapes, shackle, and you can just formalize it and have the graph, have the code that we shared, detecting any violation of that. And like we did in the previous episode, you can either remove them, you can invert them, you can flag them, you can basically take control of what you want to do with them because maybe there's different people involved. All you want to do is flag them you can basically you take control of what you want to do with them because maybe there's different people involved all you want to do is flag them mark them as incorrect and then someone else takes care of it maybe some human curate it maybe you want to automate it maybe you want to remove them so keep in mind that the moment that's in neo4j you have full control over what you want to do is data in the database and it's up to you what you want to do to rectify that issue so we have mechanisms to detect it and to ultimately automate the correction so yeah that's I see someone asking what are shapes yeah so exactly a formal way of defining you know the expected structure of the graph so there's a language that's used to, I mean, you can do it in many ways. You can use it, you can do it with the cipher, but we've shown how to use Shackle, which is another order, sorry, W3C standard, and that's supported by Neo Semantics. And yeah, I invite you to look at a bunch of episodes, because that's something that we've shown at least two or three times. You showed it a couple of times, yeah, and it's a really good way to, like you just said, you define how a pattern or a certain, yeah, think of it as a pattern. You define that. You think this is what you want to shape. It's the same as a pattern, I guess. And that's why we said here, if we had defined this shape or this pattern, that artist was created by, sorry, artifact was created by artist, and this is the direction it follows, so from artifact to artist and not the other way around, then this would have been corrected or flagged. Exactly. So, yeah, two open questions that we're going to address in the future episodes. So the first one has been clear, right? The unique identifier. So that's something that... and connected to that is what happens when the ontology grows. I mean, we've shown a relatively small ontology with a very small terminology around art, and we're passing the ontology in every single request. So basically we're telling the LLM, take this text, take this ontology and extract structured information from that text according to this ontology. Now, what if my ontology is schema or has a couple of thousand types of entities, but there are bigger ones. So it gets to a certain size where it's too expensive, too heavy to pass a whole ontology in every single request. So we have to look at that problem as well, because normally, most of the times, you will be working on reduced domains, and this approach will work. But in some cases, you won't. So you will have to train, I mean, to fine tune your model. And I want to explore, and that's one of the things I want to test, how some fine tuning can build up all that knowledge in VLLM so that then each request is already pre-trained and is going to behave as if it was already familiar with my my ontology and the same applies to the unique identifiers because if i go now and i tell the llm by the way try to attach to every entity that you extract something generic like the wiki data id you will go ahead and do it but it will hallucinate and we've seen that many times so as much as it's strained on all the data it's not capable of or it does in a hallucinating way and i've tested that and you might get huckney with the with the uri that points at the the county of of of of half a share right so if so if the llm could consistently attach a unique identifier to things, then we would have solved the problem of the duplicating. So that's another thing that we're going to be exploring, because ontologies sometimes are extended with reference data in what's called named entities, named individuals. So it's something like saying, I define the notion of a country. And because there's a well-defined list of countries, don't refer to Spain as Spain. Use this code, which is my unique identifier of Spain. It's like I'm giving you the reference data, and that helps consistency and then integration. And the same applies to basically any domain. So we could have a unique identifier for every painter, but this is too much data to pass in a single context. So we have to pass it to the LLM through some form of fine tuning training, and that's what we're going to explore in future episodes. But I hope today at least we have covered how we can be very strict, specific about the vocabulary that we want to use down to the to the property to the relationship which is something that we could not achieve uh last uh last time through the through the um the knowledge graph builder application so so that's what i'll probably stop and uh and i don't know if there's any questions we can take them and if not we'll um yeah super yeah cool uh thank you very much jesus uh i don't see any any new questions coming in so but that sounds like a very very interesting uh setup for the the upcoming episodes um and we have you know a variety of things to look at and I think that's topics that people are interested in, you know, entity resolutions, different ways of fine tuning, maybe even, I think we touched this a little bit, but even agents, AI agents, I could imagine that could be something. We have like an ontology agent sort of that's like working beside your prompt agent, and then you have a data modeling agent and then they work together. I think we covered this before. So you have a variety of options, a variety of ways to do it, to get to the goal of having that database in the end that makes sense and it connects your data correctly. So yeah, thank you very much, Jesus, again. I think that concludes episode number two. We will be back in November for episode number three. As always, the code will be in the GitHub repository. Let us know if you liked the episodes by giving it a thumbs up. Start the code on GitHub in the github repository let us know if you if you like the episodes by giving it a thumbs up uh start the code on github so that also helps so we know that you are doing it if you're using it if you are using it if you are doing anything cool if you have any ideas for future episodes for content let us know ideally in the youtube comments of you know this video or any other video of going meta and we can uh we can have a look at that and think about if that makes sense and uh yeah share it with your friends your family your colleagues and uh and let everybody know uh that this is uh is happening and taking place excellent well thank you everyone and thanks alex i hope you have a nice day in marrakesh i don't know how long you're going to stay there but enjoy and uh and see you next time from wherever you are i don't know what are the plans for for next i don't know either no uh yeah i'll see you uh i'll be i'll be back at home in november for sure no i'll stay here until Friday. So it's a short trip. All right. Take care, everybody. Thank you for watching. Thank you again, Jesus, for the presentation and the demo. And see you around either next month or this Thursday for the Road to Nodes app development workshop or on the 8th of October for career development with Gen AI. So a couple of options. Until then, take care, everybody. See you soon and have a good rest of your day, rest of your week. And, you know. | Going Meta - S02 Ep02: Using Ontologies to Guide Knowledge Graph Creation Part 2 | 3,293 | Neo4j | 20241002 | Season 02 Episode 02 of Going Meta – a Series on Semantics, Knowledge Graphs and All Things AI
Topic: Using Ontologies to Guide Knowledge Graph Creation from Unstructured Data Part 2
Jesús Barrasa: https://twitter.com/BarrasaDV
Repository: https://github.com/jbarrasa/goingmeta
Knowledge Graph Book: https://neo4j.com/knowledge-graphs-practitioners-guide/
Previous Episodes: https://neo4j.com/video/going-meta-a-series-on-graphs-semantics-and-knowledge/
LLM Knowledge Graph Builder: https://neo4j.com/labs/genai-ecosystem/llm-graph-builder/
Tate Modern: https://www.tate.org.uk/
#graphdatabase #neo4j #graphrag #knowledgegraphs #ontology #data | 2024-10-11T11:18:56.026686 |
https://www.youtube.com/watch?v=ib-wTAvCZqg | Good to be here. Good to see a fair number of people. It's early, so I wasn't sure if anyone would come, but thank you for coming. One quick note, I just put my slides on Twitter. I wasn't sure the best way to access everyone. I'm at rlancemartin. If there's another way I can get everyone's slides, then, yeah, I see some people opening. So the slides will link to a few different, it will link to a collab and other notebooks I'm providing, so all the code will be available for you. Okay, good, I see people finding it, that's fantastic. Colab and other notebooks I'm providing so all the code will be available for you Okay good as I see people finding it that's fantastic Maybe if there's a better way let me know but figure this is somewhat easy Well, it's great to be here. We have a bit of time So I think the format is I'll like lay out some slides to kind of set the preliminaries and like kind of give the big picture And then there'll be a bunch of time where I can just walk around talk to people and because we have I guess three hours so I think the idea of this will be a hands-on workshop I provide a bunch of starter code and also one of my slides shows it will be kind of a choose your own adventure format so why don't I kick it off I think maybe one or two people still coming in but so the the theme here is building and testing reliable agents and let me go to slideshow mode here and maybe I'll just kind of start with like the very basics you know LLM applications follow a general control flow of some sort you start usually to user input there's some set of steps and then you end and you've heard a lot about chains, you know, when you build applications, oftentimes we talk about this idea of chains. And chain is just basically, you know, it is some control flow set by the developer. Again, you start, proceed through some steps and you end. So retrieval augmented generation is a super popular application many, some of you may be familiar with, basically refers to retrieving documents from an index and passing them to an LLM. This is a good example of a chain. It's a control flow set by a user. The questions you know provided, vector store retrieves them, pass to an LLM, LLM produces an answer. So this is kind of a classic chain. Now when you get into agents there's a lot of different confusing interpretations. What is an agent? Here's the way I might think about it, which is just a really simple kind of framing is agent is just when the control flow set by an LLM. And so you can imagine we talked about this process of you start your app, step one, step two. In this case, I have an LLM in there. An LLM looks at the output of step one, makes the decision, do I go back and do I proceed? So that's like the simple way to think about an agent. So again, chains, developer defined control flow, I set it ahead of time, I follow some set of steps every time. An agent, an LLM kind of determines the control flow, an LLM makes a decision about where to go inside my application. That's one simple way to think about it. Now you hear about function calling a lot, and this is kind of a confusing topic, so I want to talk through it kind of carefully. Agents typically use function calling to kind of determine what step to go to. So usually the way this works is what you do is you basically give the LLM awareness of some number of tools or steps that it can take So in my little example here I define this tool and this little decorators a line chain thing But the point is I have some step it's some function I'm defining it as a tool and I'm what I'm binding it to the LLM So then LLM has awareness of this tool and here's the key point When it sees an input like what is the output of step two it actually produces the payload needed to run that tool now this is often confusing remember LMS are just string to string they don't have the ability to magically call some function what they can do is produce the payload or arguments and needed to run that function and the function name so really think about tool calling or function calling is just an LLM producing a structured output. Still, you know, obviously a string, but it's a structured output that then can be used to call a tool. So that's all the function calling is. And you might have heard of React agents. So the way to think about this is it's just basically binding some set of tools to my LLM. And again, we talked about tool calling. So LLM makes decisions about what step or what tool to use and you have some node that will call that tool so LLM says okay run step one I have some node that runs step one and passes the output of step one back to my agent react typically stands for like basically action so the LM choose the action a tool is a tool is run it observes the output that's what goes back to the agent, observe that tool response, thinks about what to do next, maybe runs another tool, and this runs in a loop until you end, and usually the end condition is the LLM just outputs a string response, not a tool call. So this is the way to think about a classic React agent. And it's really flexible, that's the nice thing about a react agent so basically it can implement many different control flows it can do step one only step two one two two one that's the beauty of these open-ended style react agents and these have a lot of promise these kind of flexible tool calling agents were really hyped last year they're still really hyped it's really exciting because they're flexible they're open-ended you can give them a task give them some tools and can just execute arbitrary control flows given those tools to solve open-ended problems the catch is and this is kind of the crux of what we're getting to with this workshop is they do have poor reliability or they can so you can get caught if you've played with agents so as you can see they kind of get caught on one step and they keep calling the same tool and you know really this is often caused by l and non-determinism lms are not deterministic and also errors in tool calling so tool calling is kind of tricky if you think about it lm has to basically pick the right tool given the input and has to pick the right payload so it has to produce the right inputs need to run the tool and these both can break so here's a good example the tool I'm passing is step 2 and the LLM is saying the tool name to run in step 3 so that's obviously wrong or I'm passing what is step 2 of the input 3 and the LM says okay pass 4 so these both these errors can happen tool calling is a tricky thing and it's exacerbated if you pass an LLM five tools ten tools it actually gets worse if you have very long dialogues it gets worse and so this idea of open-ended tool calling agents is really promising it's really exciting but it's really challenging because of these issues that we were mentioning here so this is kind of the build-up here like so can we envision something in the middle? So again, we talked about chains. They are not flexible, but they're very reliable. This chain will always run step one, two in order. We talked about like React agents on the other extreme. They're extremely flexible. They can run any sequence of tool calls. You know, can run step one, two, step one only, two only, two, one. But they do have reliability issues. So can we imagine something in the middle that's both flexible and reliable? So here's kind of the setup, and it's kind of the intuition. A lot of times in many applications, you have some idea of what you want the thing to do every time. So some parts of the application may be fixed. Like the developer can set, OK, I always want to run step one, and I want to end with step two. And you can inject an LLM in certain places that you want there to be some kind of branching or kind of optionality in the control flow Okay, so this is the motivation for what we call lang graph so lang graph is basically a library from the language team that can be used to express control flows as graphs and It is a very general tool. And I put out a bunch of videos on it. And we're going to use it today. And by the end of this, you will all have an agent that runs reliably using line graph, hopefully. And we'll see. So you should test me on that. If things don't work for you, then we'll work it out. But the idea is kind of this. This graph has some set of nodes and edges. So nodes you can think about are basically, well, maybe I should start with this. This graph has something called state. So it's like short-term memory that lives across the lifetime of this graph that contains things you want to operate on. Nodes modify the state in some way. So basically each node can like call a tool and can modify the state edges just make decisions about what node to go do next okay so you basically have this idea of memory and this is the same as common agents right majors are characterized by having tool calling and in short-term memory as well as planning those same things are present in line graph memory is this state that lives across your graph tools exist within within your nodes, and planning, basically, you can incorporate LLM dictated decision making in the edges of your graph. So like why is this interesting and where has this been cropping it? We've actually been seeing this theme crop up a lot of places. So there's a really interesting paper. There's actually a few I really like. This one's called corrective rag. And the idea is pretty simple. Like with a naive rag pipeline, you're doing retrieval, you're taking retrieved docs and generating your answer. Corrective rag is doing one step more, saying why don't we reflect on the docs we retrieved and ask are they actually relevant? You can have lots of issues with retrieval, you can reflect on the documents, see if they're relevant, if they're not relevant you can do different things, you can kick out and do a web search. So it makes your application a lot more dynamic to poor quality retrieval So this is one of the first videos I put on land graph back in February It was very popular and I basically showed you can build corrective rag inside land graph and it's super simple This is what the graph looks like I do retrieval I grade my documents and We're gonna actually we're gonna do this today and I have a bunch of code for you that does exactly this So we're gonna go way in detail on this one but this is kind of set up and I showed this working I showed it works locally with Olama using in fact at that time it was Mistral 7b and it works really well so this is like one simple illustration how you can use Lang graph to build kind of a self reflective or corrective rag application now another cool paper was called self rag, which actually looked at the generation. So basically we're all familiar with the idea of hallucinations, it's a real problem. Instead of just allowing hallucinations to propagate to the user, you can actually reflect on the answer relative to the documents and catch hallucinations. If there's hallucinations, you can basically do different things and they propose a few ideas here. implemented this and this is a this is actually our most popular video of all time so this was showing Lang graph and Lama 3 implementing three different things corrective rag which we just talked about the self rag thing of hallucination checking and this adaptive rag thing so I can kind of walk through it this all runs in Lang graph locally and I have the notebook here if you want to test that today you definitely could so that's the point is reliable enough to run this whole thing locally so what's happening here is I take a question I route it either to my index or to web search I then retrieve documents I grade them for relevance if any are not relevant I kick out and do a web search to supplement my retrieval if they're relevant I generate my answer I check it for hallucinations and then I finally check it for answer relevance so basically does it have hallucinations relative to my documents and does it answer my question if all that passes I go I finish and return that to the user so this is kind of like a complex rag flow but with lang graph you can actually run this on your laptop it is reliable enough to run a laptop with lang graph and the intuition again is that you're constraining the control flow you're allowing the lm to make certain decisions but at very discrete points if you implement this as a rag agent this could be very open-ended and a lot of opportunities for breakage and so that's the real intuition here um now a final theme is karpathy kind of mentioned this idea of flow engineering related to this alpha codium paper a really nice paper on code generation and the intuition here is produce a code solution they tested this on a bunch of coding challenge abuse a code solution and check it against the number of unit tests auto-generated or pre-existing and Basically if it fails unit tests feedback those error cell and try, really simple idea. I implemented this in LangGraph, again, the code is here. And this works really well. So I basically, I share a blog post as well. I ran this on our internal coding. We have an internal application for RAG at LangChain. And we're actually working on implementing this right now in production because the performance is way better. And a common thing this can fix with code generation and code solutions is hallucinations and imports so we see that a lot with our RAG app so what I did was I very simply implemented a unit test for import checks just run that it significantly improves performance relative to without doing it and so we're actually working on implementing this in our internal rag system so super simple idea that can really improve code generation so you know if I kind of like back up what what did we talk about and we talked about chains they are not flexible which is fine in some cases but a lot of interesting newer papers with rag for example this idea of self reflection is really beneficial the ability to kind of self-correct applications can be really beneficial not beyond RAG as well for coding. So chains are very reliable but they're not flexible. Now if you go to the other end like a classic react agent is very flexible. It can implement any sequence control flows through your different tools but it does have reliability problems due to things we talked about non-determinism, tool calling errors and line graph kind of sits in the middle where you can actually implement these user-defined slash LLM gated control flows and they can actually be extremely reliable because of that constraint they are less flexible than a classic react agent so that is true so for very open-ended tasks I do agree maybe you is true. So for very open-ended tasks, I do agree, maybe you do need a very open-ended, more autonomous style React agent, but for a lot of applications that our customers are working on and seeing, these kinds of hybrid flows are sufficient and you gain a lot of reliability. And so we've talked to a lot of companies that have implemented LandGraph successfully for agentic flows for this reason. Because reliability is just incredibly important in a production setting. So this gets into, if you look at the slides, I have a few different notebooks. And what I show is, we talked about corrective rag. These notebooks show how to build corrective rag yourself. And I thought that's a fun starting application. It's a really popular one. It's super simple. There are not many dependencies. You can use your own web, whatever tool you want to use your web search. You can use other things as well. You should have a look at the notebooks, and I'll kind of walk around. And we're going to keep going. This is just like a placeholder here. But so if you want to test this locally, if you have a laptop this locally people laptop cable over anything's locally then we have a notebook to support that I use a llama I can talk a lot about that that's a really cool thing if you don't then I have two options for you so one is a collab so that's probably the easiest if there's issues let me know if I've tested it so if if you have a Google account, you can spin up a colab. All I need is a few API keys depending on what models you want to use. It's all kind of there. You can set those accordingly. And I also have a notebook. So this just kind of is like a kind of a gives you a roadmap of the different things you can try today. Since this is a workshop format. And I'll just be walking around and we'll do questions for a while. But I want to talk about the second half of this of this you know story so one of the things we're seeing a lot I think you're gonna hear a lot at this conference is the challenge of testing and evaluation and this is a real pain point like for example how do I actually know that my land graph agent is more reliable than the react agent how do I know what LLM to use how do I actually know that my LangGraph agent is more reliable than the React agent? How do I know what LLM to use? How do I know what prompt to use? So testing agents is, testing in general is really hard and agents in particular is challenging. So there's kind of three types of testing loops I like to think about. One is this in-app error correction. And that's actually what we just talked about. So LangGraph agents are really good for that. So basically, in-app error handling, where you can catch and fix errors is really useful for code generation, for RAG, we just talked about that. So that's like placeholder one. Now we get into this idea of pre-production testing. And then finally production monitoring, and I want to introduce a few ideas on the latter two. So we just talked through this here we're going to build corrective rag a few different ways and I just showed the choose your own adventure stuff and so this is just kind of reiterating that but I want to show you some other things so Langsmith is a tool from the Langchain team that supports testing and evaluation as well as monitoring and tracing and so we've seen a lot of interest in this and it's quite popular. It is really useful for doing these types of testing and evaluations. So the key idea behind Langsmith and the notebooks actually have this so this is totally optional if you just want to build an agent that's totally fine. If you want to also test it you can you can use Langsmith you don't have to of course but I have it all set up to use Langsmith if you want. It's free to use, of course. And so the idea is there's kind of four components that I like to think about when it comes to testing slash evaluation. You have some data sets. That's some set of examples you want to test on. So say you have a RAG app. That's like a set of ground truth question and answer pairs you've built. Like you're testing your system. You have question and answer pairs that you know are correct. Can your system produce those answers? How many will actually get right? You have your application, that's your agent, that's your rag app, that's your code app, whatever that is, that's your application. Now the thing that's often the trickiest, you have this evaluator thing. And the notebooks show you in detail, but this evaluator is something as simple as a user defined function that can implement a few different things. You can think about using an LLM to actually judge your output. So in that case, let's take rag as an example My application produces an answer. I have a ground truth answer You can actually have an LLM look at those two answers jointly and reason is it correct? And this is often very effective or require some prompt engineering I have some nice templates in the notebooks to show you But this thing is very populous idea of LLM as judge evaluators is very interesting a lot of people actually you'll probably hear about this week it's a really good theme it's still kind of in development but that's like one placeholder to keep in mind so for one option for time testing is this idea of using LLMs themselves the other is building your own heuristic evaluator so a custom evaluator of some sort and actually the notebooks that I share have both and So we're actually gonna the notebooks actually show how to evaluate an agent specifically and there's a few different things You can look at with an agent. So one is you go to that far right in blue the end end performance So our notebooks are basically gonna be a a RAG agent. The eval set has five questions. And I basically have a set of question-answer pairs. So basically I'm going to compare my agent answers to reference answers. And we'll walk through that in the notebook. But that's kind of one thing. I just want to introduce the idea. So one big idea is you can evaluate the end-to-end performance of your agent. Right? You don't care anything about what's happening inside the end-to-end performance. The other, which I actually like to look at a lot, is this thing on top. What are actually the tool calls that the agent executed? This is how you can actually test the agent's reasoning. So what you see often with agents is they can make some weird trajectory of tool calls that's highly inefficient but still gets the right answer. You don't get that if you only look at the answer. You say, oh, okay, it's got the right answer, but if you look at the trajectory, it's some crazy path, and so you want to actually look at both, like how efficient, how correct is the trajectory, and does's got the right answer. But if you look at the trajectory, it's some crazy path. You want to look at both. How efficient, how correct is the trajectory and does it get the right answer. The notebooks I share actually do both. This is actually an evaluation that I ran. This data set is public. This is on the agents that we just talked about. This is what you see when you open Langsmith. These are different experiment names. This is just saying like I've run three replicates of each experiment. And these are my aggregate scores. So this first score is basically the answer correctness thing. And the second score is like the tool use trajectory, or like does it use the right reasoning trace. And I can go through my experiments. So this top one is actually, this is kind of cool, this is actually my local agent running on my laptop with LangGraph, okay. It's a five question, you've got a small eval set, just a bunch of, it's some very small test examples, but basically my local agent does not, does fine. It does 60% in terms of the ultimate answer, so that's not amazing, but it does do very well in terms of the tool calling trajectory so it's very reliable in terms of reasoning it's an 8 billion parameter model so I basically I think the quality of its outputs are a little bit lower than you might see with larger models now fire function v2 is another option it's basically a fine-tuned llama 70 B from fireworks this one with Lang graph so this is actually showing this top actually gets up to 80% so very strong performance in terms of answers and a hundred percent again in terms of tool calling so the key observation here is the tool calling or reasoning is consistent whether using a local model or 70 billion parameter model with Lang graph so you get very high consistency in your true calling the answer quality degrades that's more an llm capacity problem but the reasoning of the agent is consistent so that's the key point now here's where it gets interesting fire function v2 again that's llama 70b this is with uh a react agent what you can see here is the answer quality is degraded but here's the interesting thing the tool calling trajectories are really bad and this again gets back to that problem with react agents they're open-ended they can choose arbitrary sequences of tool calls and you can deviate really quickly from your expected trajectory so that's the key intuition here now the final two are gpd-4-0s that's obviously a very flagship model it's maybe number two now relative again sonnet on the chatbot arena at least you know again answers ultimately are are strong the tool calling though even here is degraded so basically it follows some weird trajectories to get to its answers that are unexpected so what's the high level point here the high level point is lang graph allows you to significantly constrain the control flow of your app and get higher liability and if you look at these tool calling scores it's very kind of consistent going all the way down to local models. It follows the same sequence every time. React agents kind of go off the rails much more easily. The answer performance is really a function of the model capacity. So using an 8 billion parameter model locally, the answer quality is lower than a 70 billion. That's to be expected. But the reasoning of my app is consistent and strong, so that's the key thing that you kind of get with LangGraph, and this is all public. And hopefully some of you will actually, you know, implement this or reproduce this today. And this is just walking through those same insights I just mentioned. And then deployment, we're going to be talking later this week, we have an announcement related to deployment of LangGraph, so this is actually a very good setup. If you're playing with LangGraph, you enjoy working with it, we're gonna have some really nice options for deploying later this week. And so Harrison will be here on Thursday to give a keynote on that one. And if you've deployed, we also have some really nice tools and lengths to actually monitor deployment. And this is not as relevant for this workshop. It's something just just be aware of. I can talk about if you're interested. So maybe to close out, so this is a really nice write up. These guys are actually going to give a keynote later this week. It's Jason and company, Hamel and others. And they kind of made a really nice point that the model is not the moat. Like, elements are always changing. The moat is really the systems you build around your application That's what we talked about today. Like do you have an orchestration framework for example like Lang graph? Do you have an evaluation chassis like Lang Smith and again? You don't have to use Lang graph. You don't have to use Lang Smith for these things But this workshop will introduce these ideas to you and frankly I think it's important to understand the ideas rather than the Implementation whether or not use Lang graph whether or not you use Langsmith I think understanding these these principles is still helpful but you know an evaluation chassis guard rails data fly well these are like the components that give you the ability to improve your app over time that's the that's really the big idea that's the goal and I think you'll hear more on that later this week the goal here is how are you measuring improvement of your app and ensuring it always gets better. That's what we're actually trying to achieve here and that's kind of what evaluation is giving you. And yeah this this is kind of my last slide then maybe we can just move into maybe some Q&A. I can actually show the notebooks themselves if you want to walk through them together. I mean I'll just do that I'll let you guys kind of hack on them in parallel as I walk through them and then I can just go walk around and talk to people something like that so you know the three types of feedback loops you have this design phase feedback something like line graph in app error handling that's kind of step one we talked about cool examples there for coding for rag a lot of nice papers really promising I'm very excited about anything you can do here with terms of kind of agentic self-correction self-reflection in your app itself pre-production testing we just talked through that building evaluation sets running evaluations testing for an agent like your tool use trajectory your answer quality all really interesting and important and And then in production phase, production monitoring, this gets into, we didn't talk about it too much, but basically this stuff. So basically you can have evaluators running on your app in production, looking at inputs, looking at outputs, tagging them accordingly, and then you can go back and look later. So that's kind of the setup here. I know that's probably a lot, and we went about half an hour. So if there's any questions, I can just open it up, and we can kind of talk through stuff. I can also start ripping through some of the notebooks just to kind of give you an overview of the code itself. But if there's any questions here, maybe I'm happy to take a few. I'll give you a minute to digest all that. Yeah? Is there a non-Twitter link to your slides yet? give you a minute to digest all that. Yeah? Is there a non-Twitter link to the slides yet? That's a good point. Let's see. Is there a non-Twitter link to the slides? Let's see if the conference organizers give me some. I don't know if I have an email list for everyone in here. Just post to the general channel. Is there a Slack? Yeah, if someone can put, I actually didn't know there's a an email list for everyone in here. Is there a Slack? I actually didn't know there's a Slack, so that's very helpful. If there's a Slack or an app for this conference, then please someone post. I appreciate that. I actually didn't know that. Thank you for that question. Yes, sir. So about testing and evaluation, does it really scale to predict the exact sequence of the agents? If it's smart enough dealing with a complex problem, it's hard to say exactly how it is achieved. Okay, this is a very good point. So I'm going to repeat the question now. How do you evaluate an agent's reasoning trajectory if it's a large and open-ended problem that can be solved in many different ways? In that particular case, you are right. It is hard to enumerate a specific trajectory that is actually reasonable. For really open-ended long running type problems Trajectory evaluation may not be appropriate It's one one thing I would think about is can you define a few canonical trajectories of tool use through your application so It depends on this that if it's a very long-running agent I think it's probably infeasible if it is a shorter run agent where it's like you expect something in the order of maybe five to ten steps you can probably enumerate some set of reasonable trajectories and basically check does it follow any of these. You can also do things so you can do things like this you can do things like check for the repeat of certain tool calls. You can be very flexible at this and it's kind of open and few so you can look for like is it repeating certain tool calls. can be very flexible at this and it's kind of open and you can look for like is it repeating certain tool calls you can look at recall like is it for sure calling this tool or not so you can actually be very flexible actually the way we set up Lang Lang Smith evaluator for this it's just a simple function that you can define yourself so it's a very good point you can be in credit you can be arbitrarily creative about how you evaluate that but I would say for very long running, you're right. You can't really articulate step one, two, and three. But I would then think about more like evaluating. Is it repeating steps? Can you evaluate for clearly aberrant behaviors, excessive number of tool use repeats, excessive number of overall tool calls, so kind of like guardrails related to like kind of clear aberrant behavior? It's very short-term you can actually enumerate the trajectory specifically that's a good question so in the code you can see we actually lay out a custom function we can define that yourself so that's a that's a very good point though yeah yep Can we transition the case to Human in the Loop? Yeah, Human in the Loop is a good one. The workshop notebooks I share do not have that. But LangGraph does have some good support for Human in the Loop. And I can share with you some notebooks that showcase that. Also, what we have shipping on Thursday has very good support for Human in the Loop. So I will share some notebooks with you for that and wait for Thursday for even more there. So if we're building a rack like application, right, pre-production, we can do this testing framework using a known set of this is a question, there's a deeper answer to the end, and I will present that. What I've been struggling with the drill is the right way to approach that in production where we don't know what the right answer is. And I'm wondering if you. Yeah. So the question was, this is actually a really good one. For RAG, I'm just going to go to our docs because I actually wrote a doc on this recently. So you can still see my slides. For RAG, in a pre-production setting, it's easy to define, or not even easy, but you can define a set of question-answer pairs and evaluate them. When you're in production, though, how do you evaluate your app because you don't have a ground truth answer? So what are other things you can actually evaluate for rag app that don't require a reference yeah so there is a conceptual guide that I will share so this is actually a rag section I've kind of a nice overview of this there's actually a few different things you can evaluate for rag that don't require a reference that are very useful yeah so it's this right here so this is like a typical rag flow so I have a question I retrieve documents I pass them to an LLM I get an answer right what we just talked about and we showed is comparing your answer to some reference answer now this, this is, to be honest, pretty hard to do. You have to build an eval set of question answer pairs. Very important to do, but it's not easy. So what else can you do? So some of these we've seen that are really easy and actually pretty popular, there's three different types of grading you can do that don't require a reference, that are like internal checks you can run. I mean, you can run them online. So one is retrieval grading so basically looking at the doc your retrieve documents relative to your question so like an internal consult consistency check so this is actually a great check to run and actually the corrective rag paper that or the corrective rag thing that is in the cookbooks that I share here does this so you can play with the prompt and all that but basically this is just checking the consistency you retrieve docs relative to your question you can do that and we have some really good prompts for that another one I like is just comparing your answer to your question have an LLM look at here's my answer here's the question is this sane are they related and this is a really nice check just for like you know of course you don't have a reference answer but like you can still sanity check and say does this deviate significantly from what the questioner is asking the other this is a great one is hallucination and this is this is super intuitive compare your answer to the retrieve documents so if the LLM went off the rails and didn't ground the answer properly and you hallucinated you can catch that really easily and so I need to get on this slack because I want to share this link with you I'll figure that out I'll find you but this is in our Langsmith Docs if you search Langsmith valuation concepts we have actually a bunch of videos that showcase how to do this and I have a bunch of code as well so but those are three things you can do that don't require a reference and we do run those in as online evaluation with our application so yeah yep yeah unit testing so um yeah okay so do we have any thoughts on unit testing? So Langsmith supports PyTests as unit tests, but basically, it depends what you mean by unit tests. Typically, like conventional software engineering unit tests are very effectively done in things like PyTests. There's a lot of frameworks for that. What I like to think about in unit testing with respect to LLM apps is kind of like what we show in this code generation example. Here we use some really simple unit tests just for imports and code execution. Simple unit tests like this can actually run in your app itself. So basically one place you can think about putting unit tests are actually in your app itself within LangGraph for self for kind of in-app error handling or self correction. So that's like one place for unit tests that's kind of interesting and new with LM applications. They can live inside your app itself. Another good one for unit tests within app is if you're doing structured output anywhere in your application, which is like a really common thing people like, confirm the schema is correct. That's another good use case for unit tests in your application which is like a really common thing people like confirm the schema is correct that's another good use case for unit tests within your application those also both of those things could also be done independently like in CI outside of your application so we are going to have more integration support for CI with Langsmith soon I will check with the team on that. But I think the interesting idea for unit tests with LLM applications is this idea of inline within your app itself, because LLMs are so good at self-correcting. If you run unit tests in your application, they can often catch the error and then correct themselves. And unit tests are fast and cheap to run, so it's actually a really nice piece of alpha that, in fact, that's exactly what Karpathy was mentioning here, that running unit tests in line with your application is actually really quite nice and produce significant improvement in performance and alpha codiums cool yep I forgot to say that you can use a small area of. OK. Yeah, yeah. So what the question was, if I wanted to do some of this in-app error correction stuff, so let's take this example, the corrective rag thing. If I actually want this to run in my application, it obviously needs to be super fast. So that's actually what we've seen. The tricks we've seen here are basically use very fast smaller LMS so you mentioned for example even the ability to fine-tune that's actually a good idea if you have a judging task that is very consistent it's a very good use case for fine-tuning actually fine-tune a small low capacity extremely fast and effectively very cheap to deploy model that's a very good idea we've seen people do that also use very simple grading criteria don't have some kind of arbitrary scale from zero to five with high cognitive load the LM has to think about yes no very simple binary grading even for some of the stuff you can even be old-school and fine-tune a classifier but basically really simple lightweight fast LLM as judge style classifier wouldn't be an LLM necessarily but but basically very simple fast test for in-app anything like kind of in runtime you will need or want another cool use case for this or another like interesting option for this is grok is very very, kind of with their LPU stuff. And they actually would be a very interesting option. We've done some work on that with Grok, basically, for any of these kind of in-app LMS judge error correction things using something like Grok, which is extremely fast. But it's a very good insight. Fine tuning your own model is actually a really good idea. We've seen people do that for these types of anything with LandGraph and in-app error correction cool yep yeah we have some good cookbooks talking about multi-agent which I, again, will need to find a way to share with you. Yep. So we have this. If you go to LangGraph GitHub, examples multi-agent, there's a few different notebooks here that are worth checking out. Yep. Is this LangGraph allowing you to find cyclical graphs? Yep. Yeah. Yeah. Yeah. So actually, LandGraph is specifically designed for cycles. So some of the examples, like what we're showing today is only a branch, so it's a simpler graph. But for example, the React agent we will show today is a cycle so it's basically gonna gonna continue in a loop just like this and what you do is you set a recursion limit in your line graph config so you basically tell it to only proceed for some number of cycles this is default set for you I believe it's like 20 or something like that but But that's what you're going to want to do. Yep. So the question was about timing of the responses. So do you mean like if you're implementing some kind of self-correction, how long that takes? Well, that gets back to the question that that this gentleman asked it depends a lot on the LM you choose to use for your judging and its latency so that's kind of where if you're so maybe there's two sides of this one side of it is choosing an LLM that's very very fast and that's very important to do could be something like grok could be a fine-tuned like deployment that you do yourself could be a gpd3.5 so that's like one side of it the other side of it is um how do you actually kind of monitor measure that and so again langsmith actually does have very good support for tracing and observability and we do have timings all in fact I can go ahead and show you very quickly if you want to see So this is my Langsmith dashboard. I'll zoom in a little bit. These are my experiments Now if I if I zoom in here, I can open up one The Wi-Fi is a little bit slow. These are my replicates I can open up one of my traces and what I can see here is over here I get the timing. So this is the timing of the entire graph and I can go through my steps This is like the retrieval is really fast. That's good. You know less than a second Okay, now here's what's interesting my grading in this particular case is like four seconds So that's you know not acceptable in a production setting most likely But again, this is just like a this is a test case in fact I'm using what I'm using to grade here of GPT 4.0 so it is You know there's ways you could speed this up by using different models or different prompts or grading everything in bulk So there's a lot of ideas. I should grade each document independently. Oh, actually, you know what this is using chat fireworks So anyway, but you can look at your timings in lengths, but that's a really nice way I'd like to do it to kind of see, to kind of monitor the timing of my applications. Yep. The case you showed, like, from scratch, but when the agent is running from the beginning and trying to accomplish the task, but in reality, usually people have some context or history that's being passed to an agent. Do you have any suggestions of how the task is going through? Because, say, they don't have to do a function call because that's already in the history versus't have to do a function call because that's already in the history versus like. Let me try. I got the question. So the question was with agents, you typically pass them a message history. That is absolutely true. And in fact, the React agent that we we implement here like I can even open up one of the traces we can look at it together so so here's a react agent with GPT 4.0 here's one of my traces let's open it up and actually see what's going on here so what happens is first my assistant is right here. So this is OpenAI. Will, you know, here's the system prompt, right? So your helpful assistant, you're answering questions. Here's the human question, right? So again, this is the start of our message history, okay? So what the, and also these are the tools that the LLM has. And this is pretty cool. We can see this is the one it called. So what happened is our LLM, it looked at, here's our system prompt, here's the human instruction, and it says, OK, I'm going to retrieve documents. Great. So then it goes, and this is a tool call. It goes and retrieves the documents. Now that goes, so you look here, we can actually open up the retriever. Here's our documents. Cool. Now it goes back to the assistant. So this is a looping thing actually open up the retriever. What do we get? Here's our documents cool Now I go it goes back to the assistant so back to this is a looping thing It started our assistant it made the tool call the tool ran we got documents now we get them back now Let's go back to our LLM. So now our LLM. This is pretty cool, right? Here's the message history like you were saying instructions question document retrieval the documents that are retrieved are right here. And then now the LLM says, okay, I want to grade them. It calls the grader tool. And this is its reasoning and this is its grade. So anyway, you are right that as this goes through, you basically accumulate a message history. And the LLM will use the message history and most recent instruction to reason about what tool to call next that's exactly how it works I think I answered the question is is there anything that isn't clear about that so it is true like let's look at an example right so in this particular case right the LLM sees the retrieved documents from the tool and then it makes the decisions is okay I have this tool response it's retrieved documents right the LLM sees the retrieve documents from the tool and then it makes a decision it says okay I have this tool response it's retrieved documents what should I do next and it says okay well why don't I go and grade them and it calls a grade tool so it looks at the message history and reasons about what tool it'll call and that's exactly how these react style agents work and the whole issue is that's kind of a noisy process like it can look at that whole trajectory it can get confused, it can call the wrong tool, then it's on the wrong track, and that's exactly why these more open-ended tool calling agents fail. Say like in the second follow-up question to the agent, that technically it should follow similar paths in calling, right?... Okay, right, so I think that the question is, well, let's say there's a multi-turn conversation where I can, the user can go ahead and ask a second question, of course, and that whole kind of message history will be propagated. yes that is that is a common pattern and that let's see I mean what's the question on that though like it it could use context from its initial trajectory to answer the second question for sure it'll probably look at that jointly when it's deciding what tool to call so for example if it receives a question and in its message history it sees the context need to answer the question the agent could probably decide okay I don't need to retrieve documents again I have the documents need to answer the question I'll answer that question directly so it is true that a multi-term conversation the agent can look at its message history to inform what to do next. That's definitely true Here I don't consider evaluation of multi-turn conversations, but that is it's a good topic actually I don't quite have a tutorial for that yet, but I Could think about putting that together. Yeah, that's a good point I'll make a note of that actually so multi turn is a good one cool okay yep yep because they reduce the hallucinations you can actually run within the app and so that's what have you experienced with the writing tests and then you're going to do that for an open and the full gen tests? So the question is, I believe, have I tested like the ability to kind of online auto generate unit tests yeah okay so that's a big topic so basically the Alpha Kodium paper that Karpathy references here does that I have not tested that because it does ramp up the complexity because then you're relying on and I don't think I mean that would be aggressive for a production setting because basically be relying on an LLM to auto generate you unit tests testing against things that are auto generated there's a lot of opportunity for error there I think it's interesting particularly in terms of like offline challenges like this but in terms of like a production application that feels pretty hard and risky, but it's an interesting theme. The thing I've tested more on I found to be very effective is super simple, crisp, lightweight, free, effectively unit tests. Like, again, the good use case I found was our, so Langchain, we have an internal rag application called chat Langchain and it indexes our documents and provides QA it occasionally hallucinates the imports and that's a really bad experience right if you take a code block copy it from this app and then it this import doesn't exist it's like what the hell you know that's really annoying so I incorporate a really simple check where I have a unit test that just, it does a function call where it extracts the imports from the code block from the answer and it tests the imports in isolation. If they don't exist, there's an error. I feed it back to LM and say, look, this isn't a real import. Try again. And I can do, you can do other tricks like then context stuff, relevant documents or something like that. Anyway, you can handle that differently. But that little alpha significantly improved our performance. So I kind of like simple, lightweight, free unit tests. The idea of online-generated unit tests is interesting but opens up a lot more surface area for errors. The idea is that, you know, if you think about how do you just like they oftentimes seem easier to write than the code and without a better structure. So we use that for, like, not reviewing the outputs of some things, but have the and then we do see the other simulation and then we get right to them going straighter and right to them before you actually do a better implementation. Yeah. Yeah, so the follow-up there was, let's see, trying to write the test before the app is implemented. I see, yes. So they also do that. So basically for each question, they do have they so the Alpha Kodium work references existing unit tests for a given question as well as auto generates so you are right that would be interesting to have a bunch of pre generated unit tests that you know are good for certain questions and to run them absolutely hard to do in a production setting with an open-ended input but potentially very useful in well okay even a production said you could maybe have some battery of unit tests and based upon the question type pull related unit tests that you know are gonna be relevant it's a good idea yeah it's a good idea for sure so basically using some battery of dynamically chosen pre-existing unit tests based on the question type or the documentation whatever documentation they're asking about that's a good idea yeah he was saying so for larger projects you can also use that to test for aggressions yes that is that's definitely a good idea and this paper it does incorporate that idea as well as this auto-generated unit test thing Which is a little bit more aggressive Cool yep for instance, LLM checking. There are some frameworks that have to do this with. It's added complexity. Yeah. It's added latency. I was wondering if you've seen those give more better results to justify the latency complexity or whether you've seen one checker, one reflection of this. Yep. So the question was, in some of these self-reflective applications, like let's say this one, the self-reflective applications like let's say this one the self-reflective rag Right. We're doing a few different checks here. We're checking documents. We're checking hallucinations. We're checking answer quality Instead of having some hard-coded single prompt To do that. Can you have another agent like kind of a checker or a greater agent that's a little bit more sophisticated I have and he mentioned a few frameworks I have not played with them I think it's interesting I think it's it's one of these things where it's really good for kind of academic papers it's very interesting it's really good it could be good for offline testing in an online or production setting or it's the latency and complexity is probably a bit much I think in a production setting it goes back to what I think this guy was referencing you probably would want something that's extremely fast lightweight and I would not think about like a multi-agent system in a production setting doing any kind of grading this whole idea of LLM as LLM graders look kind of a new idea so I think this idea of like more complex agent graders is interesting but we're kind of in the we're taking baby steps at this point especially in a production setting so I'd probably shy away from that for now particularly everything about production but first thing offline or experimentation it's probably interesting. Yeah. What's the best practice that you've seen so far for performing tasks or evaluations on those cycles? Because obviously you can end up with many, many different types of input and that sort of thing. And obviously LaneGraph provides some tracing there. But what about, hey, here's an interesting test case. Yeah, exactly. So actually the notebook shared here go into it a little bit. So the way I like to do it is, and I can even, why don't I just show you one of the notebooks. So basically, so at the bottom I have kind of all the different evaluations. So this goes back to a question that someone mentioned down here as well. So this is a, so there's a colab and then there's a notebook and they are both the same. So they have the same evaluation section. This is, this is ragag testing dot IPNB and there's also a colab which is it's the same notebook basically it's in the slides just make sure you got those links but but to your question so the way I did it and you can be very flexible this is basically I define the expected trajectory through the nodes in my graph and in this particular simple case the trajectories that I expect are basically retrieval grading web search generate or retrieval grade generate those are the two expected trajectories I want to take now in this case I don't do cycles if you did cycles you could just incorporate more steps here to say okay here's kind of the expected number of steps I want or expect to see through that cycle and I think someone mentioned here if you've a really open-ended challenging task then it may be hard to enumerate exactly but for a lot of like more production setting applications where these are not extremely long-running you can enumerate here's the steps I expect to take through the cycle and and the way I do the evaluator it's as simple as this there's not much code it's all in the notebooks but basically I compare the tool calls of the thing did to these trajectories that's it super simple so that's how I would think about it I keep it really simple I would basically try to enumerate here's the steps through the difference the cycle I wanted to take and yeah go ahead right yep that's it that's it that's it so it's actually really simple in in this is actually a good point in The land graph case for this custom agent. You'll see all I do is For every node and we need to get into the code if we want maybe people have already explored so Well, I'll just answer the question directly then we can back up and go through all the code if we want. But basically, each node in my graph, I just append this step name. So like retrieve, I just say retrieve documents. This is my generate node. I just append this thing generate answer. And I return that in my state as steps. So then I have this record of the steps I took in my graph that's it and I can go ahead and fetch that at eval time and compare it to what I expect that's all you're doing so that's how it work with like the custom line graph thing and now with the react agent actually it's a little easier because the react agent uses a message history so I can just go to my message history and that's exactly what I show here. You can strip out I guess I do it up above but basically I have this function in the notebook Let's see. Where is it? Yeah, it's fine tool calls react. So it's this little function that will Basically look at my message history and strip out all the tool calls so Yeah, it's a nice little nice little thing with a react agent it's really easy to get the tool calls out with Lang graph I just log them at every node and then at eval time I just can extract them yep Yep. So if you elaborate on the open-turn scenario, like there's context switching, where is it actually the cognitive ability that is a high-level count, you can't see it. And ultimately, it's going to treat, where is that cognitive ability? And then it's very valid point in the ground. Yeah. So the question was, with an agent in a multi-turn conversation setting, how does it know whether or not it has the answer? To a given question. Where to go, exactly. Yeah, that's right. So there's a couple different ways to kind of break this down so with these agents there's a few levels of instruction you give it first you give it an overall agent prompt so if I look at the notebook here we can go look at the react agent as an example of this. So the React agent is defined right here. This is kind of like the planning step. Here's my, like, naive prompt, okay? So your helpful assistant is tasked with answering tasks. Use the provided vector store to retrieve documents, grade them, and go on. Now let's take the case that's more complicated. Let's say I had two vector stores. So one thing I can do is I can explicitly put in the agent prompt, you have two vector stores, A, B. A has this, B has this. And then you're implicitly having the LLM giving it the ability to kind of reason out which one to use. Now this is where the second piece comes in. You also bind it to a set of tools. This is really where the second piece comes in you have to also you also bind it to a set of tools This is really where the decision-making comes in when you create this tool so here's retrieve documents, right this tool description is Captured by the agents the agent knows what's in this tool and this is really where that decision to use this Retriever tool versus another one would be made it'd be a combination of the prompt you give to the agent and or the tool description. So if you had two vector stores, you could basically say, like, retrieve documents one. This vector store contains information about x. Another one has information about y. Then the agent is deciding what tool to call based on that description and maybe based on its overall prompt. But to your point it's not easy so actually with custom agent this is with the react style agent with the land graph custom agent you can do it a little bit differently where I actually don't have it in this notebook but I have other cases where you can actually can build a router node and I mean I'll show you actually so this particular notebook so this one this self-rack it's in the slides if you open this up we did this with llama folks so this is actually a trick I really like if you go to if you go to lang graph rag agent local here see it's Wi-Fi is a little slow. What I define here is a very specific router at the start of my agent that decides where to send the query. And this is something I really like to do, because like we saw with the React thing, it has to kind of decide the right tool to use, which can be kind of noisy, versus right so here's my router right this is reliable not turn locally and what I do here is I run this at the start of my graph and I have the agent or yeah the agent explicitly take the question decide what to use and based on what decision it makes I can send it to either web search or in this case a vector store so to answer your question if I pull all the way back I Personally like to do explicit routing as a note at the start of my graph Because it's it's pretty clean and you can see in the flow of this overall graph this router runs first And it looks at my question and sends it to one of two places And this can be more complex you can send it to one of n places but as you wanted to here this is what like a custom line graph agent this is what I like to do if you're using a react agent it is then using a combination of the the tool definitions and the overall agent prompt to decide what tool to call but you can see it's more squishy because it has to call the right tool and as opposed to giving it a router and saying always run this router first it has to kind of make the right decision as to what tool to call based on the context question which is harder and that gets back to our overall story about these kind of Lang graph explicitly defined agents are more reliable because you can like lay out this routing step right up front and have it always execute that step before going forward. So this seems to be very deterministic. Everything's around. And what was exactly like, actually, a graph would say, do I have a history of the answer? Can I answer that? That's the thing that we did counted. Because then you have not only had the trees, or you all have to have the history. Can I answer the question? I'm just curious. How do you orchestrate that? OK, got it. So the question was, how do I incorporate the idea of routing with history? So here's what you do. It's actually, you know, kind of, it should be pretty straightforward. You can define this router node in your graph. And that router node, I'll actually go down to it. So basically, here's my graph and here's the the route question node, right? Yeah, actually in this particular case, it's it's an edge don't worry about those details Basically what you could do is you could have a node that takes in the state. Now that state could include your history. So what you could do is in that router prompt, you could really easily, here, include another placeholder variable for like your message history or something. And what you could say then is make a decision about where to go next based upon the question and based upon something in our history and so you actually would plum in your message history here and use it to jointly decide what to do next so that's actually really easily handled in line graph using a node and you can reference the history they can be passed you can pass into that node as state cool yep the notion of state yeah let's let's talk about state in a little bit more detail so let's actually go to the notebooks that we're working with here that I've shared so here's this ragged and testing notebook so if you go down to the custom land graph agent the way you do it is find the state yeah so here is what I call graph state so the graph state is basically something that lives across the lifetime of my graph I typically like to do something as simple as just a dictionary so basically this is a rag graph and here I'm basically going to find a number of attributes in my state that are relevant to what I want to do with rag a question A question, my answer generated, whether or not to run search, some documents, my step list. And basically, the idea here is that I define my state up front, and then at every node, I accept state as the input, and then I operate on it in some way, and right back out to state. So basically, what's happening is, I define state generally up front as like a dictionary or something like that. It's a placeholder for things I want to modify throughout my graph, throughout my agent. And every node just takes in state, does something and writes back out to state. That's really it. So basically it's a way you can think of it as a really simple mechanism to persist information across the lifetime of my agent and for this rag agent it's things that are really intuitive for rag it's like question it's documents and so let's take an example like okay here's a fun one so my great documents node right what I'm doing here is I'm taking in state and from my state it's just a right what I'm doing here is I'm taking in state and from my state It's just a dictionary. So I'm extracting my question. I'm extracting my documents And I'm appending my I'm appending a new step because I'm notifying. Hey, here's my new step and Basically, I'm doing some operation. I'm iterating through my documents. I'm grading each one the grades yes, I keep it if if the grade is no so yes no means like is it relevant or not basically so but yes it's relevant I keep it I put in the filter Docs list if it's not I've set the search flag the yes which means I'm gonna run web search because I want to supplement I have some docs that are irrelevant and I write back to state at the end. My filter docs, my question, search, and the steps. That's it. So state's a really convenient way to just pass information across my agent. And I like using a dictionary, just like nice and clean to manage, as opposed to a message history, which is like a little more confusing. Like in any node, if you use a message history, it's like a stack of messages so if you want the question you have to like it's like usually the first message you need to just index it's just kind of ugly I like using a dick which is like I just get the question out as a as a key in my dictionary cool let's see we're about an hour in. I can also let people just hack and walk around, talk to people, stay up here, whatever is best. And you keep asking questions if you want to. I think people are just working, doing their own thing now anyway. So I might ask one question just for fun. Is anyone interested in local agents? We didn't talk about that too much. It's a big theme. Yeah. So I shared a notebook for that. And by default, I am using Llama 3. You can absolutely test other models. So this is set up to test just Llama 3 with a Llama. Try other things. I have a M2 32 gig, so 8B runs fine for me. If you sling bigger, you could actually bump that up a little bit. So that can be a nice idea. Yeah, exactly. so that can be a nice idea yeah exactly 70 B is it's unfortunate that yeah I have I actually want a bigger machine so I can run that because I found for tool calling 8b is really at the edge of reliability So that's actually why you really can't run the react agent locally with an 8 billion model reliably you can run the land graph Agent very reliably because it doesn't actually need tool calling it only needs structured outputs you'll see in the notebook So that's actually a really nice thing But react agent won't run locally reliably at least Yep You say chunk documents, okay, yeah, so this is this question always comes up with rag. What is the typical chunk? Size yeah, if you ask this question always comes up with rag. What is the typical chunk? size Yeah, if you ask ten people you'll get ten answers it's notoriously ad hoc you know what to be honest I did a I Did kind of a talk on the future of rag with long context models I'm kind of a fan of Trying to keep chunk sizes as large as possible. Actually, I think this is a whole tangent, but I actually think one of the nicest tricks, let me see if I have a good visual for it. Basically, let me try to find something here. This is a whole separate talk. But basically, I think, yeah, this one, rag and long context. So this is a whole different thing. But yeah, this idea. So I think for rag, one of the ideas I like the most is decoupling. I'll explain this, I'll just say it and then I'll explain it. Decoupling what you actually index for retrieval from what you pass the LLM, because you have this weird tension, right? Smaller semantic-related chunks are good for retrieval relative to a question, right? But LLMs can process huge amounts of context at this point, you know, up to, say, a million tokens. So historically, what we would do is you would chunk really small, like try to get it as tight as possible, all these tricks, semantic chunking, a lot of things to really compress down to just compress and group semantic-related chunks of contexts, right? But the problem is then you're passing to the LLM very narrow chunks of information, which the problem is then you're passing it to the LLM very narrow chunks of information which has problems in recall so basically it's more likely you'll miss context necessary so it's good for retrieval but bad for answer generation so a nice trick is for retrieval use some chunking strategy whatever you want like make it you know small but you can actually use like a doc store to store the full document and what you can do is retrieve based on small chunks but then reference the full document and pass the full document to the LLM for actual generation time so you decouple the problem of like are you sure you're passing sufficient context to LLM itself now I also understand if you've massive documents It's gonna be waste one of those tokens to pass full documents through but There's some Pareto optimum here where I think In that being too strict with your indexing approach doesn't make sense anymore given that you can Process very large context in your LLM. So you want to avoid being overly restrictive with your chunk size So this is a nice way to get around it basically to summarize You can choose a chunking strategy whatever one you want But I like this idea of referencing full documents and then basically passing full documents to the LLM for the answer itself It gets around the problem of an overly aggressive chunking strategy that misses context needed to actually answer your question and I think with long context LMS getting cheaper and cheaper this is starting to make more sense I'm trying to think I had a whole kind of like slide on yeah this one is kind of like balancing system complexity and latency so it's kind of like on the left is like maybe the historical view like you'd the exact relevant chunk. You can get really complex chunking schemes, a lot of over-engineering, lower recall. Like if you're passing 100 token chunks to your LLM for the final answer, you might miss the exact part of the document that's necessary. Very sensitive to chunk size K, all these weird parameters, right? On the other extreme, just throw in everything, throw everything into context. Google, actually I think this week will probably announce some interesting stuff with context caching seems really cool maybe that actually could be a really good option for this but higher latency higher token usage can't audit retrieval security authentication like if you're passing 10 million tokens of context in for your answer generation. So something in the middle is what I'm advocating for. And I think this kind of document level decoupling, where basically you index and reference full documents and pass full documents to your LLM, is a nice trick. We've seen a lot of people use this pretty effectively so yeah Okay, that's interesting. So the question was, how can you evaluate the amount of context you are using? Okay. Yeah, like does it definitely use the whole state? I don't want to miss it. Okay, but I guess, so in a RAG context, context you have a question you have an answer. So you have a question you have an answer and you have some retrieved documents. So you can evaluate the question relative to your document. That's one way to get at this. Like how much of the document is relevant to your question? So that's maybe one approach. And actually the notebooks show a few prompts to kind of get at that and actually a good way to think about that is you can you can think about document precision and document recall and this is a little confusing maybe so I should explain it basically document recall is does a document contain the answer to my question anywhere so let's say you have a hundred page document on page 55 is my answer recalls one it's in there precision is the other side of that coin which is doesn't get any information not relevant to my question in that particular case huge amount of irrelevant information so recall be one precision be very low so that's one thing you can do you can actually look at your retrieve docs measure precision and recall that's like one thing I think I would probably like to do there. And that's probably the best way to get at this question of like, how much of my documents am I actually using? Now, with this approach, your recall will be high, your precision will be kind of low, and you would say, I don't care, it's fine. If I have a model that's super cheap to use large number of tokens, I okay maybe I'll frame it another way I care more about recall than precision I want to make sure I always I answer the question if I pass a little bit more context than necessary I'm okay with that versus if you're a precision gated system then you would say okay I'm gonna miss the answer sometimes I'm okay with X I never want to pass more context than necessary. I think a lot of people are moving towards higher recall because the LLMs are getting cheaper and they can process more context. So that's kind of how I might think about it. And I think this approach is a nice idea of indexing full documents or indexing chunks but then referencing full documents, passing full documents to your LM to actually generate answers. So yep. I was wondering if it seems like there's a couple of options for where exactly you inject the context into the conversation history. Like just for one example, you could stick something into the system and say, hey, this is your knowledge that you know. And then the user just asks a question and there's no prompting from the user. The other thing is you can completely modify what the quote unquote user is as in saying answer this question with this context and then it's in there. And then like a further idea on top of that, you ask a question and you ask a follow-up question. Maybe if you say, please elaborate, there's no need to actually retrieve anything, but you're just having to chat. But if you can chat, or you ask something else to be retrievable, do you have any observations as far as things that work well, or have to work worked well or continually appending things into the system or like adding another user like where do you stick that context how much of it history yep yeah okay so this is a great question the question was related to kind of like agentic rag and where to actually put documents. So let's walk through the cases. So case one is I have a fixed context for every question you're just going to ask. Let's say I have a rag bot against one particular document. And that document is always going to be referenced. So you make a very good point. In that case what I would do is let's go back to our like agent example you can put that in the system prompt itself like you said so and actually I'll mention something else about this it'd be like right here so here's your system prompt your agent you just plumb that whole document in you say every question you're going to reference this document you don't need retrie every question you're going to reference this document You don't need retriever system. You're done. That's a very nice case No, the retrieval complexity use context stuff your whole document The thing that Google is going to announce this week I believe this context caching seems really interesting for this because basically what they're saying is if I get it right I think Logan will speak to this, but, and I think they have a context when a million to 10 million tokens is huge, right? So basically you can take a large set of documents and stuff them into this model effectively. And they house them for you somehow. I think you've some minor data storage fee. But then for every inference call, they don't charge you for all those tokens that are cached it's pretty nice so basically here's the use case to your point exactly I have some set of documentation it's 10 million tokens that's a lot of pages hundreds of pages I have it cached with the model and every time that user asks a question I don't get charged you know 10 millions of tokens to process the answer they just cached for me really nice idea so that's your point of like the first thing that's like not quite in the system prompt it's in like the cache but that's the same idea so you have cached or system prompt fixed context that's like case one so case two is you have you want to dynamically retrieve so you don't you can't stuff your context maybe you have a few different vector stores like we were talking about here with routing in that case yeah you have context. Maybe you have a few different vectors like we were talking about here with routing. In that case, yeah, you have to use an index of some sort, maybe a router to choose which index to retrieve from. So that's kind of case two. I'm trying to remember the, oh, OK. So like in that particular case for follow-up questions, how do I kind of control whether I retrieve or not so that's the nice thing about? Either one of these agents it has some state so the state lives across lights on an agent So basically the agent and this actually gets exactly what this the other question was on Let's say I built my agent with I'll show you right here so let's say I have a router node at the start of my agent okay and that router has access to state what I can do is then given a question this could be let's say it's a multi-turn thing it's a second question in my in my conversation I have an appended state from the rest of my discussion here. The agent knows it returned an answer. So basically, when a new question comes in, you could pass the entire state back to that router. And the router could know, OK, here's the docs I've already retrieved. And it can basically then decide to answer directly because I already have the answer to the question. So that's a long way of saying you can use state, either message history or explicitly defined in your land graph agent to preserve docs you've retrieved already and then to just use them to answer a question without re-retrieving. So that's kind of what these graph, these rag agents can be really good at. That was kind of like storing that in short term memory and reasoning about, hey, do I need to re-retrieve or not? So that's exactly the intuition behind why these RAG agents can be pretty nice. Yeah? Yeah. Yeah. Yeah. Yeah Yeah Yeah Yeah, exactly. Okay. This is a really good. This is a really good discussion and whole debate. So the highest level framing of this is How do you want your model to learn? So one option is you can modify the weights of the model itself with something like fine tuning. Another is you can use what we call in-context learning through your prompt. So RAG is like a form of in-context learning. I'm basically giving it some documents. It's reasonable as documents producing answers, not touching the weights my model Fine-tuning be taking the knowledge I want to run rag on fine-tune your model and updating the weight so it has that knowledge there's a lot of debates on this and The and actually I think Hamel has a whole course on fine-tuning. Oh, yeah, do you have a Okay I just want to clarify. I don't mean fine tuning information. I just mean fine tuning so that you have the idea. If you can fine tune it so that it knows, oh, I should have always retrieved this information from my system. And use that when answering questions. Before you can find the model, you really use whatever extra context the user is providing. So just as a simple example of that, if you have a system. Yeah. that the user provides, then it's focusing more of its intention on the end of the chat. So it's not adding the information in, it's just how do I use the facts. Yep, I got it. OK, I'll repeat that. So the clarification was thinking about using fine tuning more to govern the behavior of the agent rather than to encode factsode facts which is a good it's a very good clarification because I was gonna say using fine-tuning to encode facts I think a lot of kind of literature as points that being a bad idea for a lot of reasons it's costly you have to continually fine-tune as facts change so let's dispatch that I think that's a kind of a not a great idea but you make a very interesting point about fine tuning to govern behavior. Now there's a paper called Raft that came out kind of recently. And actually, as far as my understanding is, I haven't played with it in life, it's fine tuning to kind of do what our notebooks show today of this kind of like, look at the documents that are retrieved, reason if they're relevant, and then don't use them if they're not automatically filtered. It's doing exactly what we're doing in this land graph thing, but it's kind of achieving that same outcome through process fine tuning. So that's a very good insight, you're right. It seems promising to have these kind of fine tuned rag agents so to speak, or it wouldn't be an agent, it'd be an LLM fine tuned for rag that incorporates this kind of logical reasoning or what you're saying like maybe some kind of reasoning about if you have a multi-term conversation like avoid recency bias whatever it is that seems like a very good trend and interesting the challenge is a little bit if you're fine-tuning yourself fine-tuning is hard and somewhat advanced and all that. Alternatively, if it's like a very niche use case, like the RAF system would basically, if fine-tunes to kind of do this, as models get changed or update all the time, you kind of need to keep your fine-tune model up to date if you see what I'm saying so I think I'm still a little bit queasy about using fine-tuning even in that context because of the challenge of keeping it kind of up to date with a state of the art but it's interesting I think the raft paper is a good reference in this direction and does exactly what we do in this workshop but I believe it fine-tunes this into the model or attempts to which is a very intuitive thing to think about basically let the model reflect automatically on retrieved documents and automatically filter them for you. Seems like it should be able to do that. Seems like a good idea. But my hesitation would still be, well, if I want to switch my models, I need to re-fine tune. I want to use Lama 3. I have to fine tune Lama 3 on this task. I can't use proprietary. Maybe I can fine tune. GBD-40 might have fine tuning now. I'm not even sure. I could fine tune Lama 3 on this task. I can't use proprietary, maybe I can fine tune, you know, GBD4 might have fine tuning now, I'm not even sure. I can fine tune GBD, you know, so again, if I fine tune myself, that's hard. It still feels a little bit like I'd rather just sit up a simple, like, orchestrated agent that does it rather than rely on fine tuning, is my sense. Yeah. I guess I'm not a really bad person doing what Google meant to brag to do that fine tuning. Yeah, the provider. Right. Right. Right. Yep. Right. Right. Right. Yeah, okay, so that's a very good point. I think this is also a very big debate. So OpenAI just did an acquisition this week under a retrieval company. I forget the name, Rockset, I believe. So I think they are moving more in the direction of retrieval. I could absolutely see them offering an API that potentially does retrieval for you and incorporates some of these ideas for you. So how much does it does get pushed behind APIs and they take care of whatever is necessary behind the scenes for you? That much does this does get pushed behind apis and they take care of whatever is necessary behind the scenes for you that could absolutely be the case i would not be surprised if at all they move in that direction and i think there's always you know it's an interesting trade off like how much are you willing to kind of um you know abstract behind an api versus not i think there's always a lot of companies developers that want to kind of control everything themselves and build themself have full transparency, and others that don't. And so, you know, it's an interesting question. But of course, for certain functionalities, multi-modality, very few people are going to stand that up themselves. You kind of let that live behind an API. So what do you allow to live behind an API or not? My only concern is, I think for some of these kind of things, they could be very domain specific Like what you consider relevant or not could be very relevant to you and your application You kind of want to be able to control that's the only thing I can imagine It could be kind of hard to abstract that all behind an API which I think is maybe why open I Hasn't done too much in retrieval yet. It's just a hard beast. I know they've been trying for a while Um, I don't know. It's a great hard beast. I know they've been trying for a while. I don't know. It's a great debate though. Yeah, we can discuss more. Yeah, it's a good topic for sure. Yep. You mentioned about long context windows. Oh yeah. So there's a problem of like, lost in the middle. For example, the precision, like, it's, for example, in the context, they'd say 95% is relevant information, 5% is somewhere in the middle for example the precision like it's uh for example in the context they say 95 percent is relevant information five percent is somewhere in the middle and somehow at that point like you have any questions over that yeah yes the question is on lost in the middle of long context i actually did a whole study on this with greg cameron yeah it's a really interesting topic so the insight was basically that long context LMS tend to have lower recall or like you know factual recall for things in the middle of the context okay so that was one observation at least that's what their paper reported so I actually looked at this with Greg and we did something a little bit even harder. We actually tested for multiple fact retrieval So we tested can you retrieve one three or ten different facts from the context and this was using GPT-4 GPT-4 turbo single turn and on the x-axis you can see the fraction of the needles that it basically can get and Then on the y is the number of needles. So basically one needle Three needles ten needles green versus red is basically just retrieving versus retrieving and reasoning So it's like reason is a little bit harder than than than just retrieving These needles are actually pizza ingredients. So basically the background was, you know, this was 120,000 tokens of Paul Graham essays and three secret pizza ingredients or however many, one, three, or 10, but I injected in that context. And I basically asked the LLM, what's the ingredients need to build the secret pizza? So I'd have to find them in there. And basically, as you ramp up the number of needles, go from one to 10 it gets worse so with 10 it's actually retrieval itself is only like 60% so then I looked at okay well where is it failing and that's what I look at here in this heat map so basically this is telling you like how long the context is so a thousand tokens all up to 12 120 thousand and then here's like the needle placement so one to ten so this red means you couldn't retrieve it and what I found is actually it doesn't get them towards the start of the document so the retrieval gets worse if the needles at the front so it's like this I read a book I asked you a question about the first chapter you forgot because I read that a month ago or something same idea and actually I put this on Twitter and then someone said oh yeah it's probably recency bias and that's a good point that basically the most informative tokens in next token prediction are often the more recent ones so you know basically these elements learn a bias to tend to recent tokens and that's not good for a rag so that is all to say I'm a little wary about long context retrieval I wouldn't quite trust basically high quality rag across a million tokens of context you can see look if it's a thousand tokens no problem if it's 120,000 tokens of context you know it depends a lot on where those facts are if they're towards the start you actually can have much lower recall and so that's a real risk which is kind of why it goes back to this whole thing of Like I don't really buy just stuffing everything in the context that far right side I think there's too many issues with bad recall recency bias like you said and So I think until we have very and by the way I also don't really trust you know when they show those needle in the haystack charts It's like perfect. I don't trust any of that. I did my own study I found there's like a lot of errors and I think it depends a lot on a Couple different things one how many needles so in this case you see with one it's okay with ten It's really bad, right? So how many needles and then I saw an interesting study saying that like the difference in the needles relative to your context makes it easier so like in these studies like Pete's ingredients in Paul Graham is really different but if it's just like related slightly it's actually harder still so that is to say I don't really trust a needle in Haystack studies I don't particularly trust stuffing you know passing a million tokens of context and counting that and counting on that to just work effectively I'd be very wary about that that's kind of my thing but you know in these studies look a thousand tokens of conscious already you know that's that is if you're stuffing a thousand tokens yeah I mean that's actually still pretty small so yeah I just be wary about very retrieval from very large contexts yeah yeah okay that's a great question with agents the number of tools this is a really big issue I hear mentioned a lot so So if you recall, if you go back to the agent stuff, so you're basically binding some set of tools to your LLM, right? And that's what we show here, right? I've seen a lot of issues with a large number of tools. So I don't know exactly what the exact cutoff is but this is one of the big problems with open-ended tool calling agents is if I am basically selecting from 20 different tools I actually I maybe the Berkeley leaderboard has data on this as someone knows feel free to mention it but a reliability of tool calling even with a small number tools like on the order of five is already challenging if you're talking about hundreds of tools or dozens of tools I don't think there's really yeah I think it's quite challenging and which is why I've seen more success in a not using these open-ended style tool calling agents, laying it out more explicitly as a line graph where the tool calls live inside nodes. And you're not relying on your agent to pick from like 20 different tools. So you can lay out more of like a control flow where you route to different tool nodes based upon the logic. So that's kind of one thing I've seen. Another thing I've seen is maybe multi-agent type things we have different agents with subtasks with each having like maybe a small number of tools but basically what I've seen is it seems to be that okay maybe it's two things selection from large number tools is definitely challenging one of the most interesting things I saw is something like you can use something like rag where basically you can take a description of your tools create a natural language description embed it and then use basic like semantic similarity search your query versus the embedded summaries to select using semantics that's actually not a bad idea I would actually use that more than I would trust in LLM just like do the tool selection from a list of a hundred that's not gonna work so actually I think that like rag for tool selection is a cool idea I was gonna do a little like test that out and do a little tutorial so actually maybe I'll just make a note of that that's it that's a great a great question to do rag for many tools right yeah well I think I think using semantic similarity for tool selections is a good idea, definitely. What do you mean by data querying, though? Right. Let me make sure I understand. Right. Let me make sure I understand. I think the way I would think about it is, so you know how in the notebook, like in the code, for all of our tools, right, you have this little tool description, right, like retrieve documents, grade them them run web search I would actually write verbose descriptions for all my tools and index those descriptions right or embed them and then I would do and I would probably create like very very verbose high-quality summaries of what the tool actually does and then do semantic similarity search against those summaries I think that could actually I haven't done that yet but I think that could work really well because it's a very tall task asking LLM to differentiate between like 20 different tools, whereas you can do something somatic similarity that would actually probably be very effective. Right Yeah Yeah. Yeah, yeah. So the question was about multi-agent context when you want to orchestrate a large number of tools, having sub-agents with specializations that manage some small set of tools each, and how do you kind of Move between them. So if you look at our land graph repo, we do have a sub directory. It's under It's under Land graph examples multi-agent. We have a few different notebooks that have multi-agent style kind of layouts Which I would encourage you to look at. I haven't personally done too much work on it It seems promising, but I haven't played with it Multi-agent in general and these all reference papers you can also look at the papers a multi-agent in general for production setting feels quite aggressive Although that said as far as I understand I remember looking at the code a while ago Devin and some of the software agents do use like multi-agent style setups So Maybe have a look at the Devin repo or there's open Devin Have a look at these notebooks those could all be useful if you want to learn more about multi-agent Yep Yeah, sure. Right. How do you think about when it makes sense to wrap the rack inside of an agent versus just making a chain on its own system? That's a classic question, yeah. So the question was, when do I use a chain versus an agent? So that's very good. So we kind of touched on it a little bit kind of here so I think that the intuition behind why where and why an agent can make sense is simply that sometimes you want your application control flow to be variable and if you want some flexibility within your application an agent is a nice idea and so all this self-corrective type stuff we're talking about the corrective rag thing. Those are all kind of agentic flows Where the control flow depends upon the grading of the documents and so You know Historically people have largely been building chains and chains are very reliable and they're easy to ship and all that I think with things like Lang graph and of course, yeah, we're gonna line chains I'll speak my book about Lang graph but I've really used it quite a bit and I found to be reliable and I we are seeing a lot people starting deploy with it because you can actually ship and deploy a reliable agent with Lang graph and so I think a blocker to the ability to kind of ship agents has been reliability I think we're I would actually encourage you play with the notebooks and look at Lang graph because it does allow you to have that kind of reliability whether it be necessary to ship something in production. We do have customers that have land graph in production. Whereas a React agent in production is not recommended. Yeah. Yep. Sure. 100%. Yep, sure 100% So okay, so I think yeah, so the different why would you ever want kind of like an agent be it Langrath React or otherwise versus not and I get again I think it goes back to do you want your application of any kind of adaptability so okay here's one we can talk about routing I have three different vector stores I want to build route between them that is kind of a quote-unquote agentic use case because the control flow depends on the question that's one you might want routing you may want self-correction so that's kind of what we talked about here a whole bunch with a corrective rag stuff so you want routing onecorrection. I mean, those are two obvious ones in the context of rag itself. I mean, that's one thing I've often found problem with the rag systems is the routing thing is a real issue. You want it to be flexible enough to deal with questions that are out of domain for your vector store. And you need some kind of dynamism in your application to handle that so looking at the questions like okay just answer this directly don't don't use a vector store so those are like the most popular ones self-correction or routing yeah yeah building evaluation data sets, like question answer pairs are so domain specific. Yeah. I'm wondering if there are like general best practices, mental models, like things to think about when sitting down to build an evaluation data set. Yeah, yeah, okay. So the question was about kind of building eval data sets. Okay, that's a great question. It's often a very, very challenging part of app development. So if you have a RAG application that's domain specific, then oftentimes you have some set of canonical question answer pairs you care about. You know it's hard to find like very, very general rules for that, I think it depends on your application. I think there's kind of this hurdle, any evaluation is better than no evaluation, so small scale eval sets that you can use and just work already better than not doing any evaluation so I mean for this particular case I just looked at the document now maybe maybe I'll back up and answer so so one thing I've seen and I've done this a little bit is you can use LM assisted QA generation so here's one thing you can do. And I've done this a little bit with Langchain docs. I can build a prompt that says, given this document, produce three high quality question and answer pairs from it. And I can just basically load my documents and pass them into that LLM. I use a high capacity model like Sonnet or 4.0 and have it generate QA pairs for me and then audit them. That's a nice trick. I've used that. It actually kind of works. Now, you have to be careful with it. capacity model like Sonnet or 4.0 and have it generate QA pairs for me and then audit them. That's a nice trick. I've used that. It actually kind of works. Now, you have to be careful with it. You only would pass it usually one document at a time to keep it really restricted. And you audit them. But that's a nice way to bootstrap your Eval sets. That's idea one. And that gets into the whole idea of synthetic data sets. But if you're building domain-specific synthetic QA pair data sets, that's that's a nice trick so basically using Ellen to help bootstrap I think that's one idea that can help a lot yeah and otherwise I think that basically trying to stand up a small evaluation set for example for rag is even this case five questions but you can already see I can get some nice insights it's very simple to set these up. I have my little experiments all over here. And again, it's only five questions, but it gives me some immediate insights about the reliability of React versus versus Landgraf agent. So keep it small, potentially use synthetic data, start with something and then like kind of build out over time. Now, a whole other thing here is we didn't talk about this too much but if you have an app in production then the way this whole thing kind of comes together is you can actually have different types of evaluators that run on your app online we call those online evaluators okay so this is with our internal app and this gets back to the question I think he mentioned of you can have a bunch of evaluators for RAG that don't require a reference so like I don't show it here but but basically I can look at like document retrieval quality I can look at my answer relevance or hallucinations I can run that online I can flag cases where things did not work well and I can actually roll those back into my eval set so if I do that, then I actually have this self-perpetuating loop, like Karpathy talked about, like the data flag wheel, where actually I'm running my app in production, I'm collecting cases of bad behavior that I'm tagging with online evaluation, and I'm rolling those back into my offline eval set. So what you do there is look at the case that the app is doing poorly in production, audit them, correct them, so build a canonical question as a parent from that and put that back in your test set. And that's a good way to bootstrap and build it up. So I'd start cold start, synthetic data, small scale examples, online evaluation, or some system to check online where it's failing, loop those back in and build it up that way. That's like your data flywheel. Yeah. Actually, I even had a slide on this in one of my older talks I used to work in self-driving for many years and I actually was a big fan of Carpathy stuff at Tesla and actually I've had his his thing here of this is like the the data engine thing of like you know you ship your model you do some kind of online evaluation where's it failing capture those failures curate them put them back in your test set run that as a loop that's like he called operation vacation because you can go on vacation and the model keeps getting better and this is more in the context of like training models because basically all those those failed examples once they're labeled they become part of your your training set but the same thing applies here with LLM apps cool yeah Same thing applies here with LLM apps. Cool. Yeah. So I just wanted to ask, you just mentioned fix to SQL. Oh, yeah. So I just had a question where, for example, all that happens when we send stable schema, and we send this SQL query, and then we have the database. And if the result is too large or something, so we cannot send it for another LLM user-friendly query. Like user-friendly after or something, right? How do we handle that? What do you suggest how the query is so big that we cannot put it in the context. Yeah, the question was on text to SQL. We actually have a pretty nice text to SQL agent example here. So it's in Langrath examples. I think it's in SQL. Where is it? I'll find it here. What is it? Tutorials, as in tutorials. So SQL agent here. I think a lot's in the prompting. So basically, in this particular case, I believe Ankush from our team set this up. You prompt your, you can do a couple things. So you can prompt your SQL agent to, where is it? It's somewhere where he tells it to, yeah, it's based on all these instructions. Be very careful about, let me just find the, in short, you basically can instruct the SQL agent when it's writing its query to ensure not to extract an excessive amount of context. And I can't remember exactly where it does that. Okay, yeah, it's right here. Limit, always use a limit statement on your query to restrict it to like whatever it is, five results, and that's the hard-coded thing here. And then also in your SQL agent, you can incorporate a query check node to actually look at the query before you execute it to sanity check for things like this. So basically, I would have a look at the LangGraph example tutorial SQL agent notebook to have... I've actually... I ran this and evaluated and I found it did work pretty well. So that's one thing that I would look at. One way question on SQL, a lot of times like the SQL, they don't follow the group, I have questions, but how do you handle filtering questions, like where flaws, like when you have a high cardinal column, like I have a question, right, like if you say, I'm going to SQL query and it needs to be where, where flaws in SQL, where exactly translates to like any column value, like any value from the column, but like especially if in some depth, but you're high cardinality, like maybe 500 values, like how do you handle that? That's what I think. Hmm, yeah. So the question was related to like, how do you handle high cardinality columns? And I guess that is related to restricting the output size. I mean, I'm actually not really a sequel expert so I'm probably not the right person to ask about very gory details of text to sequel but in general I would kind of consider can you prompt the LLM effectively like okay two different things one the general ideas here were basically upfront prompting of your LLM to kind of follow some general kind of query formulation criteria. And then two, an actual query check explicitly to review and confirm it doesn't have some of the issues like extracting accessible context. But for anything more detailed than that, I'm probably not the person for those insights. But there might be a specific text to SQL deep dive at some other point in this conference and you should definitely seek that out. But I haven't done that much of text to SQL. Yeah. question answer rag patterns, but one thing I'm interested in is more long form document generation, so writing for a floor or something like that. Yeah. Can you see the interesting patterns of what you think about tools marketing solutions with a line graph or other types of things to achieve that type of outcome? Yeah, okay, so the question is related to kind of document generation. That is a really good theme. So we actually have kind of, there's an interesting paper, I did this a while ago, I need to find it. Where is it? So we have a notebook, if you look in Landgraf examples, Storm. So this was actually for wiki article generation. Here's kind of the diagram for it. We actually have a video on this too. I'm actually trying to refresh myself. I did this like three or four months ago. But it was basically a multi-agent style setup in line graph. Where if I recall correctly, I'm just looking at the flow here myself. But basically what it did was you give it a topic and it'll kind of do initially this kind of like generation of related topics and actually uses the multi-agent thing of like editors and per and um experts the experts go and do like web research don't worry too much of the details so the point is it was an interesting paper in flow for wiki article generation using land graph in a multi-step process and actually the wikis are like pretty good i think at the bottom of the notebook i have an example wiki article generation using line graph in a multi-step process. And actually, the wikis are pretty good. I think at the bottom of the notebook, I have an example wiki. So you can see all the code here. That's the graph. Yeah, and then here's the final wiki that you get out from this type thing. So it's pretty good. Have a look at that notebook. I think, yeah, Jason Liu also had a post on this recently, this idea of like report generations, a theme that we're going to see more and more of. This was one idea that was pretty sophisticated, though. You can probably simplify it a lot. I've done a lot of work just on like a simple kind of distillation prompt, like perform rag and then have some generation prompt to give a bunch of instructions for how I want the output to be formatted that could also is really effective yeah Yeah, so the idea the question was related to user feedback. That's a really good one. So we do have, I think I mentioned previously, if you look at LandGraph, where is it? We, I believe we have some user feedback examples. Thursday, we're definitely going to be announcing something that has a lot of support for user feedback and I would encourage you to keep an eye out for that. So Tarasyn is going to launch that here on Thursday. I will look for, I know we have some user feedback examples in LangGraph, but I will need to find them. Let me see. We probably, let's check Lang, We probably tweeted about it at some point. I haven't actually done anything with user feedback, though. Let's see. Yeah, maybe I might have to get back to you on that. I thought we had some nice examples with LangGraph. Yeah, I'll have to get back to you on that one. Feedback. Customer support might be in here. Let's try something else. the LandGraph docs. We have a bunch of tutorials in the docs here. Just Google LandGraph documentation. I'm just poking around here for user feedback. Ah, here we go. Look at this. So LandGraph how-tos, human in the loop. I would have. Look at this. So, LandGraph, how to's, human in the loop. I would have a look at that. I have not played with that myself,, yeah. Okay, right. So for like mid to longer term problem solving tasks, how do you incorporate user feedback to ask for more information? So I would have a look at this documentation because I would imagine it will cover examples along those lines. I haven't personally done that. I also believe that I would have a look at the customer support bot because that's an example of a kind of multi-turn interaction between a user and a support agent so I would look at the customer support bot which will on my team did as well as the documentation on human loop so those are two things I would check out there nice yep to model architecture for training models for agentic reasoning? For agentic reasoning? Yeah. Yeah, so that's kind of an interesting question. So the question is related to training models specifically for agentic reasoning. If anyone, so I mean there's a lot of, there's a lot of work on prompting approaches for different types of reasoning for sure. I'm a little bit less familiar with like efforts to fine tune a model specifically for particular like agentic architecture or use case, but you could imagine it. Most of the work that I've encountered, though, is just using high-capacity generalist models with tool calling and specific prompting techniques. So React is kind of a particular orchestration flow and prompting technique rather than, you know, you can interchange the LLM accordingly. I think the main thing typically or historically for agents has been the ability to perform high quality and accurate tool calling. Because agents, that's one of the central components of agents. And so that's kind of been the gating thing. And I think model providers have been focused a lot on just high quality tool calling, which helps kind of like all agent architectures. I haven't seen as much on fine tuning for one particular architecture. I think it's like high capacity journalist models with tool calling and then prompting. So it's more like in context learning. That's kind of the trend I've seen at least. Yeah. Can you talk a little bit more about line graph checkpoints? Yeah. Yeah. Yeah. So the checkpointing stuff,? Can you just hold it? Yeah. So the checkpointing stuff, actually, this Thursday, that's going to be a lot more relevant because we're launching some stuff to support deployments for line graph, in which case you can do a bunch of different things, but you can have a single state that persists across many different sessions. You can also have checkpoints. You can return to state and revisit an agent from a particular point don't worry about that too much for now I think there'll be a lot more documentation and and kind of context for that on Thursday when this stuff through deployment comes in but it's good to be somewhat aware of I'll poke around the documentation for a little bit more on checkpointing, but it really becomes relevant on the stuff we're announcing on Thursday. So I would have a look then. Let's see if we update our docs. Yeah. Yeah, so there is some documentation on it now, but it'll become a lot more interesting and relevant come Thursday and we have a lot more support for deployment Yep Yeah Yeah or makes small mistakes in YouTube, for instance, instead of YouTube video around it, it says YouTube video from the web. Yeah. To stop the process,. OK, yeah, so that's a really good question. And so the way it works with the existing, so it depends on the architecture. So using the React architecture, let's see if I can find an example of it so here's with react agent let's look at one of the traces see if I have an example so basically the tool call itself will return like an error and the LLM then is expected to self-correct from that error it has to kind of self-correct from that error. It has to kind of self-correct. So that's kind of one approach that at least we do with the React agent. So actually, you can see it in the notebook. If you go to, and if I can find some traces that have that example, I will pull them up but I think it's in utilities somewhere yeah so basically this tool node with fallbacks basically what happens is in this tool error so this if there's an error in the tool call itself it'll return that error. And usually the agent will then, or the LLM assistant, will look at that and self-correct its tool call. So that's typically how it's done. And this actually is reasonably effective. But again, the nice thing about the other implementation, the custom agent I called in the notebook, is you don't rely on tool calling in this way and so you can get around this type of issue but basically catching the errors in the tool call itself with this code is what's currently done let's see if I can actually find an example yeah I mean if I can look for one where it gets the answer wrong. Yeah, let's see this one. Let's see if we can find a tool call failure. So here's the trace. Let's see. Okay, this didn't have a tool call error. Yeah, so basically what you'll see in the message history is that like the tool itself will return this error message and then the LLM will say, okay I need to retry and then it'll retry and hopefully get it right. Yeah, yep. Oh yeah. Yep. Yes. which is great for that exact problem. Yep. It does like the instant revalidation of the output, especially for those simple little errors. Yes. I'm wondering, well, first of all, it'll probably work, but does Nightchain, I haven't used it much, so could you use it with an instructor and then I can't take a look at it? And it's similar to the light. Okay, so this is a very good point. So yeah, I'm a big fan of instructor. I haven't used it as much, but what you're saying is one particular type of tool call. So basically that pertains, I believe, more to structured outputs, which is indeed a kind of tool call. And when you're using something like a pedantic schema, you're right. It's very easy to check and like correct errors. So I've found catching errors like with schema validation, like using a structure is really good. And we have some other things you can use within the line chain to do the same thing. So that's one type of error that's actually particularly easy to kind of detect and correct. What we show in this notebook here, and the code I showed is more for any general tool. So this code here will operate on any tool you call regardless. So it doesn't have to do with structured outputs or anything. And so just a more general check for tool call errors. Now in terms of instructor with Langchain, now maybe I'll just back up a little bit. So LangGraph does not require Langchain at all. So that's kind of point one and neither does lang smith. So actually everything we're doing here does not need to use lang chain. So actually that could be a pretty interesting thing to try for like kind of the choose your own adventure thing. But basically in the custom agent part, I use with structured outputs to do the grading so if you go to yeah if you look at the the retrieval grader here so this is using LLM with structured output and here's my grade schema try that one instructor that should work work great. You don't need LangChain at all for this. And that'll fit right into the LangGraph. So actually I think it'd be great to use instructor with LangGraph for this particular use case. And I do agree that Lang, that instructor is really nice for those kind of like schema validation error correction. Plug and play. That, that I'm gonna make a note of that's a really good kind of choose-your-own-adventure case or should I put that try instructor with line graph for grading yeah I like that a lot yep yep Yep. Yep. where is the not, where is the rag pipeline to be in the world? Like what are we still not doing at? And your guys are thinking, or so, down in the moment. Yeah. So a question was related to just rag in general. And where is rag? Like rag agents. Rag agents. Yeah, sure. Well, to be honest, a lot of the problems with RAG, I think about our own internal application chat Langchain, a lot of problems with RAG actually are retrieval problems. Retrieval is just hard. I'll give a good example. Like Langchain, we have, I'm trying to remember, five million tokens of context across all our docs, something like that. We have also we have a very long tail of integration docs. You want very high quality coverage and questions across all of that. There's a lot in how you index all that stuff to ensure that you boost retrievals from more canonical how-to guys that are much better documented, but still having coverage over long tail content for like, you know, long tail questions. For example, if you're using raw semantic similarity search, you could have relevance to, you know, say your how to guide, which is really well developed and three random long tail documents that are not well developed and they'll all get returned. And so how do you overlay different systems? It could be re-ranking to basically promote content that you believe to be more accurate or better based on some criteria. So that is all to say, I think with RAG, the challenge is actually just domain specific retrieval for your application. That's just a hard problem. And there's been a lot of work on this. It's been around for a long time. I think that's really the limiter. And actually there's kind of no silver bullet. Like in our case, we're having to look at the structure of our documents very carefully, design our retrieval strategy based on that doc structure. Like in particular, we're thinking about applying certain post-retrieval ranking to docs of certain types based upon their importance or thinking about retrieving a large, like a large initial number of docs and then boiling them down with, with kind of re-ranking based upon importance. So I still think retrieval is very hard. It's very domain specific. It depends on the structure of your documentation. There's kind of no free lunch. I think the things that are good about RAG is context window is getting much larger for LLMs. And so back to that point I was making before, I think we're seeing, and we're considering this ourselves, less worry about the exact right chunk size. You can think more about chunking in different ways and then passing full documents to your final model. So I think that part of it's really good. But still, even like, even in this particular case, you probably still need some re-ranking to promote the most important documents. So I think retrieval is still quite hard. In particular, like looking at the line chain docs in particular, the overlay of document importance on top of raw semantic similarity search. Take a case of like, I have a question, semantically, it's similar to 10 different documents. Those documents though vary widely in their quality and they're relevant, like more like higher level relevance. Like maybe that passage is related, but like it might be a general question, but how do you build an agent? And then some random integration doc talks about building agent for integration X. And I wanna make sure that the more canonical, well-developed agent, you know, overview doc gets promoted and passed back in the end. Stuff like that. Sorry, it's a long answer, but basically RAG is, it's hard. I mean, I think retrieval is really the hard part. The generation part is getting better and better as long contexts grow. Yep. Lance, so this pre-ranking approach for your documents, what's the metadata? Do you have a relevancy to a particular topic as well as numerical ranking? Yeah, okay, that's a great question. So the question was, when we talk about this re-ranking, how do you assign this relevance to your documents of what is that? So I'll just give you what we've been thinking about. I actually think it is for us going to be a hand-tuned kind of relevant score based upon our doc structure. So if you look at the Langchain docs, like go to Langchain documents. Yeah, so language documentation. We have these sections up here, tutorials, how-to guides, conceptual guides, which are like really well developed, more recent, well curated. These you can imagine have some kind of relevance or importance ranking of one or highest. So these are documents that contain very high quality, well curated answers that we want to promote and serve to users in the generation phase. However, let's say someone asks a question about one particular integration, right? If you go to integrations, we have all these pages, right? Components, go to retrievers, look at the ZEP cloud retriever. This is some stuff related to ZEP cloud specifically. If someone asks about ZEP cloud, you do want to be able to retrieve that doc, right? And so some ability to differentiate between questions that need, you know, general answers, in which case you would promote your more canonical how-to guides conceptual docs versus questions that require retrieval from very specific integration docs, in which case you would still promote this information. That's kind of the crux of it. And I think we'll probably use kind of manual or heuristic scoring to up weight or up rank our core like how to guides and conceptual guides over longer tail integration docs. And we might have a router that will indicate whether the question is general or specific. So those are the two things I'd probably do. So routing on the question side and then some kind of heuristic relevance or importance grading or quality grading on the document side. And that can be packed in the metadata that you pack along with your index chunks. Yeah. I don't know. Maybe not. Yeah. Oh, yeah. So let's say a typical RAC application where there's a question and answer but we kind of maintain the multi-turn, like for example, we maintain the compensation history of the user to kind of create a. The problem is, like, let's say, for example, a question is asked and then the retrieve chunks are, like, let's say, for example, a question is asked, and then the retrieve chunks are, let's say, five. Right. And then a subsequent question is asked, so let's say, no, but it's related to the first question, but still somehow in the first node, you transform the query, and then the retrieve chunks are still the same. So I would just get that answer, which is more of the first answer. So how do you, my question is, how do you make sure that, let's say, he wanted to deep dive into the document, like into the more context, how do you make that happen? Yeah, so the question I guess was like in a multi-turn RAG context, let's say the case where a user asks an initial question and you retrieve some documents, you produce an answer and they ask a follow-up that says give me more information about this. Now, do you want to re-retrieve or do you want to re-reference those same docs? No. So what happened in my case is I try to rewrite that question. OK, you do a rewriting. So you rewrite the question. OK. And then go to the documents. So most often, the documents would be the same as the first. The same as before. Yeah. OK. So the answers would be mostly the same. OK. OK, interesting. So the problem there is more of a retrieval problem. Okay. The answers would be like, what's the same? Okay. Okay, interesting. So the problem there is more of a retrieval problem. You're doing a rewrite. You're still retrieving the same set of documents, though. Now, what do you want to have happen? Do you actually want to retrieve different documents, or do you want to? It's kind of like deep dive. Yeah, but that's a question. What do you mean by deep dive? Like you're retrieving, let's say it's a chapter of a book, you're retrieving only the first page, you want to retrieve the whole chapter. Yeah. Okay. Okay, then I think actually a question rewrite would probably not sufficient. What I would think about more is for that second pass You could actually do something like metadata filtering on your chunks if you have your data or your documents partitioned by like Chapters or some sections. I would just do a bulk retrieval the whole section or something like that So it's more like a trick on the on the retrieval side rather than a rewrite of the query Because I hear I see what you're saying you rewrite the query You might get the same Docs back if you want to guarantee that you actually get like a deeper dive in your Docs Then maybe it's something in your retriever itself. You can increase K. So retrieve more Docs You could use metadata filtering to like ensure you get all the Docs in a given chapter So I think it's more retrieval thing. But that's kind of an interesting point, though, yeah. Cool. Well, I know it's been two and a half hours almost. So there we go. It's good. Yeah, yeah. Yeah, I'm hanging out for another till noon. So cool. Yeah, yeah. Okay, okay. This is good. So, see, the question was he's doing the local agent tutorial, and the question's on the eval set. So, actually, that's a fun one. Modify them any way you want. The key point was I wanted some questions that are definitely outside the vector store. So I asked something about, like, two things about sports, because I know it's not in my vector store about agents. So I think I indexed three blog posts about, about like agents and prompting and adversarial examples I just wanted some orthogonal questions that'll force web search so that's the only thing there but you actually play with those and you can modify them and all that yeah but that's cool. It's working. Are you using Lama 3? Yeah. Cool. Yeah, I was doing the 70B. Oh, you have a laptop big enough for 70B. Not really. Okay, okay. You're at the edge of... There's an M1 that's taking like a minute. Okay, okay. And then I switched to the 8B. Yeah, 8Bs. Yeah, exactly. I mean, it's actually kind of nice. You can even run the 70B, to be honest. I'm not sure I can even run it. But, yeah, that's actually kind of nice. You can even run the 70B, to be honest. I'm not sure I can even run it. But yeah, that's cool. Nice. Let's see. Well, I can just hang out and oh, yeah. Sure. What would be the best way to incorporate full-on questions? Is it pro to your... Yeah. Well, if you have a chat application, so the question was related to how do you incorporate multi-turn. So if you look at the React agent agent it uses a chat history as its state in that case follow-up questions we captured just in the message history as part of chat the I think the current layout of the custom line graph agent though is a little bit more single single turn so it'd be kind of question and answer. That's, oh yeah. I meant that the agent actually has to come back with a question to talk about details. OK, got it. So the question was, how do you modify the agent so that it will actually return, like, if it needs more clarification from the user? Yeah, these particular agent examples don't do that. But again, I think that's maybe a good takeaway for me. I should add that to these tutorials, incorporate a simple example of multi-turn. I will do that and get my contact in. I will send that to you. So now I have a simple ev eval or pull off questions. Yes. I think that should be the smarter thing. Yes, exactly. So you want, so I mentioned previously, if you look at Langrath, let me find it. It's one of our notebooks. The customer support agent. So Landgraf examples customer support is an example of an agent that has like multi-turn dialogue, but it's complicated. So I'd like to maybe augment these tutorials with a simpler example. Yeah, I'll follow up on that. Give me contact info, and I'll send you something. Cool. Well, I'll just sit up here. Anyone can just come and grab me. Thanks for everything. Hopefully, the cookbooks are working. Yeah, it was good. Made it two and a half hours. So. Thanks. Субтитры подогнал «Симон»! | Architecting and Testing Controllable Agents: Lance Martin | 8,513 | AI Engineer | 20241011 | LLM-powered autonomous agents combine (1) Tool calling, (2) Memory, and (3) Planning to autonomously perform tasks. While they hold tremendous promise, agent reliability has been a barrier for large-scale deployment and productionisation. We’ll cover ways to design and build reliable agents using LangGraph, which can support diverse self-corrective applications such as RAG and code generation. But, just as critically, we’ll cover ways to use LangSmith to test your agents, examining both agent's final response as well as agent tool use trajectories. Collectively, we’ll talk about three types of testing loops you can incorporate into your agent design process - at run time, pre-production, and for production monitoring.
Recorded live in San Francisco at the AI Engineer World's Fair. See the full schedule of talks at https://www.ai.engineer/worldsfair/2024/schedule & join us at the AI Engineer World's Fair in 2025! Get your tickets today at https://ai.engineer/2025
About Lance Martin
Recorded live in San Francisco at the AI Engineer World's Fair. See the full schedule of talks at https://www.ai.engineer/worldsfair/2024/schedule & join us at the AI Engineer World's Fair in 2025! Get your tickets today at https://ai.engineer/2025 | 2024-10-12T08:54:56.121641 |
https://www.youtube.com/watch?v=4nZl32FwU-o | This video is a conceptual guide to how we think about multi-agent systems. So Lengraph is a framework for building agentic systems, and a big type of agentic systems are multi-agent systems. And so in this video, we'll walk through at a conceptual level what a multi-agent system is, some of the common different architectures, and some of the common things that you might want to think about when thinking about building these multi-agent systems. Just to set expectations, this is purely a conceptual video, so we will just be talking about concepts. Please see other videos and documentation for more technical guides on how to build multi-agent systems. When talking about multi-agent systems, it's helpful to start with what a single- system is. The technical definition that we like best is that a system is more agentic the more that an LLM decides the control flow of an application. However, for simplification we'll start from a more simple definition of an agent, an LLM that calls tools, as this is often the first place that people start when building agents. This is great to get started, but as you scale up the complexity, it can have some downsides. Some of those common issues are too many tools. The agent might have too many tools at his disposal, and as a result, it makes poor decisions about which tool to call next. Anecdotally, we've seen that around five to ten tools is the sweet spot for the maximum number of tools an agent should have. Second, the context could grow too complex for a single agent to keep track of. As you're calling more and more tools and having more and more interactions with the human, the context starts to grow and so it starts to overwhelm the context window of the LLM and it stops performing well. And then finally, we often see a need for multiple specialization areas in the system. For example, having a planner, a researcher, a math expert, a coder, things like that. And for performance reasons, it's often better to break these out into separate agents, rather than trying to rely on them all being present in one big prompt. Some of the benefits of multi-agent systems are that they are more modular, which makes it easier to develop, test, and maintain these systems. They can be more specialized, so you can create these different expert agents focused on particular domains. And then you also have more control. A big part of multi-agent systems is how these agents communicate with each other. And when you use a framework like LandGraph, which gives you really low-level control, you can explicitly control the communication patterns between these agents. Let's now talk about some of the common architectures we see when building multi-agent systems. We've already covered the baseline system, a single agent system, where there is an LLM that calls multiple tools. Another common architecture is a network of agents, where these agents each have their own individual tools, and they communicate with each other by deciding who goes next. Frameworks like Swarm and Crew AI are best known for this type of architecture. In practice, we've seen that this architecture is a little bit too loose in its communication patterns. If a single agent can route to any different agent at any different point in time, there's not a whole lot of control that you can actually have as part of this system. As a result, we've seen that these types of systems are often unreliable, take a lot of time, and cost a lot of money because they're making a lot of calls to the LLM. And so we don't typically recommend these in production. In a supervisor agent approach, you have one single agent whose sole job is to route to other agents. If we compare this to the previous network of agents, this makes it a little bit more manageable for the sub-agents. They can focus solely on doing their job. They don't have to think about who to call next. Who does think about who to call next? That's the supervisor agent, this node right here, which can communicate with all the different subagents. A simplified version of this is when you actually pass these subagents as tools to a central LLM. Your individual subagents are now just tools in a larger system. This is great because it's pretty simple. However, the downside is that when the LLM calls those subagents, what it's passing to those subagents is largely the inputs to the tool call. This means that the agents communicate with each other not through some shared state, but rather through the tool call parameters. We'll go into more on this later. In a hierarchical approach, you just keep on layering up these supervisor agents. So you might have one supervisor agent, which you can call into a separate subagent, but that subagent itself is a supervisor agent, and you can keep on layering this. This is really good when you're working with a lot of these subagents, and you can keep on layering this. This is really good when you're working with a lot of these sub-agents, and you're able to group them in more specialized ways. However, the most common architecture that we see above all is a completely custom cognitive architecture. It's not using an off-the-shelf supervisor or hierarchical agent. It's maybe borrowing aspects of that, but it's very custom to the domain that you're building. We see this level of customization and control being needed to get to production, and that's what we try to give with Lengraph. Let's talk a little bit more about how these agents communicate. One aspect of this is how an agent might call another agent, whether it's in a supervisor or hierarchical approach. There's two main methods that we see happening here. One is when the two agents share some overall state object, and the second is when the second agent just gets the results of a tool call from agent one. So in the first situation, you might have a shared state which has a list of messages, some artifacts, and any other keys that you might desire. And then both agents just write to that state. A separate way of doing this is where the agents communicate solely through tool calling. So agent one might call agent two, and it just fills out the parameters that it wants agent two to see as part of the tool call that it generates. Agent two then takes those parameters and nothing more, goes, does work, and then comes back with a final response, and that's passed back as a tool call response to Agent 1. If we compare some of the architectures that use these, this is the main difference between the supervisor and the supervisor with tool architectures. In the supervisor architecture, the overall state is passed to the sub-agent. But when the supervisor with tools architecture is used, it's just one LLM that's generating tool call parameters, and that's only what's passed to the sub-agent. One common question we get is how might you have two agents that have different states communicate with each other? The answer is that there just needs to be some shared keys that they communicate on, but all the other keys can be completely different so here in this example we have the first agent which has this foo key and let's say it's generating that and it's also using bar and baz it also has this foobar key which hasn't been set yet because that's what the second agent's going to set the second agent is going to read from the foo key it also has its own separate internal keys a b ABC and XYZ, but then it's going to return its result as foobar because that's part of the overall state that the first agent can recognize. The final communication pattern that I want to highlight is what happens when you have two agents that are communicating on the same list of messages. List of messages are a very common state that we see in LandGraph, and oftentimes you might have agents reading from and writing to that same list of message. These agents themselves might be doing tool calls, and then they're generating a final response. And these two different sets of messages, the tool calls and kind of like the internal thought process of the agent and the final response, you don't have to treat those the same. So you could append all of the tool calls and the final response to this shared list of messages. And you'll grow up a pretty big kind of like message list that has agent one's tool calls and agent two's tool calls. Another approach that you could take is just only put the final responses on this shared message sheet. You then could have a separate message list that just tracks this internal tool calls for each agent. That's it for this conceptual guide on multi-agent systems. We think they're pretty powerful to allow you to create more complex agentic systems, and we're excited to see how you use it. We're releasing some improved technical support for multi-agent systems, including subgraph support in Langraph Studio, so I definitely encourage you to check that out. And again, I'd also emphasize that the most common multi-agent architecture that we see getting into production is custom cognitive architectures. So while it's great to think about and read about supervisor and hierarchical agent systems, I'd really think about what type of system you think works best for your domain, and then just build that using a lot of the common techniques that we've shown here, but maybe not taking an off-the-shelf architecture. | Conceptual Guide: Multi Agent Architectures | 538 | LangChain | 20241016 | This video is a conceptual video that covers multi-agent architectures
Full documentation: https://langchain-ai.github.io/langgraph/concepts/multi_agent/ | 2024-10-18T16:34:29.338141 |
https://www.youtube.com/watch?v=Fm68I-phaiY | Hello community! You know, do you know that Harvard has a problem? I mean, a beautiful problem. But since they just got here the Nobel Prize here in medicine, Gary Rufkin, professor of genetics at Harvard Medical School, there's a lot on the line for Harvard. And now they noticed here that LLMs have suffer from incorrect retrieval, missing key information, and misalignment with current scientific and medical knowledge. And Howard writes, you know, additionally, LLMs struggle to provide contextual relevant answer. And it goes on, and Howard writes, specifically, LLMs have difficulty combining scientific factual, this means structured and codified knowledge, with tacit, this means non-codified knowledge. And those LLM-powered question-and-answer models that we have available, the best model on this planet, those models lack such multi-source and grounded knowledge that is necessary for medical reasoning, because this requires an understanding of nuanced and specialized nature of the medical concepts. And you might say, this is a real problem for Harvard, yeah? And if we look here at the test, and Harvard published here some tests, we have here a basic medical test and an intermediate difficulty and a real expert level test. And you see, if we look here at LAMA 3B, we are here with the basic below 50%. Intermediate, we are below 40%. And the expert is below 30% with a LAMA 3 8B. GBT4 Turbo is a little bit better, but we are about at 50%. So, for a medical system, this is not acceptable. A 50% performance is horrible. So, what to do? Now, Howard stated here that LLMs face challenges in complex medical reasoning, because such reasoning would require here a simultaneous consideration of dependencies across multiple medical concepts, and I will show you those concepts in a minute, plus a precise local in-domain knowledge of semantically similar concepts that can carry different medical meanings. And we will have an example on this. So I know what you're gonna say. You say, okay, so prevailing strategy to address these challenges here is of course, like in the community today, we have some RAC, some Retrieval Augmented Generation. Yeah, you know, Harvard discovers that also those RAC methods can provide multi-source knowledge. They are really vulnerable to potential errors. And Harvard goes on and says, you know, all those data repositories and the knowledge bases those RAC models draw from, they contain incomplete and incorrect information, leading now here in Harvard Medical School inaccurate retrieval for their specific cases. And Harvard goes on, you know, further, many RAG-based methods simply lack a post-retrieval verification mechanism to validate that the retrieved information is factually correct and does not miss some key information, especially here in medicine. So you see, now you understand when I say Harvard has a problem. Harvard says, you know, to advance now these LL-empowered EI models here for knowledge-intensive medical question and answer. It is now essential to develop models that can consider complex associations between several medical concepts at the same time, and systematically integrate multi-source knowledge, and effectively verify and ground the retrieved information to really make sure that the context, the relevance, and the accuracy is there. Because we are working here in medicine. We cannot allow to have one single mistake in those LLMs. So Harvard develops a solution. And this here is the beautiful study. We have here Harvard University, University of Illinois, Imperial College London, and Pfizer. Hello, Pfizer. Harvard writes here, knowledge graph-based agent for complex, knowledge-intensive question-answering in medicine. And they developed a new methodology and they write we introduce a knowledge graph based agent designed to address the complexity of knowledge intensive medical queries so what we are going to look now is we have a medical large language model with all the shortcomings, particularly in the area of REC. And we will now have a detailed analysis, how we can integrate the knowledge of a knowledge graph and bring them together to have a much better performance in medical reasoning. You know, these medical LLMs, they have kind of a problem because they have a lot of non-codified knowledge, whereas at a knowledge graph, we're looking at come up with a solution. Sorry, I mean, Harvard has to come up with a solution to integrate multiple non-coherent knowledge graph and somehow make them work with a medical LLM. So as I've shown you on this test here, the expert test, the question again, is there an interaction between a Hitchock protein 70 family that acts here as a molecular chaperone and the gene or protein implicated here in a particular case is something there related to a mutation? And there's only one answer, but in the test you have normally multiple choice here for the medical students. So great. Now they look here also at multiple choice test, but what I will focus is, of course, the open-ended question, where we do not have multiple choice, because in medical research you don't already have, and please choose from four answers, because one answer is guaranteed to be the correct one. No, we go open-ended. So, the goal is here to identify the correct answer using here an LLM and a knowledge graph to come up with a better causal reasoning in medical questions. And they say here is now our new idea. So, we have here a question. Let's say we have four answers or maybe no answer at all and then we have four phases we generate something review something we revise something and then we can answer everything so it's a relative simple concept but you know there's some inherent beauty so let me focus here on the methodology because this is great. So at first we are at a generate level. This is designed here to prompt the LLM to follow different procedures and I will show you these procedures for generating relevant triplets. And you know triplets from graph theory. So we have here as I told you two kind of questions, the choice-aware questions and the non-choice-aware questions. And in both, we generate triplets. Goal is here, extract medical concepts in the question stem of a particular question, plus here also have a look at the content of each answer candidate. Now, the concept extraction. The LLM extracts now a medical concept from the input question, which is this one here, identifies now all key entities and relationships that are relevant to the question, and this is done by the LLM. So, from a query like, hey, what is the relationship between hypertension and heart disease? The LLM identifies now the medical concepts of hypertension, heart disease, plus, you know, a triplet here in a graph. We have a relation included in the middle, and this relation here is causes, or is associated with. So, the generation phase is simply to create a triplet. So based on the identified medical concept, and I will show you how we do this, the LLM generates triplets in the form of HRT, where H is the head entity, R is the relationship between those entities, and T is the tail entity. This is what we know from graph theory. So for instance, a generated triplet might be here, hypertension causes heart disease, H-R-T. So we do have a structured format. Now, for those different kind of questions, if you read this publication, it's a beautiful publication, they're also focusing here on multiple choice questions. I will leave out here the multiple choice because I will focus here on open-ended question where we do not have a solution given. But there's nothing. So we have to come up, I mean the AI has to come up, with a medical solution. Involves generating triplets directly from the question. Without, we have answers A, B, C, and D. Okay, so now that we have generated a triplet, and I will show you how we do this, let's have now look at the second phase. This is now the review phase, or the review action phase. Here, it aims to assess the correctness of the generative triplets by, and now we have the connection to the knowledge graph, by grounding here the correctness of the triplets in a medical knowledge graph. We do this to ensure that the relations in the entity produced by the LLMs are factually correct and consistent with medical knowledge. So you see, we have now two different systems, an LLM and a knowledge graph, and now it gets interesting. And there is now a specific way to map entities between those systems. And Harvard decided to go here with UMLS code, Unified Medical Language System code. Each entity in the generator triplets is mapped to the corresponding concept in the UMLS. This step ensures that the medical concepts identified by the LRM are linked to the correct standardized medical terms via the UMLS codes, which can then be matched with entries in the knowledge graph. If you've never seen a UMLS code, for example, some real simple diabetes mellitus, you have a concept, unique identifier, a simple code base, then you have two official names, then you have synonyms, and then you have different sources sources and then you have a lot of treatment options and whatever. So here we have codified knowledge, standardized medical terms. And what we do now is we just jump here from a medical LLM over with this complexity and the semantic knowledge to a knowledge graph. No, we have, if you want, a filter. And we say, hey, everything that is consistent and has a unified medical language system code from the official directory, this filters out that only the correct medical terms come over to the knowledge graph. And if the medical LLM hallucinates some, I don't know, technical terms, that is not within the code, that does not exist, well, then it is filtered out and not in the next step. Okay, now we are retrieving pre-trained knowledge graph embeddings. And you say, what? Yeah, it's beautiful. So for each matched entity and relation, the pre-trained embeddings from the knowledge graph are now retrieved. So these embeddings capture now the structural relationships between the medical entities in the knowledge graph. And you say, hey, wait a minute, I have a knowledge graph and you are talking about embeddings? Yeah, if you're not familiar with this, never mind, this is for an old methodology. This is from 2013. This is the first time it happened. We have a lot of embeddings functionalities. I would recommend you this particular paper. This is the original Trans-E paper for translating embeddings for modeling multirelational data from CNIS and Google 2013, about 2013. And they face here the problem of embedding the entities and the relationship of a multirelational data into low-dimensional vector spaces. So they have a graph and they want to find, if you want here, vector embeddings in low-dimensional mathematical spaces. Here I've given you here if you want to read the paper in detail. It is so old that unfortunately now it is only behind paywalls of this big corporation, but with this link you can read it free of charge and in this trans embedding they give you the translation based model and they say give them a training set of triplets hlt composed of two entities head and tail this is the set of the entities and a relationship now they call it l not r whatever this model learns vector embeddings of the entities and of the relationships by doing here a simple optimization problem. So you see, we have a method to find the embeddings in a mathematical vector space of a knowledge graph. So isn't this beautiful? So what we have, we have now officially from the large language model from chat GPT or whatever you use, here we have the LLM's internal embeddings. If you use GPT-4, you have to pay for those embeddings. And we have now from the knowledge graph also structured embeddings as given by a trans embedding. So we have now a free text information and a graph structure information, and we have brought them at the same level of an object of embeddings. And now we have the task of fine-tuning the LLM on a knowledge graph completion task, as Howard writes. So what is this? This is a key aspect of the methodology is now to fine-tune our LLM on the basic knowledge of the knowledge graph that is now added to the system, which involves then if we have a fine-tuned model to predict missing relations or entities now back in the knowledge graph. I will show you this in a minute. But just on a theoretical level, we have now LLM embeddings and we have knowledge graph embeddings, because the task is to somehow bring those together that one system can learn from the knowledge of the other system. So how we do this? There is something like an alignment phase, and you know, we have always had this problem in the classical phase. So the alignment between the LLM's token embeddings and the knowledge graph structural embeddings, this is now essential to combine here exactly also the unstructured knowledge of the LLM with the structured domain-specific medical knowledge found in a medical knowledge graph. So how we do this? Well, it is easy. We simply add a projection layer, for example, now we do this on the token embeddings, which will adjust their dimensionality and their structure to match here the format of the knowledge graph embeddings. So this means we have a mapping transformation, or it learns a mapping that brings the LLM token embeddings and the knowledge graph embeddings into the same geometric space. This is great, we can normalize it and then we can work with those embeddings, because now those embeddings are really compatible, coherent structure. And in this way the model can now combine the semantic meaning of the LLM embeddings with the grounded, proven structural relationships that are encoded in the knowledge graph. So now the LLM's, token embeddings and the structural embeddings, they are now harmonized. And once the token embeddings are transformed, we have the classical attention mechanism that you know, that you love. Whatever transformer used that has attention, self-attention, multi-head attention, it helps now the model to focus on important parts of the input, and in this case it helps align the LLM's token embeddings with the knowledge graph relational information, and it simply means it assigns more weight in the tensor structure to the important parts of the triplet, now learning thereby how the LLM's understanding of a hypertension should map onto the knowledge graph representation of the hypertension and the similarities for heart disease and related causes. So now we have, if you want, a new embedding created. That is, beautiful. We call this the aligned embeddings of both systems. So after the transformation layer and the additional attention layers and the execution, the final result is now a new set of embeddings for the triplet components, which we call now aligned embeddings. And these embeddings capture now both kinds of knowledge. We have the semantic meaning of the pure text is now understood by the LLM, and the relational structure information as encoded by the knowledge graph in the graph form from the graph-based training is also now available in those embeddings of those aligned components. Isn't that great? So therefore, we have now an improved reasoning. Because we do not have any more one system LLM and one decoupled system knowledge graph, but now we have combined the knowledge of both systems. The aligned embeddings are crucial for tasks like knowledge graph completion, where the LLM has to predict missing relations between the entities and the knowledge graph. By aligning the two types of embedding, the LLM can now reason about the knowledge graph with an improved accuracy and an improved understanding, so that its horse has all now the structural information of the medical knowledge graph. To ground the knowledge. Let me give you a practical example. So if we say here this knowledge graph completion task in which the model attempts to fill in the missing links in a portionally complete knowledge graph and this is a problem that we have almost every time. Especially in medicine you have never a complete medical knowledge graph. There are always some parts of information missing. So we always have more or less partially complete knowledge graph for a particular topic for a particular task. And now if we can now fill in the missing links and make this knowledge graph if you want more complete, this would be great. So let's do this. The LLM is now fine-tuned in order to predict here the missing relation between the entities that already exist in the knowledge graph, or to predict missing entities in a relationship if only one entity in the relation are known. So, simplest examples, hypertension associated with, and then we have not yet the end node for this in the relation. Or we have question mark causes heart disease. And now we can predict the missing entities of those two elements. So you see, we complete the knowledge graph another step, another level. Great. So here we have it. We have now, since we have two systems, we can do now, if we have an improved knowledge, we can have now two options. We can either update the knowledge graph. So if the purpose of the system is to continuously enhance the knowledge graph, then the new predicted relation and entities that we found with the combined knowledge can now be added back to the knowledge graph. We make the knowledge graph more complete. Let's call it knowledge graph plus. This results, see my last video, where I talked about dynamic evolving knowledge graph that are now more complete, more up to date, can have a higher complexity level to have a better augmentation, either in higher complexity structures, plus all this insight gained from the LLM reasoning are now incorporated back into the graph for the future use. So if you want, we have here absorbed all the knowledge of the medical LLM, and we put it now in a new knowledge graph that is much nicer, much more powerful. Or we have the option B, like here in the publication by Harvard, the inside staying with the LLM only. They say, for example, you say, hey, I just need a better LLM for argumentation, for causal reasoning. So my goal is to get a Medical LLM Plus. So the knowledge graph remains static and unchanged in this particular option, serving only as a source of structural knowledge that the LLM can draw from. And you bring in, I don't know, 20 different knowledge graphs from 20 different medical sub-disciplines. You get the idea. Your Medical LLM Plus will be a LLM plus, plus, plus. Great. So in this case, the LLMs learn from the knowledge graph using the fine-tuning process to better predict relationships and answer the question. But the new insights are not fed back into the knowledge graph. So whatever whatever you do, great. Now, there is of course an option C, not described here in the publication, that you could theoretically do both. You update your knowledge graph and you get a better medical LLM plus. But then we would have complexities that i would like to show you in a later video so now we are after we have done the review and we have done all this now there is also the revise phase or action phase and this happens if the triplets that were generated are found to be incomplete those triplets are found to be incorrect. So what we do? Theoretically, you just throw them away. But, Harvard tells us, hey, maybe there is some hidden information in there. So what we do? We keep those, but we get them a special treatment. And now, since we have this super LLM, the LLM Plus attempts now to adjust incorrect triplets by proposing here new relations or new entities in our triplet structure based on related medical concepts. So whatever was learned is new, also integrated in this new knowledge base. And then the LLM will call this triplet iteratively, where the LLM continues to propose and refine the triplets until the triplets are now validated again by the knowledge graph. So you see now, if you have a standard knowledge graph that is not dynamically evolving and increasing its complexity based on the reasoning structure of the LLM. Then sometimes the validation by the knowledge graph you will reach a plateau given by the complexity level that a knowledge graph can handle but might be necessary to validate here a very particular special triplet configuration that an LLM generates. So you have to make sure, in short, that the complexity level of both of your system, the LLM and the knowledge graph, are about the same. Or you must know, hey, my LLM has a complexity level of 6 out of 10, but my knowledge graph has 8 out of 10 you have to know what procedures to follow great and then of course the last phase here is the answer phase beautiful so everything is great we have now a lot of different answers and now the model simply and let's go here as in the publication with the medical llm plus now this model selects now the best answer based on the verified triplets from the review process and now from all this revision if we found some better review stages so the final answer is determined by considering the relationships and entities validated now by the knowledge graph and this ensures that the answer is not only plausible for the LLM but also factually grounded in the medical knowledge as presented in the knowledge graph in a medical knowledge graph so you see the beauty of this I hope so so for my green grasshoppers I try to have here five important points for you. This is the summary for a beginner. What we do, we have an integration of LLMs with domain-specific knowledge graphs. And you have a certain knowledge encoded in the language model, and you have a certain, maybe different knowledge encoded here in the knowledge graphs. And now you want to bring them together, a better system. Maybe different knowledge encoded here in the knowledge graphs. And now you want to bring them together, a better system. Five points. Combine the free-form reasoning capability of LLMs with the structured domain-specific knowledge from our medical knowledge graphs, resulting in a system that can reason more effectively about complex medical concepts. I told you that the use here of these UMLS codes as an intermediary between the LLM-generated concepts and the knowledge graph allows for a consistent and accurate mapping of medical terms. The fine-tuning of the LLM on a knowledge graph completion task enables the model to bridge you the gap between unstructured text and structured knowledge also leading to improved reasoning and as i told you the alignment phase of the token embeddings from the llm and the structural embeddings from the knowledge graph allow the model to leverage both the semantic and the structural knowledge improving also the accuracy of the reasoning. And especially beautiful, I think, is the error correction via the revision mechanism. Now, this is a try and error. Maybe there's nothing there, and LLM does not find, is not able to solve or decompose here this complex triplet. But maybe it is, and therefore we do have an error correction. So this iterative revision action ensures that the model can correct its mistakes by leveraging additional knowledge from the knowledge graph leading to a more robust and accurate answers so nice ideas beautiful ideas integrated here and i immediately have five other ideas how we can build on this and further improve this but the goal of this video is just to make you familiar with this study by Harvard and Pfizer. Harvard calls this methodology the KGA Reveon so knowledge graph A Reveon don't ask me for what it stands and it enhances here the medical questioning answering by leveraging the LLMs, grounded now with the knowledge of the knowledge graph to the novel mechanism we just went through. And they state, hey, unlike the standard RAG approaches, our new methodology doesn't directly retrieve information from the knowledge graph. And this is so important to understand this. This is not RAG. Instead, Harvard writes, it uses the LLM to propose a potential relationship between medical concepts extracted from the question represented as triplets. This is not as easy as you might think, and I would like to show you here an example. But you see, this is it in a nutshell here on this new methodology what is so beautiful that the llm is not constrained now to existing relationship in the knowledge graph it can propose a novel connection based on its language understanding and as i told you if you go here for a medical llm plus plus plus or whatever this here really just uses here the knowledge of the knowledge graph and then you can exchange the knowledge graph with a different subgraph or a different medical graph or a different or higher complexity knowledge graph so you see depending on what system you want to nurture on you have unbelievable options in front of you and the code is rather simple let me show you this now before i show you the code i want to show you the result ladies and gentlemen the results are now howard chose us here also the multi-choice so if you have a b c d and you have to cross A is the right answer. Reasoning. My goodness. So the number of medical concepts, the complexity of the concept is here on the x-axis up to six medical if you take the same Lama 3.8b and you apply this new methodology, you see that you go from, I don't know, 50% to about 70% plus. Nice. However, if you look at a Lama 3.1 and you do the same stuff, you see that the general curve shifts upward here. Suddenly, instead of 0.5, we are now at 0.7, but the increase now by this new methodology is not as significant as it was here on the LAMA 3B, and unfortunately they do not have the data, or publish the data, of LAMA 3.2. Interesting is, however, that if you see the more medical concepts we bring in into the question, the higher is the complexity, the more multidimensional medical concepts we have. You see here, this is nice. Unfortunately, it's just one data point. But you see, the more medical concepts we have, the better this new methodology is, and the less potent the classical LAMA system is. You see, the higher the number of concepts, it goes down, down, down, it goes down, down, down. Okay, but if you go now to open reasoning, let's have a look at this. Again, LAMA 3-8B here, the light blue. And here you see this new methodology is not as good. And look, even if we have here multiple number of medical concepts that we integrate in the answering of our medical question, there's only the same performance with and without this new methodology. And you might ask, is this based on this particular system or is this because the methodology is not working? And then it's interesting, they give you here the second one here. And they have now the LAMA 3.1, the 8b here. And you see the more comedical concepts we have further it goes down but here it separates and it has here kind of an equilibrium but it also breaks down here if wait a minute if we reach here a certain threshold so this tells me there is a high sensitivity to the underlying LLM. If we go with a LAMA 3.8B or LAMA 3.1.8B, and hopefully with a LAMA 3.2.8B, we would see significant differences if we apply this methodology. But unfortunately, the latest models are not available for us. And Harvard writes here, they used here a cluster of four NVIDIA. I think it was an H100 GPU configuration. So it was not that complex to do all of these calculations here in this demo. And I want to close this now here with the prompts. And a lot of questions are received here. How complex are those prompts? So have a look at the prompts and a lot of questions are received here how complex are those prompts so have a look at the prompts this is here the first prompt the very first step here to generate here our if you want medical triplet but before we generate this you remember i told you they want here if you want to fill here the search base with enough information so what they do they say give them here a for example multi-choice question extract all relevant medical entities contained within the question stem so this means identify and extract all medical entities such as disease proteins genes drug phenotypes anatom region, treatment, whatever relevant medical entities there are. And put all of this in a list with the key medical terminologies. So you see, you have now not only one particular term that you look for, that is in the question, but you look for the complete embodiment, for the complete surrounding, the topological cluster around your particular medical term. So this contextual embedding, if you would like to call it, this is done here in this particular prompt. Second prompt is really we generate now here our triplets. So the prompt for generating now the action phase, so we say given the following question stem and the medical terminologies that we just created, so this is here the list of all the surrounding, if you want, generate a set of related undirected triplets. Each triplet should consist of a head entity, a relation, and a tail entity. The relation should describe meaningful interactions or associations between the entities in a biomedical context. And now you can even say that you want to specify those relationships. And Howard tells us here, hey, the relationship should be one of the following. Protein, protein interaction, carrier, enzyme, target, transporter, counterindication, indication, off-level use, energetic, whatever. So you define your valid relationship. If you want, this is kind of another filter that you want to apply for your particular task. You know here the complete set of possible interactions. And then the task is, hey, just generate one to three triplets for each option focusing on the ones most relevant to answering the query. Only return the generated triplet in a structured JSON format with the key triplet and a list. Oh, you have here head entity, relationship and tail entity. Beautiful. So what we have, we have here the question, the user question, then the list of the medical terminologies that we just created in the step before. And now we get a response by the LLM. So the LLM generates here the kind of complexity triplet structure. And here also, the better your LLM, the more the the llm is able to think and reason and detect and create complex more complex triplets the more intelligent the system could be yeah if you want to see the prompt for the revise action i was interested in this and just simply say hey given the following triplet consisting of a head entity of a relation of a tail entity, please review and revise the triplet to ensure it is correct. Or ensure that the rise triplet is trying here to find a new triplet configuration, new terms, new relations that might be voted correct here by the knowledge from the knowledge graph. And this is the end. So therefore, yeah, I hope I have given you here an insight into a complete new methodology that harvard harvard medical school is utilizing here in this research publication it's brand new for me that's just three days old and i think it's beautiful and i thought about replacing here all my rack systems also with this new methodology because it is so much more powerful and if i don't need drag maybe that's maybe not a bad idea if i can substitute it with a more powerful methodology but you see more or less simple idea formulated in a coherent way four phases and they show you hey we have a much better performance of our systems for medical question and answering of course they are not there yet that it will be 100 correct but this is already the next step i hope you enjoyed this video i hope it was a little bit informative and it would be great to see you in my next video hey siri wie spät ist es denn | Harvard Presents NEW Knowledge-Graph AGENT (MedAI) | 2,316 | Discover AI | 20241010 | Harvard Unveils New Knowledge Graph Agent for improved AI in Medicine. Called KGARevion, it combines the knowledge from knowledge graphs with the knowledge of LLMs.
Since RAG suffers from inaccurate and incomplete retrieval problems in medicine, Harvard et al present a new and improved methodology to significantly increase the reasoning performance of medical AI systems. Special focus on complex medical human interactions.
New insights and new methods to combine the non-codified knowledge of LLMs with the structural codified knowledge of medical knowledge graphs.
Detailed explanation of the new methods in this AI research pre-print (also for beginners in AI).
All rights w/ authors:
KNOWLEDGE GRAPH BASED AGENT FOR COMPLEX,
KNOWLEDGE-INTENSIVE QA IN MEDICINE
https://arxiv.org/pdf/2410.04660
00:00 Harvard has a problem w/ LLMs and RAG
04:20 Harvard Univ develops a new solution
07:24 The Generate Phase (medical triplets)
09:50 Review Phase of KGARevion
12:30 Multiple embeddings from LLM and Graphs
15:40 Alignment of all embeddings in common math space
20:48 Dynamic update of the Knowledge graph
21:52 Update LLM with grounded graph knowledge
23:15 Revise phase to correct incomplete triplets
25:20 Answer phase brings it all together
26:07 Summary
29:52 Performance analysis
33:39 All prompts for KGARevion in detail
#airesearch
#aiagents
#harvarduniversity | 2024-10-18T16:35:43.993799 |
https://www.youtube.com/watch?v=DBrvutJ367s&t=738s | Hello, everybody. Good morning, good evening, good afternoon, and thank you for joining this Neo4j Road to Nodes workshop in the final weeks leading to Nodes 2024. I'm happy to do the workshop together with my dear colleagues Alex, Thomas, and Will Tai. So I'm looking forward to this cool session. And what's it called? It's called Mastering Retrieval Augmented Generation with the Neo4j Gen AI Python Package. Another very long name, but I'm sure you'll see that it's gonna be very interesting, particularly mastering graph reg, or reg rather, in this case, with the Neo4j Python Package. Should be very very interesting it's a very hot topic very very interesting for everybody at this point in time so i'm i'm very much looking forward to today's session before we dive in i'd like to let you all know that we will record this session so you if you you know need to drop off at some point or if you want to watch something again you know we missed something or maybe something was too a little bit too quick for you you can always re-watch um materials like slides and anything else we will share with you also in a follow-up email so don't don't worry about this um it's good to see um to see already in chat a little bit of activity here. I see people from Germany, from Austria, from Bangalore. Great to see you all. Let us know where you're from and get the ball rolling a little bit. Make this interactive as well. So if you have any questions, any comments, please don't be shy. Comments and anything else, things you think are cool or interesting, please put them in chat. If you any questions in big marker you see a little q a icon where you can click on and add um any other you know questions you might have anything that's unclear and we'll have a look at it um we are uh also in the chat for for any other questions obviously and there will be probably some time uh some little break times where i can also take some question on on screen so um just you know if anything is unclear we can we can make it public um yeah i see more people italy belgium london france barcelona china stockholm malaysia australia sicily nice That's probably better than here where it's raining. Bedford, UK, probably also not very nice weather. Sorry. Another Germany, another Delhi, that's probably also very good weather. It's good to see you all, but let's dive in. Alex, Will, I'll hand it over to you and we can get going. Brilliant, thank you, Alex. I'm just gonna try sharing my screen just um could uh could you let me know if it if um you can see yeah not yet let's see um here it comes yep there we go you should all see now a web page with a um with a graph academy course on it brilliant let's begin then um so yeah thank you everyone um for uh coming to this course so um the name of this this workshop today is um mastering uh retrieval augmented generation with the Neo4j GenAI Python package. Now, just up front, I should probably say that we have added a rename of the Python package in the last couple of weeks. So the Python package is now called the Neo4j Graphrag package. You can still use the old name. So if you are using any projects and you're pipping, installing Neo4j-GenAI, that still should work, but you will be limited to version 0.5. So if you are going to build any projects off of the back of this workshop, it's the Neo4j-Graphrag package that you'll want. The purpose of this course today is to give you guys a little bit of an overview of retrieval or augmented generation. I'm going to go through some of the sort of core concepts that lie behind that, things like vector search, semantic similarity, full text search, and knowledge graph creation. And then I'm going to take you through kind of the, how you can build, use those core concepts to build kind of full retrieval, augmented generation apps using the Python package. So this course is going to be run in roughly two halves. The first half I'll be covering, but we'll be going through kind of core concepts. So semantic search, vector indexes, embeddings, full text search, and creating knowledge graphs from unstructured data. Then in the second half, Will Tai, who's also here, my colleague, is going to take over and is going to sort of take you through how we can kind of draw that together and build full apps using those core concepts. So this course is going to consist of several sort of lessons where I will sort of describe core concepts to you and several coding challenges. So as we go through, there'll be bits where you guys can get involved and do bits of coding. I really encourage you to try stuff out. None of the coding stuff here is particularly difficult. It's not going to require a huge amount of brain power for you but hopefully it should give you a chance to test the concepts that we're going to discuss today um so i think the first thing to do is to post a link to this course for you guys to sign up so this is a course on graph academy which is our uh training platform here at Neo4j. So I'll post a link in the chat, and then you guys can sign up and work through this course at the same time with me. So that should now be in the chat for you guys. Let's begin. Kim, I've just seen your comment. Yeah, there will be a recording of this talk available later. My apologies. Okay, here we go. So like I was saying, the first half of this course, the purpose is to kind of take you guys through some of the course concepts that kind of lie at the base of retrieval or augmented generation and at the base of quite a lot of AI and Gen AI applications. So in this first part of the course, we're going to cover vectors, using vectors to search for similar pieces of data. We're going to cover embeddings and vector indexes. And we're going to build a knowledge graph from unstructured data using Python and the Neo4j GraphRag package. All right, let's go so for this course we're going to be using um primarily uh neo4j the neo4j browser as well as um a git repo that we put together with some code snippets in it so i will post the uh the git this is the uh the python the uh the git repo we're gonna use. So I'll post that in the chat. You can use, you can of course just run these code snippets on your local machine if you want to clone it there, or you can use Gitpod workspace. So you need a, I believe you need a GitHub or a GitLab account to access this. This could give you essentially a Visual Studio Code instance in your browser if you don't want to clone this repo to your local machine, and you can run through the examples there. So let's open Gitpod workspace. So. Here we go. What this is doing is this is going to clone the Git repo for us and give us a workspace with all of the dependencies already installed and all the code already there. Yep. So here we go. Just take a while. I think it's just installing dependencies in the background. There we are. Yeah. So just give us a little time to install dependencies. So, So maybe I'll just go through this repo while these dependencies are installing. So, like I said, this is a Python repo with some of the coding challenges that we're going to be going through in the course today, split into two sections, which roughly mirror the two sections of the course. So this is the first section. So we're going to be going through first today. As you can see, there's two coding challenges. Each one has got places where you'll put your code in. If you ever get completely stuck, though, there are solutions here for each of these. So if you want to cheat a little bit, you can. But yes, we'll put some instructions in the chat there for those of you who want to run on a local machine. And Manjunath, you don't need a GPU to run this locally. It should run fine on There we are. Okay. So there is one thing we need to do in order to set up this code environment to be ready to submit solutions, is we need to create a copy of this.env example file. So this is just going to export some environment variables that we'll use for connecting to OpenAI and connecting to the Neo4j database. So we create a copy of that and we just call it.env this time. And then what we need to do is we need to fill in with our various, well, first with our OpenAI key. So I believe, Will, have you posted the OpenAI key to the chat? If you have your own OpenAI key to use for this course, please use it. If not, we have a key that will be available just for these two hours, and then we'll be removing it. So you can use that. And then we also need to post here the Neo4j database connection strings. So with this course we will be provided with a sandbox Neo4j instance so we just need to take the login details here and paste them the end file. Cool. Let's go back to the course then. So is anybody... Oh, I should say as well, you can run the test environment file, which should tell you if your environment is set up correctly. So if you've properly filled in all of these details in your.env file, you should get okay. Is there anyone having trouble setting that up? Is there anyone who needs me to go over there anything again? Or if not, we can move on with the course. I think we can maybe give it a second or two, but I don't see any particular question on this. But we could maybe we could, I don't know if you want to, it's a little detour, but we could look at this. This might be an interesting question from Bjorge. And he asks, what are the considerations between vector indexes and graph indexes? I'm not sure what he means by a graph index here. Yeah, I think probably he means why should we add a vector in, well, maybe you can elaborate a little bit, Bjerg, I'm not sure either. Do you mean the difference between vector databases and graph databases or yeah maybe maybe um yeah let us know what what you mean by that actually you're right if you mean um if you mean where where can kind of graph where the graphs sort of do better than like vector indexes on their own? We will cover that later in the course. So we're going to dive into kind of the advantages of using a knowledge graph along with a vector index. So that's going to be in the second half of the course, if people are curious about that. And this first half is going to be sort of core concept covering. Yep. Super. Cool. Super. Cool. I'll give it another like 30 seconds because there's some people in the chat saying that they're just waiting for Gitpod to launch. So I'll give you guys a little bit of time to catch up there, but I wouldn't worry too much because the first couple of lessons here aren't gonna be interactive. It's just going to be me describing a few core concepts. So you can sort of leave Gitpod in the background to load as I go through this stuff. Yeah. Okay. Yeah. Let's move on then for now. If anybody is really having trouble setting up GitPub, if it doesn't load in the next minute or so, stick another comment in the chat and we can go back if needs be. Okay. So in this first section, we're going to cover vectors and embeddings. So two core concepts in retrieval augmented generation or RAG, I'm just going to call it RAG from now on. So firstly, what are vectors? So vectors are simply a list of numbers. They can be used to mathematically represent many things in the real world. For example, the three-dimensional vector here represents a point in space. Vectors can also be used to represent many different types of data, including text, images, audio. And we typically use vectors with very high dimensionalities, often in machine learning and RAG applications. So embeddings. When we talk about vectors in the context of machine learning in AI we typically call them embeddings. Embeddings are vectors which represent complex pieces of information such as text and images. For example here we have an embedding that represents the word Apple. Typically in AI applications, we use specialized ML models called embedding models to create embeddings from our data. There are also versions of these models that work with images or text in languages other than English. And text embedding models, for example, which are the ones we're going to deal with mostly in this course, are trained on large amounts of text data. And these models learn to represent each word or phrase in a given bit of textual data with a vector that kind of captures the meaning of that piece of text or phrase. So semantic search. So semantic search is a very common component in many RAG applications. Semantic search allows someone to search a database for text which is semantically similar to their query. So for example, the phrase big boats is semantically similar to large ships. Even though both of these phrases contain completely different sets of words, they mean pretty much the same thing. So they are semantically similar. The way semantic search is done in RAG applications is using embeddings. So each piece of text in a database indexed with an embedding, which captures its meaning. And then when a user searches, their query is converted into an embedding, which is sent to the database. The database then compares this vector with its index of vectors and finds vectors which are close to it mathematically in some way. And you can see in there's a section just below this which kind of talks about that mathematical comparison process. I'm not going to go through it here because it's a little bit too in-depth, but if you want to know more about the mathematics, there is a little section after this. The database then uses its index after it's found these vectors that are similar to the user query vector to return the text associated with those close vectors. So like I said below, we have some sort of mathematical detail here about this. I don't think it's too interesting to go into here, so I will skip it for now. So are there any questions? So just to check here. Björg, I heard there's something called graph index. Oh yeah I'm not sure what you mean by that maybe it'll become clear as the course goes on. Yeah, probably. Yeah. I have one question from Sebastian. What's the difference between embeddings and indexes? Embedding and indexes. So the embeddings are just vectors, so lists of lists of like numbers that represent, say, text. Indexes here are the kind of data structures within Neo4j, within generally a database that are used to sort of search those vectors and to return appropriate data. So an embedding vector, long list of numbers, index, a data structure within the database to kind of search through those long lists of numbers and search through the text associated with those long lists of numbers. Cool. And one more maybe we can cover publicly here from Marie. Does the Graphic Python package support other LLMs besides OpenAI? We'll already answered, but just to make sure. Yeah, absolutely. So Mistral, Azure, Vertex, Cohere, I believe, as well now. So, there's a fair few others. Yeah, definitely not just over the night. Yeah, super. Cool. I'm hoping everyone's got Gitpod set up now because we're going to go through the next section, which is the first interactive coding challenge. So we're now going to try to semantically search a Neo4j database containing information about movies. We're going to semantically search the database. We can see some examples of the movies here. So if you look, we're running this Cypher query, and we can see we have some movies in our database. We have Toy Story. We have their plot lines, description of the movie, Jumanji, Grimpy Old Men. So we're going to be using this database for the rest of the course. We can also look at examples of individual movies. So if you want to investigate that. So here we're just looking at Toy Story specifically. Feel free to change this to another movie if you want to see the plot line and the title. The embeddings have already been created and indexed for this challenge. We will see in the next challenge how we can create indexes and embeddings, but for this one, we already have them here. And we can see the embedding we have here for the plotline of the movie Toy Story. So these embeddings have been created on the plotlines of the movies in order to kind of capture the meaning of the plot line so you can see here it's a very very big list of numbers like i was saying um this is the one for toy story plot line so this this embedding here represents this bit of text here So if we want to find other movies in the database with similar plot lines to Toy Story, we can compare this embedding here with other embeddings in the database using the next snippet. So this is what this snippet does. It takes the embedding from Toy Story and compares it with the other embeddings in the database and returns embeddings that returns the movies and the titles of, returns the titles and the plots of movies with embeddings that are similar to this. So we can see here the first movie, which is most similar to Toy Story, is of course Toy Story, because it's the same movie. We also have The Little Rascals, which is apparently a similar plot to Toy Story, The Neverending Story, and Drop Dead Fred. The scores here, so this is the third thing here in each result, are similarity scores. So what these tell you is how close the plot, the meaning, how close semantically the plot of Toy Story is to other plots in the database. So you see here for Toy Story, it's the higher this number, the closer the plot is to Toy Story. So the close to one, the higher, the more semantically similar that plot is to the plot of Toy Story. So it's 1.0 for Toy Story because Toy Story is the same plot. For the Little Rascals, it's giving a score of 9.92. So this embeddings model thinks that the Little Rascals plot is very close to the plot of Toy Story. I should say the plot description to be more accurate. So it's this text in particular that we're comparing, not the plot in some abstract sense. So again, feel free to try these snippets with your own movie. So you can compare Jumanji to other plots, compare Goodfellas, see which other movies in the database are similar to the plot of that movie based on the right-hand side to start up. So if you're having trouble running any of these Cypher queries, just refresh your browser and then wait a little bit. It can be a little bit slow. Okay, so on to the next challenge. So in the last challenge, we used our database of embeddings to compare movie plots to one another and to find those which were most semantically similar to the plot of the movie Toy Story. In this challenge, we're going to find out how we can create our own embeddings. As I mentioned a few lessons ago, in AI, we typically use pre-trained ML models in order to create embeddings, which is what we'll do here. And we're going to use a model from OpenAI. So have, again, that OpenAI key that was posted in the chat a few moments ago ready, as we'll need it here. So we can use the genai.vector.encode function from within Neo4j to create embeddings for pieces of text. So if we open those up in the browser, we need to replace this SK here with our OpenAI key. And then we can, so then we're going to create an embedding for this piece of text here, which is just text to create embeddings for. Feel free to change this to something else if you want to see what an embedding would look like for your own piece of text. We should be able to run this. And we can see we have an embedding here. But like I said, we can change this to something else. And you should see the embedding change. Now, creating embeddings is cool, but what we really want to do is we want to create our own embeddings and use them to search our database. So we can do this by combining the genai.vectorEncode function with the db.index.vector.queryNodes function that we saw in the last challenge. So once again, we need to input our token here. So we're going to see in our, this, what this is going to do, this is going to create an embedding for this piece of text. It's then going to compare that embedding with the plots of the movies in the database. And it's going to return the titles, the plots and the scores of the movies that are most similar to this query. So a materious spaceship lands on Earth. Hopefully we'll see some sci-fi movies up here. So yeah, 2001, Space Odyssey, Alien. Makes sense. It's talking about spaceships. These movies are kind of sci-fi and about spaceships. Okay, so on to the next challenge, are there any questions? I'll stop there just before we go on. In such a high dimensional environment the scores can always be quite high. No, I guess it depends, I mean the shouldn't necessarily affect it. The scores should capture the semantic similarity between two embeddings. So if you can test it out yourself, try, if you compare something completely unrelated the vector of something completely unrelated with the plot of toy story you should get a very low score you know to like talk about like i don't know the geography or something like that it will it will give you a low score hopefully with the plot of toy story it shouldn't it shouldn't always be 0.9 or above. There was another, I mean, Will is very active in Q&A, so I don't want to read out too many here, but maybe this could be interesting is for the embeddings, does it include multi-model embeddings? So image, voice, and videos. So we'll answer this with yes. So just so you know, you can create an embedding depending on the model, it is flexible. Yeah, and we'll see this now. So we're gonna create some embeddings for movie posters. Yeah, cool. So I'll continue then. I'll let Will, if there's any specific questions that you'd like me to answer, Alex, I'll let you jump in. Okay, so we've seen how we can create embeddings and use them to search a vector index in a Neo4j database, but what if we want to create our own index? So we already have a vector index for the Neo4j database, but what if we want to create our own index? So we already have a vector index for the plots of our movies, but as I mentioned before, we can also use embeddings to represent images as well as text. So let's try creating an index for the posters of the movies in the database. So we do have posters available. So let's take a look at an example first. So Toy Story here. So you'll be returned within the database the link to the image of the poster. So you click on it. When you run these examples, you can see the posters. Cool. So luckily, we already have embeddings created for these posters. We'll see soon how we can create our own embeddings. Well, we saw in the last challenge how we can do that. We'll see how we can create embeddings a little bit later using Python and upload them to nodes. These embeddings can be used to compare images for similarity. So in the last couple of challenges, we had embeddings on text and we were comparing these embeddings in order to gauge how similar two pieces of text were semantically, how similar they were in meaning. In this situation, these embeddings represent how similar two images are in kind of look. So two images that look roughly the same should have beddings that are mathematically quite close. So let's first take a look at the poster embedding. So like I said, this embedding here represents this image here. So now let's create our index on these embeddings. So to create an index within Neo4j, we could use the create, to create a vector index, we can use the create vector index command. With this command, we just need to include a name for our index. So we're gonna call it movie posters here. We need to let the command know which type of node we want to create the index on. So here we're creating them on movie nodes. And we need to let the command know which property on those nodes we want to create the index for. So we're creating them on the poster embeddings here. We also need to let it know the dimension of the vectors we're going to be using. So these vectors have 512 numbers in them. And then we need to also let the index know what mathematical function we're going to be using to compare these vectors. So I wouldn't worry too much about this part. I can go into more detail, maybe at the end if you'd like, but there is some detail, like I said, a couple of slides ago that goes into kind of the mathematics of this, but it's a little bit more for you guys to read in your own time. So let's run the second code snippet. What model did we use to create the poster embeddings? Oh, yeah. Apologies. It should be up here somewhere. We're using the clip model. Oh, maybe it's not listed here. Potentially. Yeah, we're using the clip model. The exact one I can list for you. Yes, opening our clip model. Okay, cool. There we go. I thought it for you. Yes, open an ad clip model. OK, cool. So it's in here. There we go. I thought it was listed there somewhere. Yep, so let's create our index. So we can run this code over here. And then we can see if it's been created here. Movie plots. We have our movie plots, vector index. We also have our movie posters index here. All good, it's online. So don't worry too much about the ID. As long as it's online, it should be good. So let's now use this index to search for some similar posters. So we're gonna look at the movie Babe and we're gonna compare the embedding for the poster of the movie babe with other embeddings in the database i'm going to see posters that are similar to the poster for movie babe so we run it here first let's see this is the poster for the movie babe and we can see the poster which is most similar to it is for the movie Lassie which I guess is kind of similar to the movie Babe so we can see them all here they're all kind of animals on fields so I think this is working it's a field, sort of rural kind of look. Okay. You can, so again, a sort of challenge for you guys, if you have time, is to sort of try this again with other movies and kind of compare, see if you can compare them with other posters in the database, see which movies have similar posters to the movie you select. Right, onto the next challenge. I made a quick one because it came up two times now. When comparing images, does the image size have to be the same to be able to compare or is this not so important no um just the we're comparing so we're not directly comparing the images we're comparing the embeddings for the images um as long as you're using the same set of embeddings it shouldn't matter okay yeah sense. Okay, so if you guys remember from a few lessons ago, we were using semantic similarity to compare two pieces of text to see if they had similar meanings. For example, the two sentences, the child is playing with a toy and the kid is having fun with a play thing, are semantically similar as they share a similar meaning even if they don't share many of the same words. Semantic search uses semantic similarity to find text with similar meanings to a user's query. However, semantic similarity isn't always the best measure to use to return the most relevant pieces of text for a user's query. So for example, if we are searching using domain-specific terms, semantic search can fall apart. For example, semantic search may fail to return anything useful if you ask for an unusual name of a character or a place in a movie, as it would have no way of knowing what this term means outside of the context. An example of this could be like if you were searching for something like what is the what is a movie set on the planet Tatooine. This query only has meaning if you know what Tatooine means which is a movie from the planet movie from the series sorry a planet from the movie Star Wars. If the embedding model that has been trained on this textual data doesn't have any knowledge of what Tatooine means and hasn't seen it linked to Star Wars in any way, it's not going to be able to semantically match embeddings for Tatooine with Star Wars plot lines, for example. So the embedding model needs to know these specific terms for it to be effective. So if you're searching with anything really specific, you can't always expect Semantic Search to perform well. Another example of where Semantic Search falls down is when a user query includes specific strings, such as names and dates, that need to be matched exactly for accurate results. So for example, with the query Oscar winners 2019, semantic search might interpret the query broadly and return conceptually related results, so such as films, Oscar films, or discussions about Oscars in general. But potentially, it might not return, it might return results of different years when what we really want is only information about 2019. So we need an alternative which allows us to deal with these issues. Full text search is an alternative which matches queries on exact wording including variations in spelling or grammar rather than on semantic meaning so full text search works by splitting a user's query in and into the individual words and comparing them with the individual words contained on a specific property on a specific type of node in our neo4j database and the database will return the nodes uh the nodes that can that who specified text property has the biggest overlap with the user's query so basically the results that have the greatest number of words in them that are also in the user's query. So this means for a query like Oscar winners 2019, a full text search would return results containing 2019. And as such, it might be a bit more likely to return relevant results than would happen with semantic search. And then finally, there are a range of search methods which combine semantic search and full-text search in the hopes of kind of harnessing the best of both these techniques. And we'll see an example of this later in the course. So now we're going to try and create a full text index on our movie plots and querying it, and we can see the sort of difference of how full text search works compared to semantic search. So creating a full text index is very similar to creating a vector index. We just use the create full text index command and feed it the name for the index we want to create, a type of node again, and a property that we want to create the index on. So again, here we're going to create our index on the plot of the movies. So we can run this. And now let's see if our index has been created. And we're good. It's online. We have our plot full text, full text index. So now let's try querying the index to see how it's different from querying a vector index. So again, this is a similar process to querying a vector index. We just use the db.index.fulltext.query nodes rather than the db.index.vector.query nodes function here. So what we're doing here with our query, we're querying the plot full text index. And this second string here is our query. We're querying the plot full text index, and this second string here is our query. So you can see here we're querying for 1375 Imperial China, and we have found a movie, the first movie here, Mr. The Warrior, which includes 1375 Imperial and China. So it makes sense. This is the most relevant movie. It contains all of the words in our query. As an extra challenge to anyone who wants to do it, try running this query with the vector index as well and see how it changes, see which different movies you returned with the vector index and with the full text index and see how see how they differ. And have a think about why that is. Why is this returning this movie here? And why might it not return this movie with the vector index? Alex, did you have a question? Yeah, I just wanted to read out a question, which I think is good for what you're showing here right now. It's from Rohan, and he asks, why create an index on the plot and not the plot embedding? Because this is a full text index, not a vector index. So with full text, we don't use embeddings at all. This is just using, this is just splitting the text on the plot line into its component words and we're comparing the component words of our query with the component words of the text in the plot line and we're seeing we're returning nodes which have the greatest kind of overlap so there's no embeddings involved at all. This is just directly comparing the words. So like I said, this has some advantages over using embeddings. For example, here we're searching for 1375 Imperial China. This search is not trying to capture the meaning of this query in any way. It's just seeing which plot lines in our database contain 1375 or contain imperial or contain China and returning them, or even sort of slight variations in spelling. So for example, if I accidentally included an additional character, this search should find words that are similar to imperial with a Z, for example. So this is not involving beddings at all on a full-text index. And the advantage of this type of index is that it will return things with these specific terms in them so um like i said a challenge is to to uh to try this query with a vector index and see which movies it returns and maybe ever think about why that is not maybe not returning this movie in particular um do we have any other questions or is that it? There was another question in regards to the score. What kind of score gives that back or how does it calculate the score? Because it cannot be, you know, as before. Yeah, it's quite a complicated process. I won't go into the exact exact details of it here i can send some uh um send some uh some links with the with the exact but it is quite involved um but yeah the score is the score is similar in meaning to the semantic similarity score so the higher the score the better the match basically okay And then maybe we do one more from Simon. And he asks, what speed difference do we see when comparing full text and semantic search? Embedding the search phrase probably takes some time. Yeah, this is a lot quicker in general, because you don't have that OpenAI request that you need to make. You don't have to send the data to OpenAI, you don't have to create the embedding, return it, and then send that to the database and then to compare that. So in general, the full text is quicker. Cool. Thank you. All right. On to the next challenge. So now I want to, for the sort of last kind of 10 minutes, I want to talk a little bit about unstructured data. So unstructured data refers to kind of information which doesn't fit neatly into predefined structures and types. For example, text files, emails, social media posts, videos, photos, audio files, web pages. And unstructured data is often more difficult to analyze than structured data. So we're going to talk in the next 10 minutes about how we can use Neo4j to kind of analyze unstructured data. So a little bit about vectors and graphs. So we've already seen that embeddings are good at representing unstructured data such as text. However, graphs are also a powerful tool for representing unstructured data. Graphs can be used to represent the entities and relationships within long pieces of text. For example, a graph could be used to represent the relationships between different products and customer feedback in a customer feedback system. Or a graph could be used to represent various laws and regulations inside a legal document. We call these types of graphs knowledge graphs and we'll also see later how combining vector indexes and graphs can help us build really advanced search processes which go a lot further than semantic search on its own. So for the next few lessons I'm going to talk about how we can build graphs from structured data and upload them to Neo4j. So chunking is often the first step when processing large amounts of unstructured data. Chunking involves breaking a piece of text down into smaller chunks, which can be more easily processed. This is a very common process when whatever database you're using, when you're importing data into a RAG application. Text data is broken down into chunks. After this, the usual next step is to create embeddings for these chunks. So we've already seen how we can create embeddings. We're going to do it here for some text data. So like I said, we'll see more how graphs can be used in RAG applications in the second half. But for now, I'm going to go through the process of constructing a graph from unstructured data using the Neo4j GraphRAG library. So first up in this challenge, we're going to use the Neo4j GraphRAG Python package to chunk a piece of text and create embeddings from those chunks. So we're going to go over to Git Pod now, and we're going to open the create embeddings chunk file in our code. And so here we go. It seems like it's falling asleep on me. I try refreshing it a little bit., can be a little bit slow. Maybe we can do one question while it's loading. From Danny, which score categorized as good? One is good and zero is bad or how does it work? Yeah, for semantic search zero is zero is these two pieces of text are not at all semantically related, and one is they're exactly the same piece of text. So the closer to one, the closer in meaning they are. For full text search, the scores can go above one, but basically they are, the higher the score, the more overlap there are in kind of words and characters between the two bits of text. So the higher the score, the better for full text search. Cool. We have our first coding example open. So we're gonna use the first paragraph of the Wikipedia description of London, because it's where I live, it's the city I like. But if you want to use another piece of text, feel free, just post it in here and you can chunk that up and create embeddings for it just in the same way. So first, we're going to chunk the text. So what we need is we're going to use a text splitter from the Neo4j Graphrag package. This splitter simply splits text into chunks, which are less than or equal to the chunk size parameter in number of characters. So this is the maximum number of characters we can create for a chunk of text. And which we also allow a little bit of overlap between the chunks. So we have a 10 character overlap here. This is to help avoid sentences being kind of split in the middle between two chunks. So each complete sentence will hopefully be contained in at least one chunk. It can also help when constructing the graph, as it can help to ensure each chunk has enough content to extract the right entities and relationships when we're actually constructing the graph from each chunk. So let's run this piece of code and see how this splitting process works. So we copy and paste it in here. We save. And then we run the createon bed chunks. So we should see now, you can see here this is split into different chunks. So we have the first chunk, London is the capital and largest city in both England and the United Kingdom with a population of, and then we can see the next few chunks, settlement, city of Westminster, the national government. So we can play around with these chunk size, we can have a bigger chunk size if we want. You can see here, this is the first chunk, the second chunk is here, the third chunk, and so on. Okay. So as you can see, we've split each chunk into sort of 200 character bits of text here. Each chunk also contains a little bit of the last chunk. So you see the first chunk here ends with a population. The next chunk starts with population of. So we have a little bit of overlap between the chunks. Just to add as well, there are many different techniques for chunking. So this is a very simple one here that we're using in this library, which just splits the text based on characters. But there are many different ways to chunk text. You can use embeddings to find the best way to split a document into chunks. You can split based on paragraphs rather than just on characters. So we can split this into two paragraphs, for example. We can split based on bullet points. So if you're doing this in the future, if you're constructing a RAG app, this is something you definitely want to experiment with, is your chunking technique. So back to the course. The next step we're going to take is to create embeddings for each chunk. Alex, did you have a question? Yeah, there was a question, I think an interesting comment. It would be interesting to hear what you think of this from Audrey. With the now large context window of the L the other lens is there really still a need to consider applied chunking um generally speaking it sort of depends on how big your uh your data is so and also in terms of like even with the large contact winds that windows um you know if you're sending a 50 page pdf to the llm it's going to take a lot longer to process and it's going to cost you a lot more so in general i would say yes particularly for bigger pieces of data you only want to really provide the llm with the uh the most relevant context rather than just a giant document. So this is definitely still a relevant thing for RAG apps. Super, thank you. Okay, so now we're going to create some embeddings for each chunk. So as I said, we want to use semantic search with our data. If we want to use semantic search with our data, we need embeddings. Again, we're going to use OpenAI models for this. I'm going to use the text embedding three large model from OpenAI. So we copy our code here. We copy it into the second browser window here. And let's see what our own embeddings look like for these chunks. You can see we've got a rather large embedding here for our chunk. Again, feel free to try this with your own data and see how it varies, see how changing the chunk size changes the embeddings, things like that. Okay, on to the next section. So before we continue creating our graph, I just want to go into a bit more detail about knowledge graphs. As I said earlier, graphs are essential to many AI machine learning applications. Knowledge graphs are a great way to represent the information contained in a particular domain. They provide a structured way to represent entities, their attributes, and their relationships, allowing for a comprehensive and interconnected understanding of the information within that domain. And we can tailor knowledge graphs for semantic search, data retrieval, and reasoning. And in the next challenge, we're going to see how we can build a knowledge graph tailored for a RAG application. So you can see an example a little bit here of a knowledge graph that represents Neo4j. And finally to add, large language models can also be used to create knowledge graphs. So, you know, for example, if we wanted to create a knowledge graph from a piece of unstructured text, unstructured data, such as a long piece of text, we can first chunk this text up. Text chunks can then be fed to an LLM, which can then extract entities, relationships, and properties from it. And then these can then be used to populate nodes and edges in a knowledge graph. So LLMs are very good. This is one of the things they're very good at is creating knowledge graphs from unstructured pieces of text. I'm going to see how this is done in the last challenge of this section. So we're going to open the, we're going to go back to the Git pod and we're going to open this file, the build graph file in our editor. So you can see here, once again, we have a few sections where we're going to fill in our code. If you first want to scroll to the very bottom of this page, you'll see the text that we're going to use to create our knowledge graph from. It's just a few sentences about characters from Dune. We're big fans of Dune in the team. But again, if you guys want to try your own text here, it can be fun to create a knowledge graph from, I don't know, description of your family or whatever you guys would like. Okay, scrolling up to the first snippet now. So as I said, the first text, the first step often in creating a knowledge graph is to chunk up our text. So we're gonna do that here. So once again, I'm gonna use the fixed ties text splitter class to do this. So we take our code here and we stick it in the first section. Next, we wanna create embeddings for these chunks. Again, we're going to use the same code we used in the last challenge for this. We're going to use the text embeddings-3 large model from OpenAI to create an embedding for each of our chunks To help guide the LLM, we want to list the types of entities and relationships we're looking for in our instruction text. So this helps, this means that the LLM won't just extract any old information from our text, but it will look for specific types of information and specific types of relationships between those entities so in our in our piece of text here we're looking for person we're looking for a house we're looking for a planet looking for organizations we're looking for son of relationships heir of relationships rules and member of relationships and we're looking for, what we're also saying, person can have a relationship with another person, whether a son of a person. We're saying a person can be an heir of a house. We're saying a house can rule a planet. We're saying a person can be a member of an organization. So if you go back down, if we go back down to the text here, we can sort of see why that is. We're looking for persons, so we're looking for people like Lady Jessica, Paul. We're looking for house, so we're looking for house Atreides. We're looking for a planet, so planet Caledon, hopefully. The LLM will pick up. We're looking for organization, so we're looking for the benny jenner's jeseret organization um so we want to we want to provide these this sort of um schema to the llm to help guide it to ensure that it extracts the right entities and relationships it otherwise it can it can just extract anything it likes and you can end up with quite a messy knowledge graph so this is really useful to help guide the LLM. So we copy this over to our notebook. Finally, so next, sorry, we want to add an LLM to this. So we're going to use GPT-4-0 here with a few model parameters set, which should help the LLM work a little bit better here. But again, this should work with any LLM. And as I said, we support Vertex AI, we support Mistral, we support other LLM providers here. Okay. Next thing we want to do is we want to create our knowledge graph creation pipeline. So this is the class the various processes in knowledge graph creation together. So the text chunking, the embedding, the LLM extracting, this class will do it under the hood for us and chain these processes together to correct struct our full knowledge graph. So we need to provide this class with our Neo4j driver to access our database, the text splitter we want to use, the embedder we want to use, the entities, relationships, schema, as well as the LLM. And then finally, we have a few uh a few other parameters so on error this just says ignore any errors that are generated and continue and from pdf so we can still use we can use this uh this pipeline to uh extract stuff directly from pdfs but um but here we're just going to use the text directly so finally we want to run the pipeline. So like I said we're not going to use, we're not going to extract from PDF, we're just going to feed text directly to this. So we're going to feed the text about June and we're going to run the pipeline to see how our graph looks. So we save. And we run. So you can see here, it's going through the process of chunking the text, creating the embeddings. And we should see in a second, okay, we're done in five seconds. That was very quick. So now we can run this command to see what our knowledge graph looks like. It should look a little bit like this. So it's a little bit messy. I think we might have duplicated ourselves a little bit here but you can sort of see we have we have chunks which contain which contain our text so we have the first chunk the son of Duke Leo Atreides and Lady Jessica Paul is the heir in the house and we can see pointing to these chunks we have entities of persons like Paul we have Duke Leo Atreides, which has been extracted from this chunk. We have House Atreides. So we said we were looking for house entities and we found House Atreides here, which is pointed to chunk zero. We were looking for planets. We found planets. And we've also found relationships between people. So we can see here Paul has a relationship here, son of, which points to Leo Atreides and which points to Lady Jessica. And we can also see we have relationships between the chunks. So zero to one. So we have our first chunk. So these are indexed from zero. And we can see this is the first chunk of text but we also have our second chunk of text so we can see that the relationship between the chunks as well here with this next chunk relationship so apologies this is i think i might have run this earlier and this is uh this has doubled up a little bit but this is a cleaner version of what this this should look like um you have like i said you have your chunk uh nodes have, like I said, you have your chunk nodes, you have your planet node, you have your house node. You kind of see how from this piece of text, the LLM has kind of extracted what we're looking for. It's extracted our knowledge graph. We have our chunks, we have our house, we have our person nodes, we have our house node, and we have our person nodes, we have our house node, and we have our planet node. So again, I encourage you all to try this with your own piece of text, see what kind of knowledge graphs you get. We have a couple of questions. Sorry, Alex, go ahead. Yeah, yeah, I'll just finish up here because this is the last bit. So there's a few bonus challenges as well here for you guys in your own time. once you have a knowledge graph um try creating um you know something try creating a vector index on the embedding nodes for these uh on the chunk nodes for this graph and try using uh you know the vector index uh db.vec indexvector.queryNodes procedure to query this property and try the same with full text indexes. Yeah, sorry, so this is the end of my section. Do we have any questions? Yeah, a couple here for this last bit I think I'd like to cover here quickly. Artul mentions here text chunks can be fed to LLM, and they might create one kind of relationship in one another type for the same chunk. So I think how to get consistency here is a question that comes up a couple of times. I think earlier you provided sort of labels and relationship types to to help help guide the LLM by saying okay this is this is what you should be using this is a this is how exactly this one here so this gives gives one this is one way of doing it a couple of people said can we use an ontology or can we use some some other guiding system to feed the LLM or to provide the LLM as a pillar to say, okay, this is how I already have defined my data. Can you please adhere to this one? Yeah, this is essentially what we're trying to do here. So I wouldn't call this an ontology, it's more like a sort of schema, but what we have here are like entities relations and the kind of relationships between them. So what we're saying have here are like entities relations and the kind of relationships between between them so what we're saying here for example is you know a person node can have a relationship with another person node that is sunoff a person node can have a relationship with a house node which is air of and what this does is in our in our prompt to the llm we say look for these sorts of nodes look for these sorts of nodes, look for these sorts of relationships, and don't return anything which doesn't match these sorts of nodes and relationships. So you can edit this and try it with something else, something else that's not in the data, and you'll see that it won't return those sorts of relationships. So if you added added i don't know like a dog node or something like that here entity there's no dog mentioned in in in the text so it won't return dog nodes so this is quite this is a very good way of like ensuring consistency this this really helps it's not perfect and we we we do want to do i mean in the future we're hoping to add some extra bits of type checking to the package which will really ensure that you only get what you want but this is a this is a very good way of ensuring that you you you're only extracting the sorts of entities relationships there's the sorts of you're only constructing the sort of knowledge graph that you want from the data um is to provide this kind of schema to it when you are constructing the knowledge graph yeah and i'm just gonna share this in in chat as well we have jesus and i have a series called uh going meta where we recently covered exactly this in a bit more detail using an ontology adding an ontology to give structure to a genai graph rack application so if you're interested in this more in more detail and you know maybe watch the the last of the previous episode to to find out a little bit more how this can be done yeah cool um i'm a bit conscious of time um so yeah thank you for thank you for um for uh paying attention hopefully that was useful um i'm now going to hand you over to will who's going to hopefully draw some of these concepts together um and take you through how you can use some of the stuff you learned in this section to really build like RAG applications. So I'll stop sharing my screen now and over to Will. Thanks, Alex. Apologies, let me... There we go. I think that's worked now. Hey, Will, over to you. Thank you. I'm trying to share my screen. Can you see this? I think it's working. I cannot see your screen yet. Okay. Now, here we go. Okay, there we go. Yeah. All right. Yeah, that was quite a lot of content for the first half. Hope you're enjoying so far. Just take a deep breath a bit, just relax, stretch out a bit. And yeah, I think if you're ready, we can continue with the second half, which hopefully ties things together. It's going to be a bit more hands-on, but we are going to take a look at the Python package a bit closer. We're going to look at the different retrievers. So these retrievers are different types of retrieval objects that will help you with your Rack application. And finally, we are going to build a multimodal application at the end by combining textual and visual data. So if everyone's ready, let's go. So just to recap what RAG is, it combines retrieval-based methods with genitive language models to enhance response quality and accuracy. So what does that mean? So the keyword here is, as Alex has mentioned many times, search. And imagine when you prompt your LLM to give an answer. Before it gives you the answer, it searches over the database and then finally returns the answer to you. So by doing this, you can generate responses that are contextually relevant and factually correct. So what does that mean? So there are two main benefits here. One is your LLM is actually frozen. You don't have to train your LLM. You only need to search over your database. It's quite like a neat alternative to fine tuning your data, for example. The second benefit is your data stays private here so your LLM does not have had to seen your training data because you can keep it in your database. So yeah now I'm going to show you how you can use our open source python package Neo4j for graph regack, for Rack. Let's move on. Okay, and just wanted to double check here that everyone is, everyone has their Git pods set up, because we're going to use it a lot for this section. If not, yeah, maybe Alex can try and help you maybe in the chat. So, so here, okay. Yeah, so the first retriever we're going to look at today is the vector retriever. So we have many different types of retrievers in our package. And you can select each different types of retriever for your different use case. The vector retriever here is the most basic one. And it allows you to search over your vector index. So remember the vector index that you built in the previous section. You can use this Python object to search over your vector index. And what does that mean? That means you can search over the different vector embeddings that you store in the database. So how does that work? So let's say there's a, so yeah, you've stored all the vector embeddings in a database, you've built your vector index on top of them. Now, let's say you have an incoming query text, for example. So what you would do is you convert this query text using the same vector embedding method that you created with your stored vector embeddings. You use the same method, convert your query text into a query embedding. And then now you search for the most similar vector embeddings to your query text. So what this means is, so it's an approximate nearest neighbor algorithm that's used for this retriever. And what do most similar vector embeddings mean? If in embedding space they're very close to each other, that means it's very close in meaning. So that will correspond to the text being close in meaning or semantically. And you can not only create embeddings from text, like I mentioned before, you can also use audio images, any kind of unstructured data you can convert into vector embeddings. So let's open up GitPod now. We'll take a look at the first example, Factor Retriever. So don't forget you need to have your.m set up. Make sure you have your OpenAI API key and the Neo4j credentials as well. Okay, so I'm going to go through this. So the first slide here is really mostly setting up, connecting to Neo4j using the Python driver. The first thing we need to do is initialize our embedder. So this is the embedder that creates the vector embeddings. Here we specify this model, text embedding ADA2. And as mentioned before in the question and answers, you have to use the same embedding method for all your embeddings. Otherwise, it doesn't make sense to search over a different embedding space. So we are going to copy this. Oh, sorry, this is incorrect, actually. It's OpenAI embedding, it's not sentence transformers. We're going to copy this, paste it here. The next step, so now we have an embedder. Next step is we are going to initialize our vector retriever. And I'm going to go through each of these arguments, what do they mean? So copy this, paste here. So yeah, there are a couple things that go into this vector retriever. The two most important ones are the driver. So this is the Python driver we'll be using. We're connecting to this database. The index name you want to search over. So if you have gone through, if you ran the cipher statements in the previous section, you would now have movie plots in your sandbox database. So this is the vector index. This is the vector index we're going to be searching over. Here I provided an embedder, which is from here. So it specifies like, OK, whenever we have a new query text, we're going to use this embedder to convert the query text into a query embedding and here this argument return properties be it is something we provide to allow you to sort of return the properties that you want. Okay, and then finally to put everything together, we will copy this part. Paste it here, and yeah, this is how we use the Retriever. So, we provide this GraphRack object, and this GraphRack object requires a a retriever and an LLM. So retriever for the retrieval part, the LLM for the generation part. And the LLM here you can specify any supported LLM. So here I'm using the OpenAI LL lm and so this is different from the embedder this is the lm that will so the embedder so the retriever retrieves the context and then finally the lm is the one that you chat with so um the return result will be generated using an LLM, which the context is provided to. So now let's run this and see, run this. So the query text here is give me three films where hero goes on a journey. And I call rag.search here, where I provide the query text and I provide top K, so top K5. So what that means is, what is happening here is the retriever tries to retrieve the top five most relevant or most similar results to the query text, and then provides it as context to this llm and finally returns this result so as you can see here it provides three movies the last action hero never ending story resolve wizard of ours nice so that was the vector retriever. Hope it's clear so far. So yeah, here's some important tips that I recommend. Yeah, just going to reiterate, it's important that both the query embeddings and the stored embeddings are generated using the same embedding model. So in our case, remember it was text embedding ADA2. And yeah, it only makes sense to search within the same embedding space. Second step is you can utilize arguments such as return properties. And there's also another one called result formats, I think, to specify which node properties you want return. Or even you can transform the return Neo4j records. And lastly, there's an important parameter here, top K. Given this is, so what is this is retrieves like the top K most similar records in your database. And given this is an approximate nearest neighbor algorithm, the retriever, what it does is attempts to find the top k most approximately similar to your query vector. This is different from an exact nearest neighbor search. So sometimes I recommend setting your top k to be a bit higher than the number that you want, just three here. So sometimes it's better to set top k to higher values just so you can make sure you retrieve the relevant results. And yeah, so that's the vector retriever. I'm going to move on to... Is it important to set the temperature to zero? Or is that something you would... I mean, it's done here, so I was just wondering if that is another tip. It depends on your application. I'd say temperature for the OpenAI models is something that determines... It just makes it a little bit more deterministic. If, yeah, the higher your temperature, the more varied your output gets. I think, yeah, temperature is zero here mainly to, so that I can reproduce the result, but it depends on your application, I think. Okay, thank you. Yeah, good question. Actually here you can specify the different model params as well that goes into your OpenAI model. Because yeah, I think like if you've ever used the OpenAI API, you can set other parameters, there's temperature, there are quite a few others, you can specify them here actually, it's quite flexible. And yeah, that was the vector retriever we're gonna see how we're going to we're going to do something similar with another retriever so you might be asking now where does the graph come in and the answer is here this is the second retriever the vector cipher retriever. It's basically very similar to the vector retriever, except you can leverage the graph structure. So what does this mean? I'm going to show you this cipher command. I'm just going to run this. Actually, let's go through this first. If anyone here is familiar with Cypher, or not familiar with Cypher, Cypher is the query language for Neo4j. And what we're doing here is we're trying to find movie notes with the title Jumanji. And then we have small m here. So capital M here means like it's a movie node. And then we can set a variable here, small m. And we say, oh, we also want all the actor nodes that have this relationship with the movie nodes. So this means our actors that have acted in the movie Jumanji. And then we finally, yeah, we want to collect these actors. So if I run this, you can try running this as well. You see on the side here, yeah, it returns. So this is what is currently in the database. Jumanji, and then you can see, okay, there are four actors here. Nice. So how do we use this for RAC? The answer is you can use it with the Vector Cypher Retriever. So it's basically, the Vector Cypher Retriever is kind of, you can think of it as basically the vector cipher retriever is kind of, you can think of it as like the vector retriever, but you can add cipher statements to it to query things. So how is this good? We can, this means we can combine the knowledge of our graph with vector retrieval. And when would you do this? I'd say you typically do this if you understand your graph structure and you want to leverage and some kind of structure in your graph and like fully utilize your knowledge of this graph. So yeah I'm going to show it here. Here it's the same thing as before. So okay, now let's move on to the second example. I'm gonna close this. So we're in a vector-stackle retrieval example now. I'm going to copy this. So it uses the same embedding model as before. So that's the embedder. Now, okay, we have this. So I'm just going to copy this. I'm going to go through this quickly. So, okay, if you look at the vector cipher retriever, it takes very similar arguments to before. So you have the driver, which is the Python driver, Neo4j Python driver. You have your index name. So this is what we're searching over, the vector index. We have embedded again. Now we have this new argument, retrieval query. And I'm going to go through this. I'll zoom in a bit as well. So this is very similar to the Cypher statement we're looking at before. And you might wonder, okay, where does this node variable come from? So node is sort of like used internally in this retriever. And and it first does it first retrieves the it first retrieves the nodes that are most similar to your query text that you see later on and then you can find the actors that have acted in the movies. And then you can return these properties. So you return the movie title, movie plot, and actors. And this will eventually be provided as context to LLM. So let's try running this. Okay, so we can run this here. We need this last section which is kind of the same as before except I just changed the query text. I've copied that. I'll paste it here. I'm gonna use the retriever now. So yeah, as you can see this is the same LLM. We pass retriever and LLM to the GraphRack object. Now I ask here, who were the actors in the movie about the magic jungle ball game? So I'm going to run this. All right, nice. So it says the actors in the movie Jumanji, which is about the Magic Jungle board game, are Robin Williams, Bradley Pierce, Kirsten Dunst, and Jonathan Hyde. Nice. It actually managed to use the four actors from here. And then I think you can see the benefits. So for example, if you try to run the same query text with the previous one that didn't use the Cypher statement, let's see what this gets. So here, yeah, you can see it actually listed four actors. Bonnie Hunt was not in the database, although I think she was in the film. But yeah, it was probably like information stored by the LLM. But because we wanted to use data from a database, if you run it using a vector staff retriever, in the database, it was only these four actors. And this is what the LLM output. So it sort of grounds its knowledge according to your database. And I think now you can start to see, OK, why this can be very useful because oh yeah once you start I guess this is where like the benefit of RAC comes in the output of the LLM will be based off data in your database not not data that the LLM has seen on its own during training. So yeah, that's for the vector cipher retriever. Yeah, I think like the tips I recommend are similar to before, make sure you use the consistent embeddings. Make sure you utilize result formatter here so you can maybe transform the result, the Neo4j records or result that you want. And for this retriever specifically, you can really leverage cipher proficiency. So, for example, if I really understand how your graph database is structured, you can fully utilize this retriever. Is anything unclear before we move on? Because I think these two retrievers are, once you understand these two, it's much easier to understand the others. I'm just going to see if there's anything. Yeah. gonna see there's anything yeah um there was another a couple of questions alex answered most of those i think interesting one is um probably from why um they want to retrieve small similar documents from let's say 2023 or something in index but the index contains documents from many many years can you um can you define like a filter that then says um only show documents uh from from this specific year or with this specific um you know range or let's say you know value depending on uh on your query and could could you do this before pre-retrieval or post-retrieval or both? So I think the answer is you can definitely do it for both. Yep. And for example, if you know how to do that in Cypher, it's a query language, so you can use it as a filter. You can specify, oh, I only want movies from this year. And then, yeah, your retriever will only retrieve those notes. You can definitely also do it in post-filtering, although you probably have to think about, oh, yeah, how would I? Yeah, whether that's cost-efficient, things like that. Because at that point, how would I, yeah, whether that's like cost-efficient, things like that. Because at that point, you would have searched across the entire movie database. And if you only do it post-filtering, it's probably easier, but may not be the most efficient thing. So yeah, there's definitely a lot of things you can consider if you want to do filtering. Yeah. Do you have any any validation ideas um you can think of validation meaning so when when something is like here in the the the the actors for this movie do you could we do some kind of validation of or some you know way of proving sort of where where this where this knowledge comes from so that you can say okay with some kind of validation of or some way of proving sort of where this knowledge comes from so that you can say, OK, on this, I decide no because of this and that. VALIDATION. OK, I think, yeah. I think it will all be provided as context to that. So yeah, I think you can be sure it comes from the F4G database. Yeah. Yeah. Cool. OK, cool. I'm going to move on. So let's move on. That were the two retrievers. The retriever we're going to look at is the hybrid retriever. So if you remember from before, there were two indexes we talked about. So first one's the vector index, second one's the full text index. And we can make that useful here. So yeah, if you recall the full text index, it leverages keyword matching. And yeah, so why not we search over both of these so we can actually do that so this hybrid retriever leverages this not only the vector index but also so see in vector search let's zoom in event so you can leverage not only the vector index, but also the full text index. And what the hybrid retriever does is it searches over both indexes, and then it has a merging step at the end. So it ranks the retrieved results across both indexes, and finally returns the top key nodes here. So yeah, I'm just going to cover this quickly. Yeah, I mentioned this specter index, we search over a full text index and find us like a merging ranking step. And when would you want to use this? So when your data is diverse, for example, like in the 1375 movie from before, in some cases it doesn't make sense to only use like semantic search, but you also want to have a keyword search. So in those kind of use cases, you can consider using the hybrid retriever. Let's look at a practical example. So here, we'll close this. So we'll get a Hybrid Retriever. So copy and paste this. You only need to initialize the embedded for the vector index. Same as before. So now here we're going to copy paste the code for the hybrid retriever. Here we look at the arguments again. So yeah it's the Python driver so we're connecting to this database. There's the vector index we specified. We'll be plots. We're using the plot full text index from before. So I assume you've created the index in Alexis section. And there's the embedder, which we provide. And like before as well, you can specify which node properties you want returned. Let's run this. This looks the same as before. Now to protect this. What is the name of the movie set in 1375 in Imperial China? So I'm going to run this. OK. We're going to issue. No full text index. OK, I didn't create a full text index, it looks like. So I'm going to go back to the previous section and run this. So now I can show full text index. I want to make sure it's there. Okay, plot full text is there. I'll go back here. Run this again. Okay, turning to an issue here. Okay, live demos. Okay, ideally if you set this up correctly, it should work. I'm just going to double check here, if it's here. Okay, let's skip over this part perhaps. I think, Will, it might be an issue with the full text index name. I think the capital letter might be wrong. Oh, yes. That's a good catch. So thanks Alex. So here, it was the wrong full text index name. So, run this. There we go. So it says the name of the movie set in 1975 in Imperial China is Musa the Warrior. And I think if you go back to try this with the vector retriever, you can see how it compares. It says, okay, the provided context does not include a movie set in 1375 in Imperial China because remember the vector index was built over the plot descriptions. So maybe, you know, the term 1375 or Imperial China somehow just isn't in the plot. And here you can, you manage like leverage like a keyword search kind of thing basically. Okay, so hope that's clear. I'm gonna move on to the next retriever. Actually, yeah, just before that I'll, yeah, reiterate. Reiterate on some tips. Oh, yeah, reiterate. Reiterate on some tips. So yeah, make sure you use your consistent embeddings. Make sure you use the correct full text index name. And yeah, to effectively, to build effective full text indexes, make sure you create your full text indexes on relevant properties so that you can leverage like keyword search. And yeah, I'm gonna move on to the next one. So the next retriever we're gonna look at is the hybrid cipher retriever. And this is very similar to previous retrievers. So basically for this retriever, you can also leverage the graph structure for your hybrid retriever. And if, yeah, let's take a look, let's take a close look at this set for command this is very similar to one before so you want movies with this title movie set a warrior bracket musa and then you want all the actors that have acted in this movie so here in the round brackets it's the nodes square brackets is the relationship and you can also like specify the relationship direction. Let's run this. So yeah, you see like, okay, there's these four actors here. And yeah, ideally you want to like give this as context to our hybrid retriever. So yeah, like this diagram is very similar to the one before. You have, you search over both indexes, you have a ranking step, and then, and then finally, what is the final return one is you have an additional cipher query that can, you basically do a graph traversal according to the Cypher query and then what gets returned is, yeah, according to that Cypher query. So yeah, typically when you use this as similar to before you have like you have you have um quite quite complex data so like you want to search over like uh some something that gives like semantic meaning something that has like keyword search and then also you want to like traverse the graph so let's take a look at this close this. So now on the hybrid cipher retrieval example, we define our embedder, same as before. So now we, okay I'm gonna copy this, we define the inputs into our hybrid cipher retriever. So same as before, we have our driver, vector index, full text index, this is the correct one. No capital T this time. Now we specify our retrieval query. So this is very similar to before, our cipher query. We want to take the movie nodes, find the actors that have acted in this, like the retrieved movies, and then return movie title, movie plot actors. Use this as context for the LM. Okay, wonderful. So I need to copy this last bit. This is very similar to the hybrid retriever question, except now I want to know what are the names of the actors in the movie set in 1375 in Imperial China. This is a little bit different from the one before. Before we only asked what is the name of the movie, now we want to know what the names of the actors in the movie now we want to know what the names of the actors in the movie in the movie is so let's run this and nice okay so it's specified that, okay, this is the movie you're looking for. These are the actors that have acted in this movie. And how does the other them know this? It's because it's taking a look at this database, your Neo4j database, and then use it as context. So, great news. And yeah, here are some tips that I recommend to use this effectively, same as before. Have consistent embeddings. Use your full text index wisely. Use it on data that you can leverage like keyword search. And with this retriever, so the hybrid cipher retriever, you can really leverage your cipher proficiency. So same as the vector cipher retriever. Meaning, okay, if you have cipher scales, you can really encode it into this retriever. Okay, so we're going to look at the final retriever now. Something quite different, but it's still a retriever. And it's called the text cipher retriever now. Something quite different, but it's still retriever and it's called the text the cipher retriever. And what this does is say you don't want to manually write out your cipher statements. So before you for the vector cipher retriever and hybrid cipher retriever, you need it to you know, understand your graph structure and then you know write out cipher retriever, you need it to you know understand your graph structure and then you know write out cipher. Let's say you don't have to manually write that out. What you can do is get an LLM to do it for you. So in this example you have your natural text here on the left so yeah use English here to describe what you want and then on the right is the output of the LLM. So what comes out is the cipher statement. And this is what the Texas Cypher Retriever uses under the hood. So the Texas Cypher Retriever would allow users to interact with the database using just plain language. So you know it makes it easier to use, it's a bit more accessible. Use it with care though, because with any LLMs, the output can be non-deterministic. So if you do not set it up right, you can still run into LLM errors where the LLM generates invalid cipher. So okay, I'm just going to run this cipher command. So here what we're looking at is we want to find actors with the name Hugo Weaving, so the actor Hugo Weaving and the movie, all the movies that Hugo Weaving has acted in. So here you can see its return is eight films and we want to be able to get the LLM to use this as context. I'm gonna go straight to the example here. Copy this. I'm gonna go to getPot. I'm gonna go to git pod, I'm gonna close the others. Let's open the text-asypher retriever, copy paste this. So okay let's focus on what's going on here right now. There's a lot of things here, so here we import a text-asypher retriever, we use the LLM that the text-asyp things here. So here we import a Texas Cypher Retriever. We use the LLM that the Texas Cypher uses. So if you look at the inputs to the Texas Cypher Retriever, there's the driver, there's the LLM. There are a lot of additional things you can provide as context to this LLM, just to make things a bit more robust, make sure it outputs the right cipher query. You can provide the Neo4j schema, for example, here. Okay, like if you already understand some kind of structure in the database, you can provide it as a prompt to the LLM. You can provide examples as well. So that's, I think I find this very helpful. So yeah, like all these are used as like context to this LLM to generate the correct cipher that can be used as a retrieval query. And finally, so now that we've defined the retrieval, we're going to use it. So this is the same thing. So yeah, the same thing here. We plug it into, we plug retriever and LLM into a GraphRack object. And then now we have this query text, which movies did Hugo Weaving star in? So ideally the LLM should use the eight films that we saw earlier. Going to run this now. Eagle weaving start in the following movies. And yeah, I think these are correct. So if you take a look at yeah, yeah I think that was correct. So I'll text the cipher retriever managed to generate its own cipher and retrieve the relevant movies that Hugo Weaving has started. So actually, if you want to contrast this, if you try to ask this question to the vector retriever, so the vector retriever doesn't use cipher, it just does it off similarities such as the vector index. Here, it does not know the context. So that's very helpful with the text-asset for retrieval. And yeah, here are some tips that I recommend. Yeah, make sure you provide clear inputs into your text asset for retriever. So whatever you can, like schema, example queries. And be prepared to handle errors as well, because it's using an LLM under the hood. It's non-deterministic, so the generated query could still be incorrect. Nice, just to recap a bit, we've covered five different retrievers and now we're going to look at a more practical example. So here we're going to use the VectorCypherRetriever, the second one, the one that does similarity search with a graph traversal. We're going to build a multi-model application here now. So okay let's take a look at the data that we want to use. Well, I'm just going to run this quickly just to show you. Here, if you look, I'm searching for movies with the title Homeward Bound, The Incredible Journey. Return the actors that have acted in this movie as well. So yeah, here you see like four actors. And if you click on the movie node here, capital movie, you can see, okay, there's a plot description. There's an embedding of the plot description that we created earlier. There's also a poster, a link to the poster. There's also a poster a link to the poster there's also a poster embedding so this was the image embedding that we created from the poster image before and we're going to try and search over this poster embedding actually so let's take a look. So here we initialize the embedder. Close this. Okay, first we initialize the embedder. So the embedder we are using is this clip vitv32 embedding model that was created using the one from sentence transformer. And remember I mentioned like, okay, you have to use consistent embeddings. It's because this post embedding here was created with this model. So whenever we want to have new query text later, we have to make sure we use this correct one. So I'm going to copy paste this. And now, OK, the retrieval bit. So close this. We can take a look. Yeah, here. We are going to search over the movie posters embeddings. So the poster index name is movie posters. We're going to use this retrieval query here. So after we retrieve, after we search like, okay, what is the most relevant images? We want to return the title of that movie, the plot, and also the poster itself. So okay, we have our retrieval query, we have our post index, now we can like properly use our I'm going to copy paste all this here. Got a bit of code. Yeah, most importantly, you look at the inputs into the vector separate retriever. You have our driver, index name, the retrieval query specified. There's this result formatter, which I didn't show before. But it basically transforms the return result from the initially the initial return result from the retriever and yeah it transformed and tries to return something in a more structured way so that we can use now the query text I was I'm trying to get is, find a movie where in the poster there are only animals without people. And let's try and see if this works. Here it is. So they should print out the content, the score, which is like the similarity score, and a link to the poster image. So yeah, I asked, okay, give me movies where there are only animals in the poster and no people. Okay we got three movies here Homeward Bound, The Incredible Journey, Lassie and Babe. So we also have to return the plots and then these images so if you click on the link okay this one all animals no people it's very cool. Let's open this as well. Okay, it's all animals. Oh, there's one person there. So not exactly correct, but hopefully mostly correct. This one as well. Bunch of animals. Okay, vaguely some people in the middle there. So maybe you can forgive the embedding here. But I'd say it was pretty close. So yeah, that was the multimodal search application that we built. And yeah, I recommend like... So here we're using image embeddings instead of text embeddings. Make sure you use the correct embedding model for your query text and your stored data. So, you know, if possible you can even like utilize the text index somehow, you know, if you try to be clever with your cipher a bit. And yeah, you can see like how you can integrate across like different multi-modalities of data. So yeah, that was it, gonna wrap up now. Congratulations, you successfully cover a lot from this workshop. You learned about five different retrievers. You learned about, well, you managed to build a multi-modal app. And you managed to figure out how to use the Neo4j GraphRack package. So yeah, go and build something cool with it and let us know how it goes. And yeah, as to finish off, like here's like a table comparing the different retrievers. Yeah, feel free to take a look at this as well. So I'm gonna, yeah, that's it for me. I'm gonna hand it back to Alex. Super, thank you very much, Will. Thank you also, Alex. Great session today, great workshop. Thank you all for watching. And sorry, we couldn't answer all your questions. I think we have a couple of open ones, but I think it was really good. We tried to answer most, but I think we were able to cover the most important bits here. And it was really good um we we tried to answer most but i think we we were able to to cover the most important bits here and it was great to you know especially this this table i think is a is a great finishing point here with all these retrievers we covered a lot um even if it uh time went by quickly uh so we had six different five different retrievers uh covered in this session and we also explained all the basics. So really, really good workshop. So yeah, I'd like to thank Will and Alex again for presenting, for preparing the workshop and for guiding us through. Thank you all for watching. I mean, like I said in the beginning, if you are wondering where to find the material and everything, there's going to be something sent to you with the recording, with the link to this Graph Academy course in the coming days. So you can also catch up on those later. If you are eager for more, then we have another Road to Nodes workshop happening next week. This time we cover Neo4j Bloom, which is Neo4j's visualization platform. So you can learn how to visualize a graph in there and how to make sense of your data in a more visual aspect. And then we're running this very workshop again in two weeks' time on the 24th of October, but in a different time slot. So if you want to come again, you can obviously join us again in two weeks' time. And yeah, this all leads into our big conference, online conference, Nodes 2024 on 7th of November. So we'd really like to join us there with lots of workshops, lots of sessions. We have a dedicated AI track with many, many presentations from Neo4j staff, but also from our many community members so if you're interested in that join join it and and have a listen to all the the interesting presentations you can check out the agenda already and see what what's interesting to you and yeah if you are you know wondering what to do next uh join our community at the forum or on our discord server you know look at other graph academy courses there is just one recently being published on building knowledge graphs there's another one how to build a chatbot with neo4j so i think that's a it's a great add-on sort of to this one so if you are now you know know how to work with neo4j and build graphic applications i think you could potentially build a chatbot with this knowledge and and create interesting applications with gen ai and yeah if you want to do more with neo4j beyond the learning platform check out aura or a free which is our graph database as a service we can just try for free and have a database running just for you and yeah i think with that we are at the end, right on time, a little bit over, but not too much. Again, thank you all for watching. Thank you, Alex and Will, for this cool, cool presentation, the cool workshop today. I hope you had fun, everybody, and you learned something, and yeah, see you around the world of graphs i would say take care and | Road to NODES: Mastering Retrieval-Augmented Generation with the GraphRAG Python Package | 7,331 | Neo4j | 20241010 | In this workshop, we use the neo4j-genai-python open source package for Retrieval-Augmented Generation (RAG) applications. Learn to integrate Neo4j with generative AI models to enhance graph-powered applications and AI solutions.
GraphAcademy Course: https://bit.ly/4dKPgyg
Repo: https://github.com/neo4j-graphacademy/genai-workshop-graphrag
#neo4j #graphdatabase #graphrag #llm #genai #workshop #python #pythonprogramming | 2024-10-20T20:35:29.900596 |
https://www.youtube.com/watch?v=RYuw4oq0G84 | all right good morning good evening good afternoon everybody and yeah welcome back to going meta season two episode one it's been a while we took a few months break over summer and then um a little bit before that to to refresh to to recalibrate and now we're back back with going meta hi jesus welcome as well thank you you. It's so good to be back. It's been a while, but it's great that we're here. Yes, absolutely. No, it's been a constant thing in the back of our minds, I think. Constant. Thank you all for you all for chatting to us regularly, saying when you're coming back. A couple of messages on YouTube and on other mediums so it was great to see that um that there is um you know there was demand and there was a felt feel like people missed us so that's good we've missed it a bit haven't we i mean it's true that it's been summer but it's a it feels you know like there was something missing in our in our beginning of the month exactly also also for for us too right so we had uh we had the the conversation uh last week um we said yeah i mean this was uh like a very constant thing uh also for us to check in and to talk about things and uh to share what we think and what we saw but what recently happened uh with you all so um it was uh it was a constant that uh that we were you know i think we're happy to to to restart now um at the end of uh of season one we we kind of wrapped up a little bit we summarized a couple of things so we we had uh one bigger summarization episode i I would say, which we did a recap episode. I think that was episode number 20. And then in the last episode, 27, we kind of like paved in the way and said, okay, we need to look into more outside of knowledge graphs, outside of just ontologies. AI became the topic, I would say, for the last six, seven episodes that we had in season one. And we kind of like couldn't really move away from the topic. It was just clear that we had to have that. We had to talk about it. Everybody was talking about it. And it just made so much sense. And that's a great segue for us to talk about these developments and these interesting things that's happening around in the space and technology around AI. And we had the perfect, I mean, we had a show, we called it Going Meta, and we had the perfect vehicle, I would say, to talk about these things and to combine things and to take all of these different different approaches and to mix it together so it was great that it worked kind of like perfectly even when we when we when i think when we started with the whole going meta series that we didn't think about that it would go this way but it kind of like became a more or less natural path that led into it yeah yeah yeah and i think with kind of a, how would I say, I mean, build some of the, I mean, we've talked from the very beginning about knowledge graphs, about ontologies, about semantics, and these topics have somehow been, you know, like many others, been absorbed into the whole frenzy about Gen.ai and brought up very, very, very interesting experiments and interesting analysis that we've done. And we're going to pick it up exactly what we left it. And also listening to what we get on the feedback. So this is the topic that we're going to be talking about today, something that people have asked about. We've touched on the knowledge graph generation, but mostly on structured data. And using unstructured data is something that kept coming up in the feedback. So I'm really happy that we can address it. And also that there's this great project that we're going to be spending time on, which is the LLM Knowledge Graph Builder, which is going to be the center of today's session. So yeah, really timely. And I think it's a hot topic and I hope people are going to find it interesting because we're throwing these additional element of ontologies into it. Yeah, absolutely. it this additional element of ontologies into it yeah absolutely yeah and that's that's that's great how we we still keep combining the the the spirit i would say because some people were worried and said oh not yet another ai show no it's not going to be yet another ai show we we obviously we include ai to the things we are you know caring about most and knowledge graphs and anthologies that's taxonomies all of these these topics we talked about uh in the in the previous season and we will continue looking but we we add to it uh a uh an ai part i would say absolutely and and and and those of you who pay attention and who have followed us, you will find some elements that we have reused. And that's something that I love doing, we love doing, and we've done it in the past, like taking a topic and rethinking about in the context of Gen.AI, LLMs, and combining things. Because of the of the topics that we covered are sort of building blocks that when combined deliver more interesting results. And that's also a bit of what we're going to be doing today. So really looking forward to the session. Yeah, me too. Before we dive in, I just wanted to for anybody that hasn't watched a previous episode so don't worry if you are new to the to the show welcome even it's it's it's not a prerequisite kind of like to have watched the previous episodes definitely not you can always watch them they're all available in in a in a repository and on our um on our youtube playlist so you can you can watch them there so but don't worry about it. You don't have to have watched the previous episodes. Again, we are conversational, but we also have a demo part. So Jesus has prepared a little bit of a demo as usual. So it will be more of a hands-on thing, a little bit of a show and tell, I would say. And we go into some hands-on part. Everything will be shared in a GitHub repository afterwards. So you can always check that out afterwards if it's a new code we developed and if it's something new for you to try out. And obviously, because this is live and we have the possibility to interact with you who are watching live right now. If you have comments, if you have questions, if you have anything with us during the event, then obviously let us know, type it in chat. I'll see what we can find some time to answer these questions. Usually we find some time for it either at the end or in a space where it makes sense to break for a bit of a Q&A. And if you have questions afterwards, the best place to go to is probably the YouTube video link and just use the comment function there and add some comments there. And obviously, I look for your questions, your comments there, and see if we can answer them either in a future episode. That's always good, like Jesus said, that we can take your input, take your request, take your something you have come across and you think, hey, this would be a good topic for the going meta episodes in the future. That's always great to have that. Or if you didn't understand something quite or something didn't quite make sense for you, then let us know. Cool. Excellent. Yeah. cool excellent yeah then um i think we we can uh we can dive in uh in today's episode fantastic thank you very much so i assume people can see my screen and as we used to do we'll keep doing i'll i'll um open the session with with a few slides just to set the context, to explain a little bit of what we're going to do. And then we'll jump into the hands-on part where we see things in action. And today, what we want to do is create a knowledge graph. And we want to create it out of unstructured data. That's something that many people have tried before. And many people have suggested that think that ontologies are a prerequisite, an important component in driving or kind of guiding what this knowledge graph is supposed to look like and we're gonna see that that's true and that we have tools to make it really... I would say you know and this is a bit of an announcement also for you Alex that's uh that's true and that we have uh tools to make it really i would say you know and this is a bit of an announcement also for you alex that i thought that we're going to split these into in two sessions probably these in the next one because it's such a rich topic and and i wanted intentionally to spend you know today's session in like a like i said before in the llm uh uh knowledge graph builder because you know let me move to the next slide you know the idea of of today is is pretty simple so we have data out there that's unstructured and by unstructured we mean you know text data video data audio data that uh that's not in the form of you know typically tables csvs json something that we can easily parse and write scripts to ingest into a knowledge graph, but rather something that requires some sort of heuristic or some sort of intelligent approach like what LLMs are doing these days. And this, you probably will identify some of these drawings here as part of the ones that we typically use on how we consume these unstructured data. We apply some LLM processing, whether that's through some API or through some application, and we use the output in a structured form to populate the knowledge graph. What I've done is I've injected there this element, which is the ontology. That's an optional element, but important one, as we're going to see today. And that's the experiment that we're going to run. The idea, obviously, is that the ontology, we've talked about this many times, but an ontology is kind of an agreed sort of schema, vocabulary, domain definition of a particular area of interest. So if we, let's say, we're talking about news, we might have people, location, events, and a collection of categories that we describe in an ontology. We describe also how they relate to each other, and we use specific terms to refer to them. And if we make that public in a standard format, like we've seen in many of our previous episodes, what we can do is we programmatically can use that to drive the construction of a knowledge graph, and we can use that to drive the construction of the knowledge graph, and we can use that to expose our knowledge graph to consumers, and that enables interoperability among other things. And we know that ontologies can take us further. But in this case, it's kind of a consensuated, agreed vocabulary, agreed set of categories that kind of describe what our knowledge graph is going to look like. So that's exactly what we're going to do and uh very timely uh we we have this great project probably the hottest project this day in in neo4j labs which is the uh neo4j llm um uh knowledge graph builder right so this is the the the labs page and uh and linked to it it's uh like all the labs page and linked to it it's like all the labs projects is an open sourced one so it's linked to the to the github repository so you're more than welcome to to go check it out contribute explore look at it at the internals but most importantly because you know the idea is that this offers a code free tool to do the whole end-to-end process. Let's take unstructured data in a number of different forms, and it will take care of it and build the knowledge graph for us. So you have the links there. It's very well documented, so I totally recommend you go have a look at that. But I've also done my research, and I know that you, Alex, have had a session a a few months ago i think i think was just before the summer in may or june with the niels and and morgan where where they took you through i mean they both were heavily involved in the in the creation of this of this project and and it's a very interesting session where they take you through you know what it is how it's built and how to use it, I totally recommend, and you have the link there to the recording of that session and the post that Morgan wrote on it. I don't know if you want to add anything on that, Alex, but I think that's definitely something that people can go and check out. Yeah, absolutely. So it was a great, great, great episode. So it's a great tool in all in all. So it really is, like you said, a no code, almost no code tool that helps you get any kind of data, unstructured data into a knowledge graph. And it's working super easy and super quickly. And you can use the predefined app. So you don't have to set up anything, just an Aura instance, and you can use an Aura-free instance for that, and you get going, or you can go further and you can, as the code is completely available, you can deploy it yourself and decide you wanna tweak it some more, but out of the box, it works really well with free tools, and you can just put in anything you think of and see where it takes you. And it is a really nice experiment to try to see what difference does it make to add this knowledge graph and to add this feature and then have a chatbot-like interactivity with your data within a day, I would say. Yeah, absolutely. And I think it was so convenient for today's session because I want to focus, obviously, that's the subject of today's session, on how we can inject this kind of ontology as part of the process. Because we're going to see that by default, you can do it without, and we'll see the consequences of that and the quality of it and what happens when we start. I mean, I guess we're going to see and I don't want to anticipate the outcome and you will come to your own conclusions, but we'll see that the whole process of extracting unstructured data, I mean, something structured like a knowledge graph out of unstructured data, it's not an easy process. It's simplified significantly with these sort of tools, but there's a lot of guardrails that we're going to have to put in the way in order to produce something that we can use. But like I said, let's not anticipate, let's show it rather than tell it. So yeah, check this content out because I think it's really good quality and really useful. So today we're going to be working with data solos that I've been meaning to use for a while. You see the URL down there. But if you've been to London, you might have seen that all over the city, there are these blue plaques, these kind of buildings that indicate that some significant person has lived there, has a connection to that building. And there's a database online that you can explore, that's the URL. And I think it's quite interesting. It's, if you do a search, you can filter according to different criteria, you will find them where the location is, and you will find a detailed explanation on when was it created, who is it about. And for example, you're seeing there the place where Jimi Hendrix used to live in London. So it's a pretty cool data set. But I thought it's one of the many unstructured data sets that you find out there. And because that web content is something that the LLM Knowledge Graph Builder can deal with, I thought that's a good one to use. So that's what I'm going to show today. That's the dates that I'm going to try to build a knowledge graph out of. And I thought I'd mention that there's an interesting aspect to this tool, is that it produces a quite particular and rich knowledge graph, a graph that's formed of two parts. And again, that's a reference to some things we've done in previous sessions. But the data, unstructured data, typically will live in documents. And these documents might have internal structure. If they do, that's great. And you probably remember this session where we looked at some legislation, a document that has parts and sections and definitions. So we could kind of map in this document structure graph the structure of the document and then deal with each of the chunks and get insights from it. In the worst case, if it's just plain text, we might just chunk it. I guess you're familiar with this idea of chunking. It's a way of producing fragments of a document in order to be able to then embed and enable vector-based search. That's something that we've been doing again and again. But this is something that's captured, and I would say it's kind of the lower level graph. It's a graph of the structure of the document. In worst case, like I say, it's a sequence of chunks associated with a representation of a document, which in our case will be the web page. But on top of it, and more interesting, is the domain graph, where we extract the entities that are mentioned in these documents. So for example, in that page that I showed a minute ago, we will have a note representing the web page about the plaque of the building where Jimi Hendrix used to live and the different chunks that the Knowledge Graph Builder creates out of the page. But then on top of it, and these on top of is more kind of my way of thinking, because in the end they're just nodes connected to all the nodes, we will see entities like, hey, there's a person, there's a node representing a person, that's Jimi Hendrix, there's a node representing an address, it used to live there, there might be other things that are mentioned there. And these are entities that are extracted from this content, and that's, I would say, the more interesting part. And the two graphs are linked together. And like I say there, they will not be exhaustive, and it will depend on how you approach the process. But more importantly, they're complementary. And they're complementary for a number of reasons. But you mentioned before, Alex, that one of the things that we'll typically do to test our knowledge graph is to run some kind of chatbot conversation with it and that's built into the knowledge graph builder so one way of approaching this interaction is by doing vector search on the on the kind of lower level graph the document structure graph finding matching elements and then navigating up to the domain graph and finding related entities so this is the idea of graph rag. We've talked about it several times and we've shown how it improves and augments plain rag that would be just based on the vector search. So that's the kind of end product that you will see. And I thought I'd mention it because it's a very interesting aspect of what the Knowledge Graph Builder does. So that's it as an introduction. So time to try it out. So any questions on that? I'm sure you're familiar with the subject, Alex. You've talked about it in other episodes. So while I move to the Builder, so that's what it looks like. So from the page that I referenced before, you have a link to the... You mentioned it before, we could download it. You could run it locally with the Docker Compose. It's very easy and straightforward. But in my case, I'm going to run on the hosted one. So I took the easy the easy route so that's that's what you see if you go to that url and all you have to do which i did up front is connect it to my uh to my uh instance right so that would be your aura instance in my case i'm using a sandbox but you have to uh to have a neo4j instance that's reachable from from these these application and that's really everything you need to do in case you want to just try it out without any previous work. You just create a Neo4j instance, or you have some instance that's available online, reachable online, and it can be a sandbox. If you have an Aura-free instance you are using for something else, you can spin up a quick free sandbox if you have no Aura-free instance you take the Aura-free instance if you have something else available as a play playground you can you could just connect it and and go go from there exactly so this is yeah this is like an empty blank sandbox that I just created. So this is the browser interface. So you see that it's an empty one. And when you create it, whether that's in the sandbox or in Aura, you will get the credentials in the form of user password. You can even download the credentials in the file. The builder is really well built. And now I'm connected. So if I click here, I would disconnect. But in general, you type user and password, or you just drop the credentials file, and it will connect you to your instance. And then it's really, really seamless and straightforward. So I'm connected. So my NodeGraph builder is connected to my instance. I see I have the chatbot here ready. Let's leave that on the side for now. And I'm going to go to our data sources. So this is the page that I was talking about before. I mean, this is the page about Jimi Hendrix, but this is the blue plaques, plaques, sorry, page. And what I've done, sorry, I think I went with music and city of Westminster. So if you do a search, you find... I mean, these have done it manually. Of course, you can script that and be more exhaustive. But for our experiment, I thought I'll just explore a bunch of those. And I picked some famous composers that happened to be connected to the city of London. Chilpan, Berlioz, a bunch of those. So I have the URLs of the pages, because that's all you need, right? You go to the builder, and there's actually support in the, not in the hosted one, but in the one that you run locally. You can also use YouTube videos, but I think we've disabled them on the cloud one. But what we're going to use here today is web sources. So all I have to do is paste the URL. So in this case, it's the blue flag page for Handel, who used to also live in London. So I submit that. It will scan the page, and it will show us ready to import. This is a bit of a manual process. I hope that it doesn't take too long but I'm going to... my plan is to paste three or four pages there. Of course there's the demo effect that might kick in but hopefully there you go so we have one document i'm going to add another one for chopin who um who else we have uh mozart who was also i've saved the the urls up front but yeah essentially what i'm doing is adding them here and all the one and just to i didn't click submit there you go to add a bit of uh more modern day stuff i thought i put also jimi hendrix and john lennon who happen to be to have their own plaques in London. Anyway, so this is the manual part of it. So you have to one by one submit the URLs. But once you're done, it should be all be very straightforward. So one more. Jimi Hendrix. And... for Jimi Hendrix and um there we go that's it so for five of them should be sufficient but you see how and and by the way this is uh um you can use it through the app like we're doing here in an entirely code code free but since it's open source, you can programmatically do exactly the same and get in a sort of a headless approach the same result, which is more aligned with what we've seen in the past. Good, so we have the list of resources here, and let's do one thing. There's a button here, a bit hidden, which is Graph Enhancement, where you can do something quite interesting, which is central to our topic today, which is define the schema. So by default, you will find nothing. So it's an empty schema, which means, you know, go ahead, LLM, and extract whatever entities you find. be uh uh but no guardrails or no any any further input of what to expect uh what would make sense from a you know human perspective correct so what happens if we if we do that let's let's take jimi hendrix the page about him of course we have to select the LLM. I'll go with GPT-4.0, which is the default selection as well. But if I pick one, what will happen? And I say generate graph. What we're going to see is that there's some processing going on, and the result will be populated on my Neo4j instance in the cloud. And we're going to see that the result, you know, quite expectedly, it's a bit unwieldy because there's a lot of information in that page. If I don't put any gut rails, you will identify whatever is in that page and try to make it explicit in the form of a knowledge graph. That could be all right for an initial exploration, but it's certainly probably not what you want to do if you want to create a graph that's manageable. I guess that's what you can expect. As you mentioned earlier, the text will be chunked. Individual text lines or paragraphs will be isolated and linked together. And then each of those gets almost like a keyword or something or highlight okay and then you identify the things and that's what makes the graph it becomes yeah yeah exactly not very clearly defined yeah the problem is that it will you know it will be we should start seeing results here. So yes, you see the notion of a document with the chunks. The chunks are connected with the sequential relationship, like next chunk. And this is what defines what we said, the structure of the document, which I think it's probably a bit less interesting here. But we see that there's a number of entities that have been identified. So we've events we've found persons we've found instruments places songs that all these make sense the thing is because we didn't put any any schema definition any sort of shared vocabulary it will come up with the names that things are are are right and now the problem is you know when we reload the next document, that might grow without control. And that will define a graph that's hard to use. But not only that, this is, I think, something that I wanted to share after experimenting a little bit. And it's the fact that web pages these days, we should see here that the Graph Builder is done. So that's completed. We see here that it's done. The web pages contain a lot of noise in a way. So remember that we were, I think we've loaded the Jimi Hendrix one. And what I'm interested in is the key elements here, the text part. But there's lots of information in it. And that puzzled me when I was looking at it. For example, I said, well, there's a number of places, let's look at that. And we find that there is a bit of a number of unexpected things. For example, we have a mention of Stonehenge here. Wow, that's interesting. I mean, I know he played at the Isle of Wight festival, but was not aware of Stonehenge, right? And there's Dover Castle. And I said, what is that? And the thing is, you can expand it and see what it is related to. And in this case, it's kind of an orphan node. So it's an entity that has been extracted and you can look at the these are the chunks. So you can look at the these are the chunks so you can look at that's very very interesting because that gives you additional information on where in the page that comes from and you immediately see that hey that's not the content that i was after that looks like the header is like looks like the footers and all these things and if you look at the page guess what it is so if you click on the visit oops why is going on why it doesn't show it's interesting there you go so there you go stonehenge dover castle that's not i mean that that's part of the menu and you know fair enough that that's so all this to say that you know be careful when you load web content directly from the pages because you know these days and this is not the worst i mean i've tried i've experimented with the catalogs of the Tate Modern or the National Gallery and the way they're built, there's so much additional information that you will be surprised to find things that you were not expecting. So probably some pre-processing, maybe an idea would be to extract the relevant context, the content, the text, and put it in the form maybe of files, right, in an S3 bucket or something like that and consume that text rather than directly the web page. Do some noise before. Exactly. So this is just a kind of an observation and I suppose that's the way, if I remember correctly from the session that you run with Niels and Morgan, I think they use the Langchain loaders. I suppose these are standard loaders that would go in and suck all the content from the web page, all the HTML you don't know upfront. That's part of what we're going to be doing today. We're going to put some guardrails in there. We're going to say, when you import information, there might be places, there might be other things, but I want to take control of the kind of things that you import, because otherwise, it will be hard to manage. And we see, for example, there's a band member relationship. So we have a Gatorade that's a band member relationship so we have a a gatorist that's a band member of so you know uh kind of questionable in a way so it highlights the complexity of extracting something uh like a knowledge graph out of unstructured data so it takes this normal approach which is you didn't give me any any hints on what we're after so i will take you know every subject every predicate every object and i will build the graph out of it. So somehow that's the approach that it takes. And it's not wrong, but I want to kind of guide that a little bit with my ontology. Does that make sense? That makes total sense. Yeah. And I think if you have in a world where you only have one document, maybe this approach actually works as is. But like you said, if you add two, three, four documents, it'll become quickly a little bit messy. And already now we just loaded one document that's relatively, I would say, not very complex in essence, but already you have, and now the relationship type spent away, but we had, or away but uh we had all the labels we had like 20 i think different different relationships already um created and and node labels uh we have here what do you have 10 or so maybe um so it imagine if you upload 10 or so files and this grows almost indefinitely then you have quickly you have a huge mess and you don't really can follow anything because it just becomes very very unstructured that's right that's right no and that that's exactly it so so one one thing that i did is okay so i'm gonna go to this graph enhancement and I'm going to try different things because, you know, it's interesting that they offer already a number of interesting options. So I can say, okay, I want to get the schema for my text fragment and I can take, I copied the text of the description about, I think, Handel. So I copy the text there, right? So I can ask the builder to analyze that and extract some potentially relevant entities and relationships. So it's kind of learning in a way an ontology. I mean, it talks about a person, a location, musical works, events, and there's relationships. And I can say, I'm interested in these, in those. I don't want these. So I can introduce a level of control. Now, I can even enter them myself, or I can go and take a standard vocabulary. And one thing that I thought would be an interesting one to do is go to a very popular one. I think it's one of the most... I don't know if people would call it an ontology, but it's a controlled vocabulary that's shared in the form of an ontology with a collection of classes, relationships, properties, etc. And it defines a number of schemas, entities like events, persons. So it's kind of... I thought it would be a good idea. And we've used it in the past, Alex. You will probably remember in previous episodes. So it defines entities and terms for some of the things that I want to extract from the pages that I'm exploring. For example, I know we're using... We have addresses, right? Where's the plaque in London? So there's the notion of an address. If I do a search here, there's a postal address, which I might want to use. So I might want to, instead of randomly giving it a name, I'm going to use postal address according to the schema.org vocabulary. So I can come and say, okay, I want to use this term and I'm going to pass it to the to the schema.org vocabulary. So I can come and say, okay, I want to use this term and I'm going to pass it to the builder. So I want it to extract postal addresses, right? That would be one category. Another one is we've talked about these are musicians and there will be probably compositions, right? Creations, right? Creation, yeah. Music composition. There's the notion of music composition. So, I mean, I could go on like that, but there's a bunch of these already in schema.org, which I'm keen to reuse. For example, there will be music compositions, and I realize I'm not passing it as a formal ontology. And we'll see the problem with that, because at the moment in its current version, it supports the definition of labels or categories, as we call it. Labels is the way we define categories in Neo4j and relationships. But for example, you cannot specify the properties inside these nodes or inside these labels, or you cannot say how these relationships connect things. For example, in this case, I would expect people to appear. So there will be persons, and I suppose there will be the notion of person. It's schema.org person. So I can copy that too and I will add it to the to the builder so I know that a person is connected to a music composition through the relationship author I believe it is so we can we have music composition or we can have even the composer here that's uh the person organization who wrote uh composition maybe composer is a subcategory of of authors so that's that's a person connected to a music composition perfect so we can we can use that so we can say composer is the relationship that i want to to extract as well so I'll put it on the relationship but I'm not being as you know you can see that I'm not being explicit about composer it's meant to be between person and music composition so it's a relationship that will be there and well I could go on right but I could create my own and I will do. For example, if you notice, an interesting thing is that at the top of the page, there's a description of which government entity. In this case, it's English heritage, but might be London or whatever. So there's an organization that erects the plaque and I was interested in that so I wanted to maybe capture maybe the government organization or some form of organization I think there was an organization I believe there's even a government organization somewhere so there you go. So there's a government organization. Well, anyway, like I say, if there's an existing one in schema.org, I'll pick it up and pass it to the Knowledge Graph Builder. And the connection, there's obviously not a relationship for this government organization erects that plague so I'm going to create my own so it doesn't need to be all coming from I'll take care of that later on specifying that this is not coming from schema.org but created by me what else so we have music composition we have persons and um uh well there you go that that's that's just an example but i'm i'm feeding the the somehow a part of the ontology to my uh to my knowledge graph builder and we're going to see what the effect of that is going to be so let's make sure that i I'm running the same as I'm running my test. So we have a posted address, we have a music composition, a person and a government organization, and then we have the composer. I think I used author in my test. Let's see how this goes with composer. And erected plague at, which is just to capture that line so okay so let's use that so i just need to close and i'm going to run uh the generate graph for the remaining uh pages right so if i run that that will um looks like it's running two in parallel so we should be seeing uh the the results popping up in the graph but this time it should be a bit more controlled. It should be a bit more less, I mean, I wouldn't call it random, but a bit more using the specified relationships and categories that I've defined. So I'm going to close some of these because I have way too many. And let's give it a bit of time so it's chunking. So now what we're going to see is that yes, it's going to use the terms that I've passed, but then I have to double check that these terms are being used in the intended way. So for example, like I mentioned before, that a composer or author connects a person to the music composition and not the other way around or not in any other way. That's not what I'm expecting. Let's see. It's taking a bit of time. Let's give it time to process. Do we have any questions so far, Alex? Just to give it. Yeah, we can take the time and answer some questions. So first, a couple of comments from YouTube. First, last, they say awesome, that works. And they built something, similarly, they tried to build something where they feed unstructured data into a graph, but they weren't aware of this tool. So maybe that helps you define your work a little bit closer and helps you. You don't have to hack it together. So that's great. Hopefully it helps you on that journey and maybe you together um so that's great hope hopefully it helps helps you on that journey and maybe you can let us know what you uh what you do with it and how how you proceed and there's a question from cameron does it make sense to introduce an ontology of a web page as organ is organizing principle and then derive further patterns, ontologies from the unstructured data itself? Sure. I mean, I think we've kind of, you know, sort of didn't anticipate that question, but it's a little bit what we did, maybe with one single, it was a one shot exercise. So we took just the content for, I think it was for Handel and you put it there and it will analyze and try to give you the kind of entities so you kind of learn it from an example and then you can decide which of the categories and the relationships you want to keep and the ones that you want to do. So I would say you can learn it from your data, yes, or you can either because of course we have to think how people would use it in the real world, right? So let's think of an organization, right? So you have an enterprise ontology, you have an agreed vocabulary, you have a catalog where you have standard canonical terms for certain things, so you want to use those terms in your knowledge graph. So it's very common the case where that exists and you might need to extend it for certain elements, but in some cases of of course, you will just literally learn it from your data. And what I wanted to convey here in today's session is that it will be an iterative exercise. And I'm a bit surprised that this is taking that long. I mean, this is the danger of doing it and run the other two. Should we? Yeah. I don't know if we can. But sometimes, yeah, I mean, it's. Cancel these and maybe process the other two. So we have a document processing. Yeah, I should have probably gone with the local install. It will be a safer bet for for a demo but it's um but we'll see that's um you know it's worked of course during the test as it always does but yeah we have some more questions so um so maybe we can we can see hopefully this this uh this goes otherwise we'll uh we'll see what we can do but yeah sometimes this is a bit um yeah unreliable uh with a demo unfortunately um the the sorry now i'm dropped my yeah here from himans, they ask, how to validate the relationship generated by the LLM for knowledge graph creation? You are reading my mind. I mean, that's exactly what I'm going to show the moment we get some of these loaded. Let me start again. So if I delete the files, maybe, I don't know if that's going to help, but delete the entities. And let's put it again. Let's take a single one. For example, the first one that I had for handle. Because that's exactly what comes next. Let me paste it here, hit run, and then talk about it. So submit that. Yeah. I don't know if it's a question of limited resources. And again, I'm sure you will have an entirely different experience if you run it locally, and that's probably what I should have tried. But anyways, yeah, sorry. Generate this graph. I hope we've preserved the schema. That's good, yeah. But yes, I mean, that's exactly what comes next. So what I'm going to show is that, as has happened in the test, some of these, I mean, the relationships that I've passed as input are used, but sometimes not in the right way. And when we see cases, oh, that's great. That seems to be progressing a bit better. So we see cases where I don't know where I are. Let's now put maybe another one. I don't want to be too over optimistic, but it looks like we could. So sometimes you will see a, what did we call it, like an author, a person authoring another person and I say, hey, hang on, that's not right. I said that a person would be the author or the composer of a music composition. So these sort of things you will detect, sort of violation of the expected use. There you go. So is it completed? So this is completed. Fantastic. So let's kick off the other one. So I'm going to start this one, but we can start looking at the generate the graph. So let's look at what came out of it. there you go so we have the the relationships that we define so we have the composer the erected plaque at etc etc and we have persons and we have posted addresses music composition and government organizations so all looks you know decent uh so far but let's let's see what what that looks like so for exampleoser and I would expect there to be... Let's change a little bit the presentation so we have Handle there and another node representing... Yeah, let's make that a bit bigger as well. Yeah, maybe you can consume so we have a sort of a duplicate there and we have a number of music compositions that looks pretty good so that's actually much better than i than i thought so let's try to find another one for example what about postal address a composer erected plan at there we go that should be hopefully some some of those might not be correct so there you go so we have a that's that's something uh that i was not expecting so uh i was not expecting mozart to erect a plaque uh and address so that's uh uh and and we find some mistakes of all the nature because what about government organizations? So we have London County Council that might hopefully have erected some plaque. There you go. And these are the chunks, right? I understand that when we explore it here in the browser, these are kind of flattened, but think of these as the chunks that if I expand it, they will normally be connected to the document where they come from. So that would be... This is the next chunk. There you go. It might not be very readable. This is the document and if we expand it, this will probably be connected to... I don't want to expand it because it's going to have an explosion, but these are the kind of people that's connected to the entities that are mentioned in this chunk. And the relationships. I'm surprised it's a lot cleaner than I was hoping and so what I saw basically but you know that's great anyway. So we have a... here we go, that's one example. So there you go. So this is Mozart's first symphony, which is described there, and he's the composer of that, of course. Now, Maria Anna Mozart is definitely not the composer of Wolfram Mozart. So that's one case of... And Leopold probably is his father, so he's probably not the composer of one. So sometimes the relationships are not used in the intended way, so the right relationships are used, but not following the domain and range kind of expected. And we have a tool for that, you might remember, right? So we have a way and there's several. These days there's... I don't know if you've had a live session on that but there's the pydantic framework where you can basically define constraints on what the expectation on the graph is what what should it look like and and we've done it with shackle here so that's what i think we talked about it a little bit in a previous episode but we didn't uh and that was not a specific go-to episode about this yeah exactly. Yeah, exactly. So what I've done is I've created a very, very simple shackle shape. You might remember how we use that. So there's in new semantics that there's a, I'm sorry, it was shackle. We can import a collection of shapes and all you have to do is specify you know the serialization that you're going to use this is turtle and I just need to paste it here if I use the inline and I've created these this small fragment that I'm going to show there's a couple of episodes on these if you want to dive deeper into this but essentially what I'm saying is I'm defining a shape here, very simple one saying there's nodes of type person, and there's an author relationship. These author relationships are supposed to, I mean, we didn't call it author. We call it composer, didn't we? You called it composer, yes. A previous example, let's make sure that we use it right. So here we call it Composer. Right. Let's fix that. So this relationship is called Composer. And this is standard syntax. This is Shackle language, which is based on RDF. We've talked about it before, and it's a way of defining the shape of your graph, defining constraints. So there's a property called Composer that's meant to connect the person to a music composition, right? And this is the type of the target sort of node. I'm going to do the same. So basically, this should identify violations of this type, right? So this is wrong. A person cannot be the composer of another person. And the same with the government organization so there's erected plaque at which is connecting that to a postal address if it's something else that should be wrong so i've just defined two basic constraints so that sort of answers the question that has been raised so how do i control that effectively the llm behind this tool is following my guidance and using the ontology in the right way well there's no way no way because the LLMs to some extent, or at least not in its current version, but what I can do is define constraints on my graph that will detect any case in which that's not used as I was expecting it to. So now remember how I can run the validation just by saying, okay, go ahead and shackle validate, and it will produce a summary of all the cases where this is not respected, right? For example, we see that there is a node 115, which is of type person, where the composer, which is meant to be of type Music Composition, is not. Right? So we can find the details about it. So we can find about that node. So there will be a node that's connected through a, we said, what was it? Composer to something else. And let's return this subgraph. But I want to focus specifically on this one, so where id of n equals is the 1, 1, 5, right? That's one of the problematic ones so we have a case there well effectively that's what we saw before right this is wrong so this is not the composer of these other persons so I can use that and I'm doing it manually here but these of course can be automated so I could say you know whenever you find something like that, you can delete the relationship. And the same with the other rule that we defined. We can uncover these sort of situations and control the graph that we're generating. I hope that answers that question. Now the second part and the one that I want to use to conclude is, hey, we created that graph following an ontology that I passed as input in a kind of a bit flaky way because it's not really the ontology. I passed the labels and the relationships, but these are standard terms that are defined in the schema, the ontology vocabulary, so I should be able to present them and export them afterwards and the answer is of course we can and this is something that we've also done in previous sessions and all I have to do and this is going to be done in one go but I'm sure you will I'll share the code afterwards as always but what I'm what I'm doing here basically is I'm saying, okay, I'm using one namespace, which is schema.org. That's the vocabulary, the ontology that I've used. And you remember that I created some relationships of my own. And I've put that, it doesn't exist, but imagine I've defined that in my own ontology. So I've called that JB Extensions, which is my own schema, my own ontology, because it doesn't need to perfectly align with schema.org i can add my own extensions so that's a placeholder for that yeah but it's very important to to think about it i mean if if you do this in in your uh company or for your project you you might be relying to schema.org or some other ontology for for general purpose terms but you will for sure have your own definition of some topics, some concepts, some things, and it makes sense to add that in addition to make it more relevant to your set of documents, to your data in essence. Absolutely. And all I have to do afterwards is map the internal name, which for some reason in the Graph Builder lower cases the rest of it, but I just follow the standard naming in schema.org. The one change that I have to do is we didn't use author, we used composer, I think. Composer, so that's the term in schema.org. And I define these mappings. And this one, erected-plag at, is my own terminology. So I define that. This is persisted in Neo. And now I can export it. Something it didn't like. Why was that expected? Oh, I didn't put a semicolon. Silly me. There you go. Looks like that was the oh yeah there was oh this double dash that that's not valid a bit silly so I suppose it broke there so let's run this second part perfect yeah I am good So we now have the mappings defined. And what I can do now is, okay, I'm going to take a subgraph, for example, and let's fix this, right? So we're going to, for example, when I find these type of violations uh i can say when um i mean i'm again doing it manually here in general whenever you find uh something here and an object where oh um it's not of type uh what was it supposed to be? It was a music composition. How did we call it? Music composition, yeah. Music composition where actually we're not that. If we find something connected to a node and we knew that that was problematic, what we're going to do is we're going to remove we're going to delete this relationship right so i'm not going to delete the the target because maybe uh leopold mozart is you know it's something that we want to we want to keep uh no actually it's in this direction if we want to be more precise. But this should... What did I write wrong? So I'll double where. So, and again, no return, just delete. Yes. Okay, so it's deleted one relationship. So now it should look better. So if we go back to the subgraph around this node. There it was. Yeah. Why is there no? Because there might have been a, okay. So we fixed one, but not the, so that's, maybe let's pick another one so for example that um well i mean it doesn't really make a massive difference i mean we could take um one one um six one 8, for example, 1, 21. There you go, Queen Charlotte. That's another case where not. But anyway, there's some errors. But anyway, what I wanted to show just to conclude, because I'm conscious of time, is that now we can export that. And since we've defined the mappings, we can say, okay. And remember, there's a built-in http endpoint in the in the browser so all we have to do is rdf um neo4j is the name of the database uh and cypher there's a cypher endpoint what i can write my um uh for example let's see what query uh let me find something interesting here so we have uh this um Mozart who's the composer this is node 111 composer right so we can take one simple query and say I'm going to pass the cipher oops And the query is the one that I was showing before, but for Node, we said 1, 1, 1, right? So let me... 1, 1, 1, yeah. This is the reason, yeah. So it would be something like match a Node N connected through, I hope that that doesn't fail me composer of something where id of n is 111 and we return subgraph as g which we have to name like this. I hope I've made no mistake, but I have, of course, match, subgraph and composer or ID. Let me see if this is a valid query because that's common. Yep. Okay, that's a very interesting error. Oh, of course, it has an embedding. That's not. Oh, that would be the, I mean, it's trying to serialize. I mean, I could drop the embedding, but it's going to take me a bit more time. And I think that's, I mean, we can just for the sake of the example, I mean, it looks like there's an option actually in the importer where we can decide whether we want to, where is it, post-processing jobs, materialize, enable hybrid search. I think there's an option. And it's a shame we don't have Morgan here to tell us. But I think you can enable, calculate embeddings of, or maybe not through the application, but you can optionally define whether you want the embeddings to be calculated for the extracted entities and not only for the chunks. But just to see it working, we could maybe be a bit of a hack, but we could say for this one, for 1, 1, 1, we could delete. I mean, don't do this at home, we could say add delete so it's uh delete or is it remove remove n dot embedding and we don't need to define the subgraph here there you go oh Oh, no changes. Oh, because there's not. Okay, so we have that. What did I do? You just returned. You removed the remove part of it. Hang on. So match n where idN and return. You just returned it. Yeah. Hang on. Let's go back. Remove command.embedding. And maybe I'm probably spending more time that I should on this. But that didn't change anything. You mistyped embedding. Oh, there you go. Thank you, embedding. So that should remove a property. Yeah, and we're now going to do the same with the other one, with the first symphony, which is 114. So we're going to do the same. There you go. And hopefully, if we now try to re-serialize that as we did before, that's going to produce some nice RDF out of it and entirely defined according to the schema of the orgontology. So it's exporting a person connected through another node of type music composition, right? And with a label that's the name and the comment, etc., etc. So all these, you know, I hope it didn't, it was not too confusing with the tiny bits at the end, but what we've done is we kind of somehow injected the ontology as target model in the knowledge graph generation part. And then we verified that the resulting graph was correct using our constraints. And then we showed that we can export it according to schema.org, producing some perfectly valid RDF. So yeah, I think that was the whole. I mean, I hope that the that was the whole i mean i i i i hope that the the the concept was was clear when understood i'll share the the different bits that i've used uh in the session but uh happy to take despite being over time any questions if there are any yeah we can we can look at a couple questions so so to um as as we have time a little bit um one question comes from uh from time trader and i think i don't think we can answer this but i just wanted to read it out is so support for rules domain range data type etc coming in future releases um i don't know if you're aware of of something i i i don't know i mean i i am mean, that's something that I should, and maybe I thought for a minute, but it was too late to bring either Niels or Morgan as guests to the session. We have a follow up episode next month. Maybe we ask if they can join us then. And then we can answer some of the questions. I know they have a pipeline. I know they're working on things, constantly updating stuff. So it might well be that this is on their agenda, on things they want to add. I don't know if and when. And I would expect to be able to do that to some extent in the programmatic approach. I mean, maybe not through the app, because the app is limited in the sense that you can only enter labels and relationship names. But maybe through code, you can introduce additional constraints. But, you know, we'll definitely answer that with Morgan. Yeah. And something from, I don't know, more of a comment than a question here. First, last is the guy I mentioned earlier that they built their own tool that does something similar. For them, validation required basically collecting feedback or reinforcement from users and or manual validation. validation required basically collecting feedback or reinforcement from users and or manual validation so i think that's a that's a big a big topic which we we showed today and we showed in previous episodes i think as well as like you constantly have to validate if if this still makes sense if this is is the right schema if this is something that that needs updating or that something is you cannot just um and I think this was one of the last episodes where we had this model agent and this query agent kind of like playing back and forth together, LLM playing the roles of humans here. But I think you have to have some kind of validation to look into this. Otherwise, that's right. Yeah. And the idea was to show that I think someone commented somewhere that it's no magic. Of course, there's no magic to some extent, because it's amazing that you can pass some text to an LLM and it will be able to detect, hey, this is an entity, this is a connection between two entities. So it will do a good part of the job for you but but in order to get something usable and something that you can really build applications on you're going to have to first iterate you're probably going to have to start small you know with a a small set of categories relationships a small schema small ontology and then iterate over that and and along with that you will have to put guardrails to make sure that whatever comes out of these llm process is what you expect so so yes i mean it's it's uh it's a crafty you know process but it's uh it's definitely magic to some extent i mean we can do things that we could not even imagine you know not not that long ago and think of the flexibility that that that this that this brings right so yeah yeah um and maybe maybe one more uh question here from don uh he asks have there been any previous sessions on automating the mapping of existing ontologies or data wall schemas to create graphs i think we did we did a couple of those i don't't remember exactly which episode numbers. Do you remember anything in particular? 25, 27, I think we did twice. I mean, we looked at structured data, we looked at the schema of some Kaggle datasets and we mapped it. I mean, actually we didn't map it. What we did is we kind of learned of um uh a schema out of it but the mapping would be exactly the same so we took one approach you know one shot llm based and another one a sort of agentic approach where we kind of create a a proposed uh schema then there was another agent being the critic of it and iterating a couple of times yeah that's what that's what i meant meant before yeah exactly so yeah check check a couple of times. Yeah, that's what I meant before. Yeah, exactly. So yeah, check this episode out. I think that's interesting in the previous sessions. And then to complete, maybe do one more question, because I think that's also an interesting one from, and I'm sorry, I probably mispronounced your username here, Pankothandaraman. And they ask, can we use ontologies to store session data of a chatbot text data in a session in Neo4j? I mean, you don't really need, well, you could use an ontology, of course, to define, but, you know, we have examples of that. I mean, persisting as a graph, all the interactions of, you know, conversations with the chatbot and the kind of the chunks, the fragments of the graph that are being used to answer them. So I'm sure you've run sessions with Dan Bukowski and company that have built a lot of examples around that. So yes, we can definitely share. Yep. Check out these episodes. So yeah. Cool. I know we're heavily over time, so just let me browse through these super quickly. But I wanted to mention it before we close. It's the Road to Nodes. Workshops are coming up on the... Oops. This is not what I wanted to do. I lost Jesus this is not what I wanted to do. Now I lost Jesus. This is what I wanted to do. Sorry, Jesus. I wanted to briefly mention the Road to Nodes session. So on the 19th of September, we start with Neo4j Fundamentals. They will be in two time zones. One is EMEA Asia friendly, and the other one is North America friendly or America in general friendly. So check these out. App building on Aura graph visualization with Neo4j Bloom and also mastering GraphRack. Also this one happening at two time zones throughout September and October up to nodes, which brings you to the next point. Obviously it's happening on the 7th of november you can register now um i'm looking forward to uh lots of great sessions we will announce the agenda in a in a few days i think i have seen so many interesting sessions so it's it's really going to be a beautiful event again looking forward to to that in in november and then where do you want to go next i saw a question earlier on how do you get in touch with Neo4j, with the community, check out the community forum, check out the Neo4j Discord server, go to Graph Academy for interesting sessions. There is one on knowledge graphs coming in the future. So stay tuned, but also there is something on how to work with LLM, how to build chatbots is already available and aura free obviously um if you want to play with the llm knowledge graph builder app uh and and use user database use you can use aura free and uh go from there and yeah with that um thank you for jesus again for welcome back to to season two it's been fun it's been great i'm looking forward to next episode we can definitely continue this and hopefully maybe uh niels or morgan have time to join us um in in a month uh that we can uh we can discuss more and uh and see what else we can do enhanced the enhancing knowledge graph builder with ontologies i think that's um that's interesting and yeah thank you for your time thank you everybody for watching and uh | Going Meta - S02 Ep01: Using Ontologies to Guide Knowledge Graph Creation from Unstructured Data | 4,349 | Neo4j | 20240904 | Season 02 Episode 01 of Going Meta – a Series on Semantics, Knowledge Graphs and All Things AI
Topic: Using Ontologies to Guide Knowledge Graph Creation from Unstructured Data
Jesús Barrasa: https://twitter.com/BarrasaDV
Repository: https://github.com/jbarrasa/goingmeta
Knowledge Graph Book: https://neo4j.com/knowledge-graphs-practitioners-guide/
Previous Episodes: https://neo4j.com/video/going-meta-a-series-on-graphs-semantics-and-knowledge/
LLM Knowledge Graph Builder: https://neo4j.com/labs/genai-ecosystem/llm-graph-builder/
Knowledge Graph Builder App Video: https://youtube.com/live/NbyxWAC2TLc
Blog: https://neo4j.com/developer-blog/graphrag-llm-knowledge-graph-builder/
0:00 Introduction and Welcome
1:33 Recap of Season 1
2:24 Discussion on AI Integration
4:03 Today's Topic: Knowledge Graph Builder with Ontologies
7:53 Demo Overview and Setup
18:00 Differences in Graph Structures
25:03 Schema and Ontology in Graph Building
29:20 Challenges with Unstructured Data
51:03 Validation and Constraints with Shacl
1:10:00 Q&A and Closing Remarks
#neo4j #graphdatabase #ai #knowledgegraph #genai #graphrag #llm #ontology | 2024-10-21T12:43:11.473995 |
https://www.youtube.com/watch?v=bRD09ndyJNs | hello good morning good evening good afternoon why is this view so weird hi alex how are you yeah it's a bit different now it looks more as we're used to yeah this is the other one was a little bit more zoomed in than i wanted to be good stuff how are you how are you alex thanks uh accommodating I know I don't make your life easy that of of attendees because you know we were supposed to be the the monthly webcast on the first Tuesday of the month and we've tried probably all days in the week already right yeah we we kept I think we kept it relatively consistent but there is another there's a one or two outliers uh where we just couldn't couldn't manage this is one of those those weeks um so um obviously first used every month uh the third was um a public holiday in Germany was me being on the road in London, you not being there as well. So we couldn't really make it. So we tried to... Yeah, I'm sure people understand. I mean, the great thing is that we've kept delivering month after month and that's the important thing to me. And I'm really, really happy that we're on on 21st already so me too and uh and and speaking of of of that i think uh if if you haven't uh looked uh at the the recap blog um that that we published and please uh please have a look at that it's uh out on our um the neo4j developer blog let me you should have the link here let me put that in why is this so weird sometimes medium makes completely ridiculous links oh here it is. This is much better. Yeah, I think that was a great one in the end. So we had a substantial amount of content and material over the months and over the years. And it kind of made sense to kind of put a bit of structure in. And you've done a great job at putting it in written, because I'm sure that will help people find their way in the super rich catalog of content that we've created. Exactly. And we had defined by, admittedly, by coincidence, they came to be basically these four themes out of the background of content we had during these 20 episodes of Going Meta and a couple of them identified sort of themselves, which made sense to include them and to spread it out a little bit. So if any topic is not quite interesting to you but generally the topic of semantics ontologies taxonomies knowledge graphs all that if that is something you're generally interested in but you think okay that episode right now maybe not so much or a different one then go go back to that blog post and and and see that there is a couple of themes that were that were identified a couple of episodes and you can find the one that probably makes most sense for you then or which you think is more interesting, could be more interesting for your case. Good. And I'm sure you want to talk for at least a minute about something that's coming up, right? Oh, yes. Yeah. It's notes, obviously. So happening in, well, yesterday, I think, to the day, or, you know, it depends on what times you're in. But on the 26th, so in three weeks, more or less, is when Nodes is happening this year. Let me put a link in chat as well. So it's really going to be a fun and interesting event and very exciting on all kinds of fronts. So if you haven't registered yet, go to the link I posted here or go to neo4j.com slash notes and you find everything there. The agenda is now live since a couple of days. So you can check that out. If the trendy hot topic of vector, gen AI and everything, chatbot, chat GPT, all this stuff, open AI, you name it. If that is interesting to you, there is like, I think I wrote an email with a couple of highlights, five or six sessions on LLM's OpenAI, Gen AI Vector last week. But I had to skip, like, I think probably another easily five to ten. So if that's your game, then definitely notes will be interesting to you. If you think, oh, my God, leave me alone with all of that. We have other things as well. So we talk about all kinds of things on, visualization is a big part, you know, everything, app development, smarter, better app development with graphs is a big one. So I think you'll find something interesting there and it's happening across all the time zones. So depending or independent rather, independent on where you're based, you have sessions live in your time zone and then the rest you can obviously watch on demand. So that's the beauty of it all. So that's going to be available afterwards on our youtube channel that's great yeah everyone join notes and because it will be it's going to be it's going to be great for sure yeah so um yeah so i guess we're on to today's topic, which is- Today's topic, yeah. Fitting to what I said earlier. Surprise, semantics in the title. Yeah, and as usual, I'm going to try and share my screen, put the video to the side and do a short introduction as we always do. Let me know, Alexlex when you can uh see my screen i should be sharing my whole desktop but i'll go into slideshow mode so that is you know just just you know the previous title and like i say when i say that i don't make alex's life easy i was saying hey what about these the difference between what it's like we ended up agreeing yeah and and that's that's exactly the reason why why we went with this topic today right so we we hear loads of of talk about semantic search and i think there's very two very distinguishable and very clear types of semantic search and that's how we uh agree with you alex suggested a much more reasonable title that was much clearer. And the two approaches are the one based on vectors, based on embeddings, and the one based on graphs and path exploration, which is something that we've talked about in previous episodes. But that's the subject of today. And of course, in the backdrop, of course, is the fact that a few weeks ago, a couple of months now, right, we released the vector index and enabled vector search capabilities in Neo4j. And now we can combine the two and explore the two together. And that's exactly what we're going to do today. So that's the history and the rationale for today's session. And well, like I mentioned, I'll spend three minutes giving the context and we'll go ahead and get our hands dirty with Neo4j on the sandbox. So where do we start? If we want to understand semantic search we need to understand semantic similarity first and if we want to understand semantic similarity the two types of semantic similarity that exist we need to understand how data is uh represented in general in in in ai right and in ai this this is kind of two uh big spaces right The one that's called the area that's called knowledge representation that uses what's called a symbolic representation of information, the one you see to the left where graphs fit. And then there's the more machine learning focused where information tends to be represented in the form of vectors, right? So something, I mean, knowledge like the one you see in that sentence, like apples and oranges are both fruits, can be captured in many different ways. So you have seen how we have represented that in a taxonomy, in an ontology, in a graph, basically. So you say that an apple is a type of fruit and an orange is also a type of fruit. And in the context there, you see that there's a relatively short distance between these two concepts. So there's some kind of proximity between them, some similarities, some... And that's what we mean by graph-based. I mean, that's of course the biggest simplification, but essentially the graph provides the context that explains the semantic similarity between two concepts, like apple and orange. And again, semantic is a general term to refer to the fact that we don't do a string similarity. So you can think of dog is close syntactically or from a string point of view to doc as the O-C-K, but the meanings are completely unrelated. These would be one type of similarity, and there's search that enables these type of inaccuracies, which is fine when you want to find words that are not explicitly, or not explicitly, but not exactly spelled in a particular way. But we're moving from that type of search and that type of similarity to one that's more based on the meaning. And the meaning can be captured in a graph, and the graph provides the context. So that's one type of search. That's one type of similarity. One type of, one approach to AI, which is knowledge representation. Then on the other side, we're not as explicit. I mean, the bit to the left, someone has to go and say, I know that an apple and an orange are both subcategories or types of fruits. On the right-hand side, what we have is the result of probably exploring and learning over a large amount of examples. So the sentence that you have there might be a fragment of a chapter in a book, or an article, or a blog post. And that's the approach that we follow in the machine learning space. So we learn from loads of examples, and we find that consistently, apple tends to appear in text close to orange and close to fruit. So we're capturing that in some way. So it's much less explicit. So what you see to the right is the way we tend to capture this knowledge. And it's in the form of a numeric sequence, right? Well, it's a vector, right? So we represent, and we won't go into how these embeddings are calculated, but there's a numeric representation of an apple, an orange, and a fruit. And this numeric representation has very particular characteristics, and is that it reflects the fact that there is a short distance between apple and orange. And that's completely independent on how you type it. Because if you do, let's say, if you had the embedding for, let's say, a plum or pear, I mean, it will probably close because these are topics that tend to appear close together or related to each other in the corpus, in the data that's used for training. So both, as you can see, capture some meaning, some semantics, but in very, very different ways. Let me take a pause here. Am I making sense, Alex? Is that clear enough? It's clear. It's always a little bit difficult, I think, if you're not into the topic of vectors. I think it's a very very uh pointed pointed topic no pun intended here but um but it is um it if you if you look at it like this it it can be very complicated and can be very confusing because you you see okay i see 0.2435 comma 3.7 and so on. And I don't know what that means because obviously I'm a human and not a machine and I cannot read that in that way. I think when it first explained to me, somebody said, okay, think about the RGB color spectrum. You have a three-dimensional space and you think you want to point out a point in that three-dimensional space, and you think you want to point out a point in that three-dimensional space, and that point is a color. And if you have 115, 50, and 64, that is a specific color. And that color is in that three-dimensional space somewhere. And then you can look at that. And that makes it a little bit more understandable. That makes it a little bit more clearer. Also with depicting it as arrows through a space in a field, that helps a lot, but it is a complicated thing to understand. But once you think, okay, you don't necessarily, you have to don't worry about as much, maybe about the individual values, but more about, okay what what is it what is this about what is the point of this this uh this vector then i think it makes it makes more sense and you understand it a little bit better absolutely and and the whole point is that the the representation to the right we're not i mean it doesn't even make sense to try and make sense of it because you know what's uh the powerful thing is that this distance and there are some functions to compute that distance you know can reflect the fact that these are related related concepts and again why you know there's obviously a very good reason to go with the one in the right and and uh you probably have figured that out the one to the left is human curated we have to go i mean it can be discovered from the data but someone has to go and say hey an apple is a type of fruit and that that requires well some domain understanding in that simple example but think of your enterprise right so it's human curated it's controlled is is and that's these are very important characteristics when we compare them but the one to the right it can be completely automated so you can just give a huge amount of data and it will be it will learn these vectors for you so automation is a huge factor there because well if you think of of the the a session with chat gpt you can ask about anything in the world. You cannot even think of capturing everything in a graph. So you will pick fragments of it, the ones that you care about, the ones that your enterprise cares about. But, of course, with huge amount of resources, you can fully automate the process of learning these type of representations. And they're very powerful. And the great thing is that we can have the two and combine them together. But I think it's important to understand the two and that's what you know the example that we're going to be using today. Good so well so this is the starting point right so we have two different ways of representing capturing knowledge, and depending on which one we use, that will open different possibilities for comparing similarity and applying search. So this is material that we've shared in the past. For the first approach, we will start with, let's say, some unstructured documents. Again, we're going to use a collection of articles that we've used in the past. There's a process that will extract entities out of it and make them explicit, and then we'll enrich that with an ontology, with a taxonomy, with a classification, something that provides additional context, something that puts in context the apple with the orange, or a topic with another because they have a common ancestor, or because they can can be classified in ontology. Because when we have these in a graph, then we can find the similarity between two, semantic similarity between two documents based on the fact that there is a path that connects the two across the graph. Advantages of this, I mean it takes longer to build, it requires expertise, but it gives you control over the responses that you get. So you can explain that this is similar to that because there is this path across the graph. And we touched on a bunch of standard similarity metrics. There's a whole episode on this. I can't remember the number, but we'll add the reference in later on. So this can be done, and it's a very powerful way of computing semantic similarity. Now, in the vector space, what happens is, you know, everything, every element in the world is represented as a vector. And we can do pick two and tell what's the distance between them, what's the semantics, but these are implicit relationships. These are, we have less control over, you know, I kind of understand that a cat is close to a dog, but it's not that I can really explain it. I cannot provide the rationale, the reasoning that the model is implemented, because it's just capturing this numerical representation. In the end, it's purely a distance function. It's like a mathematical distance, as you can think in a 2D or 3D space, except that it's a much more high dimensional space. That's the idea. These are the two approaches. And I'm going to try and show these in an example. And like I said, the first part, we're all familiar with it. If J is a graph database, we can capture the relationship, build the taxonomies, build the context around elements in the graph what's relatively new is the vector index and and this is just the screen capture of the announcement you have the links down there so there's a blog post with the announcement but probably more interestingly uh for the attendees here is the is the link to the manual as well where you can see how to create a vector index how to populateulate it, and how to create. It's pretty straightforward. I mean, we will add properties to the nodes that are vectors, that are long vectors, numeric vectors, and we will index them using the vector index. Once we have it there, we can do pretty performance searches uh based on based on that so that's that's exactly what we're what we're going to be doing so with that let me stop uh with the presentation and as always i've already started my sandbox here uh fresh one it's already created and i've been extremely diligent this time and as you can see there's already all the content there on the on the github repo so you will see that we have all the all the elements so i'll probably go a bit quicker on the construction of the graph and population of the indexes and all that and then focus more on on comparing the two types of of semantics similarity and and semantic search so yeah in the and if you watch this and have any questions then um please please uh use use the chat while you can uh and i'll i'll have a eye uh or two uh edit and uh it will read out your questions absolutely so uh yeah so you the decipher if you want to code along, but then more importantly all the resources are here. So we're going to be using ontologies that are already in this folder and the data folder contains the data and even the embeddings that have been created. But let me take you step by step. So like I said, we have a completely empty database. And the first thing we're going to do is import a collection of articles, unstructured data, right? And you will find that in the data directory. So if you go there, that's a collection of technical articles in the form of a CSV file. So you'll have the URI, the body, the title, and a few extra fields. So that's pretty straightforward. I won't spend too much time on it. A very quick load CSV that will just create a single node. So that's what it looks like. So load CSV of the file, and it creates articles and populates the title, the body, and the timestamp of the articles. So we have a bunch of them, small set, and we can see that if we click on any of them, we have all the details here, right? The time, the title, the URI, and the body, which is the full text of the article, if they happen to be, you know, referenceable. So if you click on it, you will get of the article if they happen to be, you know, referenceable. So, if you click on it, you will get to the article itself, which is pretty handy. But yeah, so we have our articles. So, the first layer is there. And now what we're going to do is add some context. And again, I'm going to run this quick, but we have used in the past something like Wikidata for that. In Wikidata, these are technical articles that talk about programming languages, that talk about databases, that talk about systems. So, I thought, well, Wikidata is a super rich knowledge graph with a definition of loads and loads of categories that form a great taxonomy. And I'm going to use that. And again, this is in your resources folder. If you go to the ontology directory, you will see that there's a DBpedia extract, and you have the query on how to produce it. So if you're curious, you can grab the Sparkle, run it on the Wikidata Sparkle endpoint, and that produces an export that I've represented in SCoS. SCoS is a language to create knowledge organizations, taxonomies, basically. And that's sufficient for me. As you know, we can import SCoS as one of the RDF-based languages, and this type of thing we handle with Neo-Semantics. So the next thing that I'm going to do is, as usual, create the constraint so that Neo-Semantics doesn't complain for a change. Alex, instead of the error thrown I created up front. So there you go. So we have an index for the URIs so that it takes care of the linkage based on the URIs. The next thing is I will go with all the defaults in the configuration. Again, very briefly, in NeoSemantics you can make a number of choices in terms of how you want your RDF imported. In this case, I'll go with the basics and I'll say that I don't care too much about preserving the vocabularies but um i will go ahead and input um oops sorry i will input the ontology with a single line instruction as course because that's the vocabulary that we're using import fetch we pass the url of the file that i just showed the remember that it's the raw version not this one because these are the human readable and it's html you have to go with the raw one so if you pass that to Neo4j and you indicate that it's total serialization, in a few seconds, you should get it imported into your graph. So now we have the taxonomy formed by 11,000 roughly entities linked to each other, and we can see what that looks like. So that's a small fragment, but let's zoom here. So we have, make it a bit bigger. It's using the Wikidata IDs, which are not very user-friendly. So let's go with the preferred label, which is probably much nicer. There you go. And we can make it a bit bigger to make it readable so you see there's the notion of a programming language and that has multiple subcategories subclasses so again this is the kind of thing that you can tweak in the configuration by default neo semantics will create a a subclass of relationship that we call SCO to make it compact. But if you're more comfortable with all the other name, you can configure that too. And the elements in the taxonomy are labeled as classes. So there you go. So you can keep expanding and these will form a large network. Ideally, it should be close to a tree, but not necessarily, but it's a rich classification of the systems, databases, programming languages space. So, we have the two parts. We have the articles, we have the taxonomy, the classification. There's a little step of data cleaning that we have to do which is uh um if you think of a tree uh a classification right at the taxonomy uh you don't expect that to find things like uh shortcuts like uh for example you have um um as girdles would like this. Yeah, yeah. But the reason, I mean, as you know, Wikidata is a collaborative repository, collaborative knowledge graph. So it's not always perfect, right? It's an evolution. And sometimes you find, I mean, it's not bad. I mean, it's just redundant information, right? So you have the fact that, I don't know, since we're going with oranges and apples, I mean, if you have a blood orange, these red oranges is a subcategory of orange, which is a subcategory of fruit. If I, you know, that normally forms a sequence of subclass of relationships. But, you know, if I could redundantly say that this blood orange, the red orange, is also a subcategory of fruit, which is something that I can't unill because this subclass of is transitive. So that's exactly what I'm doing with this. I'm saying, well, if you find a shortcut of this type, I don't really need it. I mean, it doesn't hurt, and I could keep it there. But I'm a bit of a perfectionist here, and I like to have my tree clean. It's not necessarily a tree, but I don't need these redundant relationships. And I remove a few thousand of them. Still keeping, no, I remove 500 of them. Right? So there you go. So now we have the two components. We have the context, the graph, and we have the articles. Now what we need to do is extract the entities that are mentioned in this natural language text and make them explicit so that they connect with my graph and they form this contextualized representation that we were talking about before. Now because you know it would have been a super long episode I'm going to save you from having to do that and I've already but we you know know, remember Alex, we did that in episode two, I believe. So if you want all the details, you can go there and have a look at that. What we're going to do today is go to the data directory, and you will see that there's extracted entities files. So what I've done is I've done it for you already. So for each of the articles, we've run the process of, effectively what we use, if you remember, is using the natural language processing wrappers that are in Neo4j. So, and by this, you can use your preferred, I mean, these are wrappers for the three main cloud, you know, natural language services of the three main cloud providers. I believe we use the Google Cloud one so basically with uh simple instructions in um in in cypher in apoc you can invoke you can take that string invoke this uh natural language processing entity extraction procedure from google and it will return a collection of entities that we will then represent as nodes so in order to save us the time of having to do that, I'm giving you already the list of entities that were extracted. And it's very interesting that the Google service, in particular, brings them back annotated with the Wikidata Wikipedia URIs, which is great, again, to simplify the process of linking. Because as you know, because i've mentioned it several times the linkage i mean the entity extraction is one thing but the entity linking is is not an easy one right sometimes you have to apply heuristics sometimes you have to approximate but in this case because we have them annotated with the uris then they just uh stitch like like like glue and that's that's perfect so here what I'm doing. I'm taking this file where I have my extracted entities. Basically what I have as you have seen is for this article, identify with the URI, these are the, in this case there's two entities. It's about RethinkDB and about NoSQL databases. For the second one we we have another two pair of entities. What I'm doing is scanning the CSV file and making those nodes. I will have to match the article on the URI. I will match the class that will exist, hopefully already. If it doesn't, we'll lose that. But in this case, I'm just picking the ones that link nicely to the ontology that I have. If I find the match, then I'll link them with that refers to and again that's my choice of naming but you're free to to call it the way you prefer so this should be a relatively quick one and what we have now as you can see is our old articles now hopefully enriched with let me make this a bit bigger how do we we do that? Here, we can zoom it in. And we have the note representing the article, which is the blue one. We have a number of entities that have been extracted through this process. So this article, I don't know if it's readable, but it's about Python, Zookeeper, SQL, Java, and some others. And these are already connected to our taxonomy. There you go. So that's how we've kind of closed the gap between the two. Make sense? So that's the kind of a super fast step-by-step description of how this works, but you can look more into the details of the actual entity extraction. So, okay, we have all the elements we need now to start using these to calculate semantic similarity, at least graph-based. But we have the other part, and the other part is the vector-based. So what we're going to do is now we have our articles nicely connected to entities and classified in a taxonomy or contextualized in a graph. But at the same time, I'm curious to see what happens if I take the body of the article that I use for entity extract and make explicit here. I'm going to take that and I'm going to embed it. I'm going to vectorize it. And again, here, I'll simplify it. I think I'll share the code because it's pretty straightforward. In this case, I've used the OpenAI API where you pass a text and it returns an embedding. So I have it in a Python notebook. Again, I've done this in advance and there you go. So the kind of thing that you get is for each article, you get an embedding. So I've done that so that we don't have to to run it live and and save some time and now just like the other uh data elements we have it here so we have the um article embeddings and this is in json format no particular reason i just felt like you know creating json this time and similar structure so for each each of the articles, I have the article URI and I have the vector resulting of embedding of the body of the article, the text. So what I'm going to do is parse this file and enrich my graph with a new property that will represent the body of the article but in the form of a vector. Make sense? Yeah, very cool. But because i want it indexed i'm going to first create my my index and that's a very simple instruction that you will find it all in the link that i shared before to the documentation we're creating a vector index i'm giving it to the description oh fantastic thanks for that So I'll give it a name. That's going to be my article embeddings index. And it's based on nodes of type article. So I'm going to build it out of nodes of type article and a property that will be called embedding. It doesn't exist yet, but I'm going to populate it as I said. So this is the similarity that I'm using cosine in this case, and the size of the vector. So pretty straightforward, just like any other index, it's already built in. And the next thing is, okay, let's enrich this article description with the vector representation that I'm going to get from my JSON file that I just showed. So again, remember that there was a pair article id vector so all I have to do actually let me show you what that looks like because you know it was a it was a json file and if you are familiar with the apoc load json it's just another one of these super useful functions in apoc that will let you point at a source of json and and just parse it right so if i uh return in this case it's just a list so if i return value dot id i think was the first element so there was value dot id and value dot vector these are the two value dot vector. Let's show the first one. I don't need to show more. That should give us an idea of what we have. There you go. So we have, it's been truncated, but we have the URI of the article and the vector. So that's exactly what I'm going to store, right? So instead of putting it, printing it out, what I'm going to do is I'm going to look up the article by ID. Right? I'll find the article and then I'll add a new property called embedding where I will put it out. What I'm going to do is I'm going to look up the article by ID. Right? I'll find the article and then I'll add a new property called embedding where I will put the vector. And these will automatically be indexed in our vector index because I've already created that and indicated that that's the property that's meant to be indexed. So if I do that, again, it's a pretty quick process. In this case, again, you will have to generate the embeddings. You could do it all in one go, and here you have to pick your preferred... I mean, I've used OpenAI, but you can use Vertex. You can use your preferred model, and you could perfectly do it on the fly, right? As you write to the graph or in batches, you can scan the notes, the articles in the Neo4j graph, take the text of the property, call the embedding function, and store directly. I've done it in these intermediate steps so that you can use it, and it's already done there for you. But I'll share the code on how to do that with OpenAI because that might be useful for people there. Perfect. So now we have it all, right? So we have the articles contextualized with our graph and also embedded and indexed in our vector index. So what are we going to do? We're going to start computing similarity. Let's get started. So I'm going to take two articles and I look them up by a title. I can look them up by a URI, but I thought it would be more readable like this. So I'm looking for an article that's called Performance Testing Neo4j Database Using Bolt and JMeter, and I'm taking another one, which is Couchbase GeoSearch with ASP.NET Core. So these are two articles that have a property. Each one of them have a property called embedding. So we can use the function in GDS that's cosine similarity. That takes two vectors and returns literally the distance between them, the similarity between us, distance and similarity, what inverse from each other. But it says it's as simple as this. So this will tell me, okay, the distance between these two similarities, sorry, between these two is 0.74. And that has a meaning. I mean, that has a meaning in context, right? If you put it, you know, compared to others, you will be able to tell this is more or less similar than another one. But that's what you get. And that's what we were saying before. It's great. It's super powerful. but it will not give us the explanation of why these two articles are similar or how they are similar. So we have to trust the accuracy of the model that we have used to embed. And again, I've used here a general purpose model, like OpenAI, I can't remember which one of the models was it, but you could perfectly use a model that's trained on this type of data and maybe get more accurate embedding. So this is, I'm not saying that this is the perfect solution, but I want people to get a clear understanding of how that works. So we have a way now of given to articles, take the vectors and say, that's how similar they are. And that's super powerful because ultimately what we want to do is say, because you're reading these, you might want to read this other one, or I'm interested in things that are similar to these. So that's a vector search. And as you can see, what I'm doing is just using the, not search in this case, it's just vector similarity. We'll get to search in a minute. And as you can see, what I'm doing is just using the, not search in this case, it's just vector similarity. We'll get to search in a minute. What would it be like in the graph-based approach? So you can do a number of things. So as you can remember here, we want to follow explicit paths and understand how these things are connected. So I take the same two articles and now I say well articles are connected to my classification through the refers to relationship so from my article I can get to some concepts some semantic descriptions that are connected hopefully through my context my taxonomy my ontology so i could explore that graph so here these are just uh possibilities and what i'm doing is using the path similarity implementation that we also talked about in in the in one of the previous episodes right the one that i uh mentioned here path similarity is one of the simplest ones. 16. Excellent, thank you. You got my back there. So essentially, you will find how long the path connecting the two elements in a taxonomy is, and based on that, determine roughly how similar or how distant they are. So that's what this function does. So we pass the two uh uh concepts and uh well if you can remember there was an option a configuration option when i mean we cannot assume that uh the taxonomy the the context is a perfect tree and all all the elements have a common ancestor like a thing like a root element maybe that root element. Maybe that is the case, maybe that is not. So the implementation, the definition of this algorithm, this similarity metric, says that when you can, by default, assume that for the root elements of the multiple taxonomies that you might find, you can assume that there's a virtual, a simulated common root. But in this case, I don't want that to be the case. If there's not an explicit path, don't make up this simulated route. So, this is a configuration option. And what I'm doing, if I run this, is I'm going to get, well, first a similarity metric, but then more interestingly is the explanation. And you probably saw in the second part that my path similarity not only gives me the metric based on the formula that you have seen here, which is this calculation, but I can also compute the path and return it because that is the explanation that I'm after. So I know that these are similar points too. and again, you have to put it in context. It's I can't remember this one if it was zero to one. But again, the idea is that these two articles are similar because they happen to one of them refer to Neo4j, the other one to Couchbase, that if you follow in the taxonomy, Neo4j is a graph database, which is a database management system. And on the other side, Neapuche is a graph database, which is a database management system. And on the other side, Couchbase is a document-oriented database, which is a data management system. So there you go. You have a path, a common ancestor there, which explains why these two are similar. And that's the curated aspect. So we are not relying on whatever data has been used to train the large language model that we have used to embed our data we know exactly the rationale the reasoning behind the similarity between these two concepts to these two articles we have a more visual representation of it so you probably remember from chapter 16 that there's a variant of that method, path similarity path, that returns not only the value, the metric, but it returns the actual path connecting the elements. And that's probably a much more visual way of understanding. And you see that Couchbase, this article, which I probably should make a little bit bigger so that we capture a little bit more of the title. This article refers to couch-based server, which is like we saw a document, follow the arrows, there you go. So that's the similarity path, and that's the explainability angle, which is extremely important, and that's something that we normally miss from from the um um vector base you know more uh sub symbolic as we call it where we don't uh make explicit the the the knowledge but rather learn it in the form of a vector so i'll take a pause here making sense so far yeah sorry yeah it it makes a lot of sense it's uh it's i think it's a great way of explaining these two approaches towards similarities or towards finding similarities in your data. And no matter what you have, you can use either of those two systems to find it out and to discover if there are similarities. But if you're happy with not quite knowing what came about, then... Exactly, because I'm sure you've heard about the problem of hallucinations and these kinds of degenerative AI models. Typically, by definition, they're generative, right? So sometimes you want them to be creative, right? And that's great. But sometimes you want them to be accurate and not only accurate, but explainable. Or you want to have control over the information that's used to provide an answer. And that's why approaches like retrieval augmented generation are being so popular, right? When you pass the question to your LLM, but at the same time, you pass the context required to answer that question. But, you know, without going into this, we might cover that in future episodes. Here, I really, really wanted people, because it's important to understand what we mean. And when we say semantic search, yes, semantic search is any type of search or similarity that goes beyond the purely syntactic or string-based one. So here we're talking about meaning. And I can tell you that this article, whether it mentions couch-based or not, I don't really care. I know that these two are related because there is a common topic. They talk about database management systems, even though that's not explicitly mentioned. So semantic, sorry, not semantic, but textual similarity would never pick that up. So that's semantic search, but one is very different from the other. And again, I'm not saying that one is better than the other. The answer is probably better together, but this is the foundation and I think the explanation of it. And yeah, and I guess that, you know, I'm conscious of time and I don't know if there's going to be questions, but the next step would be here, we've taken two articles and say, well, what's the similarity, how similar or how different they are. But normally what we do is effectively do the search and that's where the vector index comes into play. So what i have normally is okay i um i have um let's say just one of the articles and in this case i'm looking it up by uri but i could do exactly the same as before this is the one about testing of j with j meter so what i'm doing here is okay i have one article and and I have the content embedded and I want to find which are the other articles in its proximity. And this is semantic search. I'm not giving you two articles and telling me how close they are. I'm telling you, based on the index, and that's where the new feature is super powerful, tell me which are the closest one. And that's what I'm doing here. I'm calling my vector index and I'm saying, use the article embeddings, which are the closest one. And that's what I'm doing here. I'm calling my vector index, and I'm saying, use the article embeddings, which is the one that we just created, and give me the top five based on the embedding of this article. So don't follow the path like we're doing before. Just take the embedding and give me the five closest one, excluding, of course, the article itself. I don't want it to be returned because if I did, I'm going to get one that's identical to itself, and that's not very useful. But this gives me a list of all the similar articles based, but as you can see, the article itself is a perfect match, which is not very useful. So if I remove it, I get the top four, and I get a metric. And yes, it does make sense of course you know this is an article about neo4j at least the first two that talks about neo4j that even the even the the the text search would have picked that up now the question is what happens with this one and and why do we think that's accurate and that probably is but you know we we kind of have to trust the model that has generated these embedding. And the question is, can I trust it? And for future episodes, I mean, which are my options? Can I train my own model? Of course you can, but it's expensive. It's a huge effort. Should I fine tune it? Should I retrieve or meant to generate? There's many options. But here, let's understand what this semantic search returns and that's what we're getting and if i did the the equivalent based on the on the other approach uh let me show that to you because here i would get um something similar in this case what we're doing and and notice that here i'm not doing proper search and I get back to that in a minute, but I'm doing the comparison and I'm getting a similarity metric, but I'm getting also, and I'm not listing it here, but I could get for each one of them the path that explains why these two articles happen to be related. And that's the explainability, the control. And just to complete the explanation, and you will see how that impacts the the results and how i have control over the results these are the answers based on the taxonomy that i've imported from wikidata but let's say hey i'm an expert in my domain so i know that uh there's a the notion of some of software stacks right i know that there's the notion of software stacks. I know that there's the GRAND stack that combines Node.js, Neo4j, GraphQL. There's the LAMP stack that combines, what was it, the MEAN stack that combines ExpressJS, MongoDB. So let's say I have additional knowledge. I know more things about my domain, and I want to use that. I want my semantic similarity to take that into account. So I take this ontology, or this additional knowledge representation, and I'm going to add it to my model. I mean, nothing stops me from keeping adding additional elements of information. Remember that this is all curated. I can bring them in and out as needed so in this case i'm i'm doing an input uh request pretty much like what we did with the with the neo semantics before but in this case it's not a scos representation it's uh i think it uses owl right there you go so it's owl classes it uses still the labels uh from scos but. So it's OWL classes. It uses still the labels from Scots. But it's a different type of representation. Doesn't matter. All I have to pass is the reference and the serialization format. And this will bring additional elements. I mean, I could have been too quick to import, but I could have previewed it first. If I want to do a preview of what this brings, that's exactly what I was showing before. Let me make that a bit bigger. And we have things like there's the notion of a software stack, and we have three of them. I mean, there's three mini taxonomies that describe the LAMP stack, the GRAND stack, and the MEAN stack. So it is another way of providing additional context for my search. So maybe if you're reading about PHP, you want to also read about Perl because there happens to be a technology stack that combines the two. And because I know that now, I'm offering you a new path for exploration. I'm providing you additional context, and I control that. That's the powerful thing. I decide what information is used to return that answer. And if I remove that, that's been persisted in the graph, and I rerun exactly this query, I'm going to run it again. Now I should get new results, hopefully. And there you go. So now you see that suddenly it's jumped to the third position, the one on GraphQL that was not there before why well because now my search my path exploration is finding that there's a stronger connection between these two because now they have graphql in common which is part of the grand stack see what how that goes right so this gives you the control and the power and the curation of the knowledge that's used for the semantic similarity whereas the other one relies on the the the model and the data that has been used to train train the model but again important the really a powerful approach comes from combining the two and and and the first step is to understand them properly so that was the objective of today's session so I'm going to take a pause I've talked a lot but I hope it was useful and and and that uh triggers some some questions hopefully that was the yeah it is it is um one thing I I want to add and maybe it's also I think probably I'll end it and end in a question um but when you said it's better together so I think it makes sense if if you stay with the picture of the fruit that came to me earlier. Okay, let's think about tomato. Tomato is botanically a fruit, but obviously you don't want to put it in a fruit salad because it's not really right. So if you have it in a classical ontology, it probably would land up under fruit because it's botanically categorized as a fruit. But if you look for something for, let's say, a fruit salad, if you go hardcore into the semantic search pattern through, it might come up as a suggestion, maybe not top ranked, but potentially down there. But in a vector search, potentially it would grab it still for, not for your fruit salad, but for a vegetable kind of based thing, because it would still pick it up as it is locationally closer to a cucumber, which is obviously not a fruit. Yeah. So if that makes any sense. Absolutely. Yeah. And yeah, and that we, things are becoming a bit more subtle because, you know, they're semantically related. So a tomato is close to an apple. Yeah. But if your question is more, and we didn't get into how to answer natural language questions, because another thing that we could do and that we can do with large language models is, hey, I'm making a salad, what should I do? And in that context, maybe the training data provides additional information on whether you should put that or not in a salad. So the fact that they are semantically close, that's a fact. And then on the on the um symbolic approach and the knowledge representation approach the graph approach i've shown the you know the most basic construct in an ontology which is a classification a taxonomy but there's many more i mean you can express disjointness i mean you can overlay multiple ontology so you can say yes the two category two instances of the same category because they're both fruits but uh in the same way i've added the technology stack i could say hey in in the context of a salad these two things should be disjoint or that or maybe this isn't compatible with that so you can keep enriching that and again these um implementations of of semantic similarity graph-based semantic similarity, are probably the most rudimentary and ones that can be the starting point, but the important thing is that you have the controller. So you can traverse the graph in whichever way you want. So you can say, find a connection across my graph, but block going through nodes of a particular type. So really, the powerful thing is you can use standard metrics, but you can also totally own that process and determine how you want similarity to be computed. Because whatever you come up with, you will be able to explain it. And you can say this was discarded because it went through a node of type salad and salad is a blocker for this type for for a for an orange for example i've seen an orange in a salad sometimes yeah you can keep adding context and and um and what i've shown is is really the the the most basic type of representation which is a classification yeah cool thank cool. Thank you very much, Jesus. There were a couple of questions, so let's take them on. Peter asks, would you normally have multiple paths between articles? Absolutely, absolutely. And I kind of quickly went over that, but yes, I mean, absolutely, because remember that an article connected to a number of entities, a number of things. So an article was about Neo4j, about Java, and then for each of these concepts you will have a path, so you will have a number of them. Again, another option for you to determine how you want to combine them. You want to add the more paths, probably the more paths means the stronger the similarity. Maybe you want to average them, maybe take into account. So yes, absolutely. There will be, and as you keep adding, because that's what we did when we included the technology stack. So you're offering new paths for exploration. So the more knowledge, the more curated the knowledge about the domain, the more control and the more additional paths you can offer for exploration and the richer the semantic similarity will be. So big yes to that. Absolutely. There will be multiple paths between articles. A comment from Mark, using the context to explain to skeptical managers is critical. I think that's very true. If your black box has something and somebody asks you about how did you come up with that 20% similarity is like, well, my machine said something and printed it out in the end. And maybe this case sounded like a non-critical one because worst case, you recommend an article that's not relevant and no one will click on the link. But think of a more important decision. Think? Think of an investment, think of a medical decision based on other examples. You really want in some cases explainability is a must. Yeah. So totally agree. And one more question from Ryan. I want to figure out a way to do this for property records including spatial similarity where then dave suggested maybe k nearest neighbor uh if that could be something to to use the the the actually the it's it's approximate uh nearest neighbors i mean it's kind of a different implementation and and i i mean on that i i will have to do a bit more research, but maybe the documentation you'll find. But it's based on nearest neighbors, the search on the vector index. But the question was whether it can be combined with spatial similarity. Exactly. I mean, the spatial is not yet another index. So again, the more information you make explicit in the graph, the more elements you have to... maybe your similarity in the end could be a combination of these contextual connections between entities, but you can bring into that the geographical, the geospatial proximity as well. So absolutely. So yet another another element to to bring into the into the the the semantic similarity comparison yeah absolutely and one more from dave um vector index is the community edition or where do you get uh to able to play with vector index vector search i think it's currently a beta feature so it's available in in all the editions um obviously jesus you just showed it on on sandbox um if you download you need to download the latest version but not not not 512 but at least you need to it was 11 11 exactly anything after 5 11 um will will will have it um as a beta functionality and then you can explore. Yeah, absolutely. So that's, yeah, like I said, I use Sandbox and you have it there. So approximate nearest neighbors. Yeah. Thanks, Mark. That's what's implemented. So, yeah. Cool. I think that is a good way to end the episode. Thank you very much, Jesus, for presenting this, for going hands-on again and showing the code, especially with the vector search. Very interesting and good to see it hands-on how it works, particularly because it's such an interesting topic and so many people are wanting to know more, but I think there's a little bit of a gap between what can it do, how does it work? What can it do for me? How can I use it? And what do I want to use it for exactly? And I think hopefully we clarified a little bit. and and what what do i want to use it for exactly um and i think that's that's uh i think we hopefully we clarified a little bit uh and on that topic as always all everything we've used is already in github so please go and play with it and let us know how your experience goes and yeah enjoy Yeah. Cool. With that, let's close it out. We'll be back in November, a date to be confirmed, but we'll look to be back in the regular schedule first Tuesday of the month. So that would technically be the 7th of November. Let's see. We might be traveling that week. So we need to find a way to see how we can how we can get that get that to work but um i'm sure we find we find a way um between uh now and then um obviously notes on the 26th of october so please uh check it out have a look uh register for it and next tuesday on the 10th um the final road to notes workshops is taking place uh this one on geospatial data with will lion so if that is something you're interested in check it out tuesday 5 p.m central european time so yeah early afternoon in europe early morning in the in the states and somewhat evening ish in asia So if that works for you, register now and hope to see you on Tuesday. Yeah and with that thank you Jesus, thank you everybody for sticking around on a Friday afternoon if it's your afternoon or Friday morning for Mark, early morning indeed and yeah thank you Jesus again. Thank you Alex and see you all next time. Next time exactly, take care bye. | Going Meta - Ep 21: Vector-based Semantic Search and Graph-based Semantic Search | 3,650 | Neo4j | 20231007 | Episode 21 of Going Meta - A Series on Graphs, Semantics and Knowledge
Jesús Barrasa: https://twitter.com/BarrasaDV
Repository: https://github.com/jbarrasa/goingmeta
Previous episodes: https://neo4j.com/video/going-meta-a-series-on-graphs-semantics-and-knowledge/
Recap of 20 Episodes Going Meta: https://medium.com/neo4j/20-episodes-of-going-meta-a-recap-5a0ccd689c6c
Vector Search: https://bit.ly/4cuZoeS
Cypher manual: https://bit.ly/4cjICzf
Semantic Similarities (Ep 16): https://youtube.com/live/WwTxlyjY35I
Semantic Search. A worked example (Ep 2): https://youtube.com/live/y6eCKIRsA64
0:00 Intro & Welcome
6:40 Vector-based Semantic Search and Graph-based Semantic Search Overview
19:45 Prep the Graph
35:30 Computing Similarities
50:55 Q&A
59:21 Wrap-Up
Check out https://community.neo4j.com/ for questions and discussions around Neo4j.
#neo4j #graphdatabase #semantic #knowledgegraph #rdf #semantics | 2024-10-21T12:47:21.085260 |
https://www.youtube.com/watch?v=9DDdFKVvZQc | Hello, Jesus. Good afternoon, everybody. Thank you for joining. How's it going, Jesus? Where are you? It's going great. Well, I'm in a different room, absolutely. And one of my concerns was actually whether I was going to have a good reception. So please do tell me if things come, you know, all right. So any any any uh interruption or whatever so just to answer your question i'm i'm in spain so i'm i'm in an unusual uh location so unusual for me anyway spending just a couple days with a family but uh yeah it's a very short visit so i'll be i'll be back in the uk tomorrow so it was just uh an accident an accident that we're running this session from here but you know why not i mean we have we run one uh one of the early ones when i was in new york at the at the knowledge graph conference so yeah i remember just to make the the webcast more more international right so yeah anyway same quality of content doesn't matter where it comes from. Exactly. That's the most important thing that we make sure that this definitely hits the mark. And, you know, I change location, you change location. I think that's the beauty that we can do this and make it as available as possible to as many people as possible. So I think that's the core of the target. Before we jump in, I think I will do this until October, but I'd like to mention notes briefly. Call for Paper ended last Friday. So if you missed that, there might be a way, if you say pretty please until end of this week, I might be able to get you in if you want to submit something. So, you know, think about it quickly. Otherwise, you can obviously register for notes already. So it's now seems October a long way away, but if you want to, the registration page is up and running, so you can already grab your ticket, basically your virtual seat, and be ready for it. So you can do that, and all you need to do is go on the webpage. I'll post that in the chat as well. Let me find the link, which I don't have ready right now. Sorry for that. Here it is. So it's the same I shared generally over the last couple of weeks. But yeah, it's going to be a fun event, very interesting. We're currently setting up the agenda. So in a few weeks, end of next week, maybe already early, early week after that, you should see early, early agenda coming together. And I'm looking very much forward to it. So we had lots, lots of submissions. So we have the, you know, difficult task now of sifting through all of them and finding the right ones. So that will be that will be done um by michael hunger and a couple of other for the team and um yeah i'm looking forward forward to the to the day in october so that's that's what i wanted to bring quickly get out um yeah looking forward to that should be a should be a great one which edition are we at i mean it's it's already the fifth edition yeah yeah yeah yeah crazy yeah that's good but now let's let's dive in uh to to what everybody has come to to see and want what what to hear about and want to learn about is um is going meta so in this episode uh the title is easy full graph migrations from turtle stores to neo4j in general the the aim of this so if you maybe are new to the episodes or new to the series and didn't didn't watch all of the episodes so you don't have to this is not a it's a continuation somewhat but it's not like necessary that you have to you didn't have to you know watch all the all the 17 now previous episodes you're welcome to of course and by the way by the way alex i know we're in big numbers now and you know it's harder and harder to keep track but there is a typo there it's not 17 he's 18 in the in the banner so it's true i know i know we'll be in 125. It will be hard to know, but it's one of those things. But yeah, no, you're right. I mean, maybe I'll make some references to some of the past episodes, but the great thing is that they're all up there online, all the recordings, all the resources, everything if you want to go and reproduce them at home, it's up there. So yeah, that's always a possibility. Exactly. So everything is there. So if you watch it outside of YouTube right now live, then go to our YouTube page, neoforj.com, youtube.com slash at Neo4j. And then you find our page and then on the live you see the all the previous episodes and then there's a going meta playlist there's a going meta website which i could also link i think i have it in the video description so there is there is there is that and yeah we we talk about knowledge graphs, we talk about semantics, ontologies, taxonomies. That kind of is the couple of the keywords, I guess, that usually fall when doing this. RDF is a big, big part of it. And I guess today, again, RDF will be something we're going to look at or talk about when, I guess, also using new semantics to work to work with with that kind of data and get it in a it's it's a graph pattern already but getting into it in Neo4j and having making the making the data a little bit more usable in in in certain, I would say. It makes sense to migrate from RDF data to a graph, to a property graph like Neo4j is one, but I don't wanna speak ahead of time. No, no, of course, and you're right. I mean, I suppose the content that we cover here is for, you know, I think it combines some elements of the traditional sort of RDF or semantics stack, Neo4j. One element, of course, is moving data across from one to the other. And, but yeah, we just try to experiment and we've tried all sorts of things. Yeah, some people will find that we're trying things that normally are know are associated with the semantic stack but uh but yeah that's what we like it i mean we we try new things and and and i believe that that's the interesting thing of these series and and today you're totally right so um i'm actually going to if you don't mind start uh probably sharing my screen. Is it, should I start presenting? Yeah, forgotten it's been. So. I don't see it yet. So once it's. Yep, let me share my screen. It's just the usual couple of slides. So I don't want to take a lot of time, but just go straight into coding. But the idea, like i was saying yeah you know it it's just one slide because it's it says it in the in the box right so the idea is to to to do simple uh full graph migrations from from triple stores to to neo4j and and um and the idea again is is quite simple we keep reiterating that. So we really value RDF as a standard that makes it really, really straightforward to exchange graph data. And that's been a theme during this series. And I'm happy to be aligned with the W3C. If you go to the page, it is actually defined as a standard for data interchange in the web so it's like i say a great way of streaming out your graph or you know ingesting graph data in a in a format that's neutral dates as we're going to see it's entirely independent of the way you you store your data you persist your data and um you know it's important also to understand that uh you know my position personal but i think you know that's the shared approach at neo4j is that um you know while we find rdf a great format for exchanging data we have the impression that developers data people tend to feel more comfortable with the level of abstraction that property graphs offer and and again i i you know when i think of alex and me in a graph i see two nodes right and there's connections between us because we work together because we're friends because we run these webcast together because we have common friends, right? So, and it's easy for me to think in those terms rather than kind of decomposing Alex in statements as, you know, Alex's date of the start date at Neo4j is XYZ or Alex's name is Alex Erdl, Alex's employee number, blah, blah. So which is exactly the idea of RDF. I mean, in terms of working with data, manipulating data, I'm very comfortable with the level of abstraction, again, that the property graph offers. And Cypher is a great environment to manipulate the data. But I appreciate, and that's exactly what we're gonna see here today, that if I want to exchange data, RDF is a great mechanism. And that's the topic for today. And the idea is pretty, pretty simple, right? So I'm going to play with a couple of popular triple stores out there. In particular today, I'll use Stardog and OntoText. I was working on Neptune, but I didn't get it to complete. So I'll add it to the repository, but we probably won't cover it in this session. But the principle is identical. I mean, these three platforms store data as RDF. And because RDF is just, like I said, easy to stream out of a platform and easy for us to ingest, thanks to new semantics, the migration is going to be really, really straightforward. That was where I introduced the easy, a full graph migration in the title, but that's the idea. Does it make sense, Alex? Yeah, yeah, that sounds good. Maybe before we dive in, there was one comment from episode 17, the repository folder is missing. That is right. Yeah, yeah. And that's entirely my fault. It's true that it was a bit probably the most experimental of all of them. And it's the one where we tried the open AI interface. It was kind of I had mixed mixed feelings about the results. But anyway, yeah, I'll put it all up there. Sorry. I'll make sure that happens after today with both 17 and 18. yeah thanks thanks for thank you thank you jesus thank you for the comment uh yeah um so we we don't uh forget about it yeah that sounds good um um i think that makes sense that's a that's probably a common a common path i would say or lots of people might have the same, not necessarily challenge, but maybe think about doing it or want to do it or have a data store similar to this and think, OK, what do I need to do if I were to be migrating this over? And I think that's great to show it how it works and how it could be done. Yeah, and that can be, you know, it can be, of course a migration because you want to to work on a property graph because you want to run some analysis in neo-vij let's say you want to run some sort of graph algorithms and and then enrich graph and then bring it back to the to your triple stores whatever the logic in this case we're going to be just focusing on the on the pure migration the moving data out and what we're going to see is that depending on which environment you work on and today's environment you know just last slide is a python notebook but uh depending on what environment you use i mean you will have additional tools in the form of drivers for example you know startup will see that they offer a very nice Python driver that makes it really, really easy to work with a Python notebook environment. Onto text, I haven't been able to find one, it might be my fault, but what I'm going to do is just purely using HTTP requests, which is something that we can do in a notebook environment. But, you know, again, depending on what's uh the uh you know the environment you want to run your migration on you will have better or more or less support but um yeah these two approaches that we're gonna see today uh cover kind of uh at least two options for these and like i say neptune i'll add it later on to the repository, but the ideas should be entirely valid. So let's get started. Good. So what I've done is I'm going to work entirely in cloud instances. So for Neo4j, as usual, while we were starting, I created a sandbox, which should be now up and running. Perfect. So we have it here which should be now up and running. Perfect. So we have it here and the connection details and everything. I've done the same with Stardog. I mean, I looked at the free cloud offering and I've created a simple database that I've called Migration Test. And I'm going to interact with it through the Stardog Studio, which is the equivalent of our Neo4j browser to interact, to write some queries and get the results. Of course, because this is RDF, we'll be writing Sparkle, but the idea is pretty similar. So I will not spend too much time because the focus today is on the migration itself, right? So, but just so, you know, what I did is I created a database that I call test database and it contains music data. Again, it's a data set that startup provide. And let me, I think I have a couple of URIs here that are going to be useful so I can show you what that can look like. But if I want to extract information about Nirvana, the band, we will be able to write something like this. This is the URI that uniquely identifies Nirvana, and I want to say, well, what? Tell me all the triples that have Nirvana as the subject. I'm going to call this predicate and object, and I'm going to return precisely that, predicate and object. So if I run something like that, this should return, well, everything we know about Nirvana. Well, not everything, but because it's just the outgoing triples, not the incoming one, because, you know, I have to be able to speak about that we'll look at that in a minute but we see things like yeah nirvana is a is a band is a producer is a songwriter uh we have a a description of it here which is a longer text description of it it has a name nirvana the band and it's connected to a number of of uh of individuals through the member relationships so we can uh i don't know we could extend that query and say okay uh if i follow the member predicate i would get to the actual members and maybe i would like to get for these members uh some uh predicate object information about them right so and I could actually return the members. So, well, I said that I wasn't gonna do it and I've done it, I forgot the dot. So I don't want to spend too much time basically running queries here, but you know, the idea is, you know, it's a simple dataset that, again, it's a public one and I'll share the link. But here we get the information about the members of the band, right? Kurt Cobain, Dave Corle and Chris Nubosovic. So yeah, that's, like I say, an Adia-based database that we can query with Sparkle. But today, we might just check that things have been migrated correctly with the query. But the idea is not so much exploring it, but looking at how to migrate. And like I said before, I have created this notebook, what I'm going to basically write the logic. And I'm gonna take you through it. I'm probably gonna make it a bit bigger. Alex, tell me if it's readable. Oh, this is readable. Yeah, maybe I put our faces a little bit more further. Let me get that done, But yeah, you can continue. Perfect. So as I mentioned before, Stardock in particular offer a nice Python driver, which is the one I'm importing here. So I've run the install for the PyStardock, but also for Neo4j. And after the imports, what I do is I pass the connection the connection details of course you can copy those but i'll disable them so don't so that you don't use my uh but anyway that that's basically where you know the the instance the endpoint which you get from your uh stardoc studio so you get easily from here so you get the endpoint url so that's pretty much what i'm what i'm using there and um and this is the connection details user and password so you start a connection right pretty straightforward so you use the credentials and you in it indicate the database that you want to connect to in my case is test database which is the one that i was showing here there's i think there's another one that comes by default, the catalog, and you can create additional ones, but I'm using the test database. So that's what I'm doing here. So I have a connection now. And again, this optional part is how I loaded the data because by default it was empty. So this is, I'm not going to run it now because the database is already loaded. But basically I imported the music data set from this URL, which is again another one provided by a start. What's interesting and what we're going to focus today is this section here and that's all. That's why I said it was easy. So this block here is all we need. So basically what we need to do is, we have to create a connection. Maybe we can zoom in one more step, Jesus. Sure, absolutely. Even, does that look better? Yeah, that's perfect. Yeah, thank you. Good. So, I mean, that's a previous instance. So let me make sure that I'm pointing at the right sandbox. So let's open it here. And while I do that, I'm going to look at the connection details. So the URL is this one. So I'm going to look at the connection details. So the URL is this one. So I'm going to copy to the clipboard and get back here and replace it with the right one. So that's my database. That's my authentication. Probably the password has changed as well. Let me copy it here. So it's NeoPJ and this is the password. There we go and i'm going to use i mean i can decide to use uh the database that i prefer but i'll use the default one which is which is neo pj right which is uh created by default and which should be um empty Let me connect to it through the browser like this. I can see what's in there. And while that connects, I'll keep moving here. So what are we going to do? So we same way created a connection to the startup database. I'm going to create a connection to the Neo4j database. And these are the credentials and the URL. And once I have the connection created, the driver created, which I've called NeoDB, I'm going to initialize it. You know that before we can import RDF, in order to use Neo semantics, we have to both create constraint. You might have seen me create every time I use it. But again, you know, won't spend too much time on the details. That's a requirement. That's what this statement does. And then we need to initialize the way we want the RDF to be imported out. I'm going with the defaults. So I'm using, I'm calling the init in NeoSemantics, the GraphConfig init. And then I actually go ahead with the actual migration. And the migration is as simple as, well, if we start from the end, we're calling on the connection that we have to Stardog. Remember this con here is the connection that we created to the Stardog database, to our start of cloud. And they offer a quite convenient function that's called export. Right? So that's, that's exactly what I'm using. So the export returns bytes. So I have to decode to, so that it transforms it into a string, but that's, you know, the RDF as it's streamed by the export function is the thing that I'm directly ingesting on the Neo4j side. And to do that, again, you might have seen many times as using the RDF import method in Neo Semantics that takes RDF in any serialization, by default, the export produces turtle, which is what I'm specifying here on this side. And as you can see, all I'm doing is, you know, import whatever comes as a payload. And the payload is a variable, a parameter, that I've set as the output of these export on the startup side. Does that make sense? Yeah. So that's it. So let's write, and let's see what happens. This indeed sounds very easy. Yeah, exactly. So, you know, it's pretty much, I mean, I think I've run most of these, but let's rewrite again in case some credentials have changed. So I set my connection details. I create the start of connection using the driver. I'm not going to populate the database because it's already populated and what i'm going to do here is uh like i say initialize my neo4j database which by now should be already up and running so we can see it here i'm going to do something because you know i don't like when it connects things for me uh but yeah it's uh it's uh you know the neo4j database is absolutely empty, no labels, nothing in there. And let's run that step here. So all we need to do is, now it's happening actually, it's exporting the database and at the same time is importing it here. Then we close the transaction and hopefully that should leave us in a state where we have the graph loaded on the Neo4j side. So it'll take a few seconds, but what we're going to try next, which is obviously the proof of the pudding, all that's done, it's not a very large dataset, but we should see now on on our neo4j database that we have a bunch of albums producers bands artists again those of you who have seen neo4j uh new semantics before you know that everything is prefixed uh if we didn't specify and we didn't uh you will assign numeric prefixes and as 0, 1, 2, 3, because in RDF, everything is uniquely identified with URI. So that's, I know it's not the nicest thing, the NS0, we could have assigned a nicer prefix, we didn't do it, but it's not the point of this session. There's plenty of previous ones where you can see how to how to do that but what's important is that now like i said we have um in the graph in if you like so if i click and and want to look at some albums and here's what you see how my uh wife is a bit flaky but anyway let's uh take one of them anyway so let the music play uh that's by barry white but you can see how a node like we said before right the node alex aggregates all the statements all the information that we know about alex so the same here so we have a node representing this album and it collects all the triples that we have ingested so for example the release date the description the name the uri the name, the URI. So all this happens transparently and that's what NeoSemantics takes care of. But now we can start navigating and see that this album is produced and actually performed by Barry White. It has a track that we can keep expanding. I don't know in this case, but there you go. It's also connected redundantly to Barry White, there's another writer and so on and so forth. So we can keep expanding here. So that's it. So migration couldn't be really easier because literally, again, thanks to the entirely generic nature of RDF, we don't really care about the types of entities. There's no notion of tables. I mean, there is. And we could be more precise and we could say, hey, just load that portion of the graph. But if we say, whatever you find in this triple store, just dump it into Neo4j. RDF does the job. I mean, it's already a graph. So it's trimmed as a graph as triples, and then we ingest it on the other side and we have exactly the same information and i think that's that's very convenient i mean now you know i get some comments when we uh import this is a small data set and and when you import larger ones it might not be the most efficient uh format and and and it's absolutely normal, right? That's, I mean, you have to think of RDF. And again, we can look at that. Let's go back to the studio just to look at it for just a second. So this is a triple, right? So for example, Kurt Cobain is of type a songwriter. We know it has a description. So everything is a statement. And each statement is self-describing., is an atom of information on its own. I mean, if you think of it as these are the three members of the band, if we were thinking in tabular terms, you probably would have a table of members or persons or individuals or artists where you would have maybe a column with a name, a column with a description, a column with a label, I mean with the description, a column with the label, I mean, whatever you have in this database. And for each line, you would have specific fields. So a triple is like each of those fields on its own. Basically, you provide the coordinates, right? You provide the X coordinate, which is what's the primary key, the unique identifier that something applies to. The Y would be the column. In this case, I'm providing the name and then actually the value. So each atom of information, each field, if you think of it in a data set, is a triple and you have to provide the X and the Y. Now, while that isn't super generic and it's great for data exchange and it's great for aggregating data i think we i wanted to combine the data that comes from startup with data from other parts i mean i can bring it all and it will link nicely assuming the schemas link nicely and everything but what i was going is uh this is super generic but it's not the most efficient i mean if you stream data like that think of the huge amount of data redundancy that you see so the subject is repeated at every triple the predicate will be repeated every time so and especially these days when we're getting used to formats like arrow that are super efficient i mean all those formats that exchange large volumes of data are based on the idea of of defining uh uh repeatable structures so if you define a data frame with columns with fields then you can compact it a lot more and do a lot more compression rdf is super generic but it will not be the most efficient way of changing data so be patient if you're loading a large a large data set and that happens you know across the board i mean moving data into neo out of neo into triple store because of the nature of the of the format itself so that's that's it's it's what it is basically but i think it it kind of did the job in terms of hey that's that that was really really genuinely easy right so yeah um and. And I would say, you know, let's check that what we have done was what we intended. And basically, the intention of a migration is that at the end of the process, you have the same information on the new store as you have in the original one, right? And we have the tools in NeoJ to prove that. Let's take, for example, and I have, I think, another URI here. Shake it off. I have one of my daughter's favorite songs from Taylor Swift. Yeah, well, everybody gets the earworm for the rest of the day. So let's find that song. And again, we can look it up on the i mean we could continue with nirvana but you know taking this one so we have this uri that uniquely identifies a resource so if i return that that would be the song um identified by um these um uri and as we can see it's connected to well to a number of individuals taylor swift as the right i mean it seems to be co-written by a number of individuals and it's part of an album that where the track is found so um right so i let me again go back to my my cheat sheet here so i have this um well i'm actually gonna write it so we can we can go to to startup and say i saved it here now that i remember so this is a general general triple that would get me everything we know about shake it off right and and again okay because um let's try to dissect this but basically i'm saying give me every triple subject predicate object that where either the subject uh is shake it off or or the object basically what i'm saying is give me all the triples that are uh where shake it off the subject so like but also the ones that for for incoming incoming relationships where shake it off is the is the object so if I if I run this query uh that should return me in this case well just eight triples which is a pretty uh small data set easy to to test and then I go to the Neo4j side it would be something similar to that but in order to get the relationships I would need to do something like that i mean incoming and outgoing let's call this rel for example and i'm going to return r and rel so what i'm returning is uh the all the triples about the the shake it off song itself but also any relationships that come in or out of it so if i do that and i put it on my um rdf endpoint which i get by running post um rdf neo2j which is the num the name of the database uh and cypher so i'm using the cypher endpoint so my query is um this one oh and i have to of course avoid the double quote inside double quote let's make this one single quotes but basically what i've done is i pasted that cipher and i'm going to um run that and that returns the rdf but I like it more in, let's make it format and triples because it's gonna be more similar to the format and triples. Now it looked like subject-predicate object and it's easier to compare, but we see that one, two, three, four, five, six, seven eight the eight triples that we that we're getting on the other side right so if i look at my studio i get exactly the same collection of statements right so for the same query so uh important to see that there's no data loss and that the information that has been imported has been totally preserved and can be regenerated which is another important aspect if you want to like i was saying before right maybe you want to um you've loaded your data into neo4j you want to do your analysis you could even actually let's why not let's run it as a as a background thing here let's try to open this in bloom and uh and yeah let's say you know you have it in neo4j because you want to run some uh some algorithms and and then enrich your graph and eventually bring it back to to um to your triple store or do whatever with it so um so yes that's that's pretty uh i would say you know mission accomplished so we have migrated the data in a very very straightforward manner and we have seen that that the data has been has been preserved so that's test number one and well this if this one's up here let's hang on let's run it neo4j and the password. I don't know if I'm doing this right. Am I supposed to enter it twice? You're right. No, I think you didn't enter it the first time. Now, yeah, should be good now. Okay, so that should be... Yeah, it's good. Don't wanna save that, so that should... Let's do a quick test, because one of the cool things in Bloom is that we can run the algorithms without having to write code, right? And that's something that... Anyway, well, I'll leave that because it might take a few seconds to initialize, and I don't want to diverge, you know, distract the attention from the topic of the day, but we'll get back to that. Good. So that covers the first, you know, the first example, right? So, RDF data in Stardock, we moved out with these export statements, and again, you can be way more granular. I mean, we're it out with these export statements. And again, you can be a way more granular. I mean, we're going with a full graph migration, which is what export does. But of course you can be more selective, more precise and say, hey, I'm going to write a query and specify the portion of the graph that I want to import. Ultimately it's exactly the same. So you can be more granular, but what you would be doing is, like we described in the graph here, you're streaming out RDF, and NeoSemantics is persisting at a Neo4j. So same idea, still valid. Now, if we move to the other database, to the triple store that we're playing with today, which is Ont ontotext that means this has gone to sleep but again they they offer it's not like um like a sandbox environment but it's more kind of a public data set and in particular this one is uh is a demo uh data set on the um i I4.0 knowledge graph. So it's like, it's an ontology. In this case, it's a schema. So kind of a model for industry 4.0. So if we... Ah, okay. I think there was a link here to... Yeah, exactly. I think that's... I can open here a tab for additional information but it's yeah it's an anthology where we define a number of concepts and well that's kind of the visuals of it and it's pre-loaded in in onto text in in graph ap in the form of of triples again and we can run some example query. That's a very, you know, the one that comes by default, getting all the triples here. But again, subject predicate object is the format. Now, if we go back to our notebook, the problem is, like I was mentioning before, is I couldn't find a driver for Python driver for onto text. But again, the idea is pretty simple. It offers a Sparkle endpoint and that's the URL of it. That's the one as you can see that I get from here. And all I have to do is write a full generic query that basically streams out every single triple of the graph. That's what it looks like. OK. So it's basically take every triple, and we can test it here. It's very similar to this one. So it's a construct query as opposed to a select query in Sparkle. It's one that returns triples instead of tabular tabular results so i can set a limit here basically i'm saying take any any triple subject predicate object and stream it out and i can say put a limit of 10 and i can run that and that will i could put the raw response and it shows you know subject predicate object but you can download it i mean you can export it in any any of the rdf serialization formats so that's that's my kind of work around for that type of scenario so i don't have that nice driver but i can say okay run this query and that's an http endpoint so i'm using the request uh a module in python and i'm saying okay uh get which is basically a gate http request on this endpoint which is the the where the database is located um and these are the parameters it's there's a query which is my full generic query that is take anything and bring it on. And I'm specifying the serialization format. Like before I'm using turtle, but could use any other. So basically that returns an export result in the form of the result of a get HTTP request. And what we're doing, and this is literally copy-paste from the previous one, so I'm going to connect to my Neo4j instance, and I have to update that with the new details, because that's in the previous examples of connection details, that's the new URI. I'm going to replace the old one. And I'm connecting with the credentials that appear here. So replacing that. And I'm going to create a database so that we don't overwrite the other one, one that I'm going to call I40 Knowledge Graph. So let's go to the database and i can do create database i40 knowledge graph it will take a few seconds but like this i kind of work in two separate uh environments you see that the database is already available so i can now from my notebook I can it's a like I say it's a copy paste so I'm starting a connection but this time on a different database I force it kg I'm initializing it exactly in the same way I'm creating the constraint and initializing or the default options and then what I do is okay if my request before return at 200 which in http terms means okay success and then what i do is i do exactly the same thing as we did before we call the rdf input inline and the payload that i pass is the text returned by the export that we run before, these export up here. Does it make sense? Yeah. And if you were to do this without having previously loaded anything, or you can just keep the database as Neo4j, but in this case, it makes sense to do a different one, but yeah, it's very similar it's almost uh like you said it's almost the same the same literally uh if we go back here is copy paste except you know even the last line except that the payload instead of getting it from calling the the the export in the driver what i'm doing is uh getting the result of this HTTP request. But the concept is identical. So both operations, one more Pythonic, more driver rather than HTTP request is producing the RDF and I'm consuming it. So if I have made no mistakes, let's give it a go. So this, oh, did I, i don't know if i run this maybe it will still exist for my previous execution but but that's uh that's pretty again it's a small data set so um we should have it um see there you go so that's it yeah so there's something there yeah so there's all the because it's an ontology there's lots of of types lots of labels but again we can run a simple test and i have my cheat sheet here with some uh uris and um if i go to my sparkle end point uh i'm going to look at, for example, one of the concepts, which is this one. Instead of a control account, I'm going to use a select star from here. So basically, I'm returning all the statements, I mean, the predicate and the object about a standard organization. Okay. If I run that and I want to put it in the form of the table, we see that standard organization is a class. Actually, yeah, it's class according to RDFS, class according to OWL. It's a thing. It's the same as a standard organization. Well, you see there's 10 triples here. I'm limiting it to 10. So maybe there's more. Let me remove that limit and see what that returns. But no, it's actually 10 anyway. So yeah, and there's a comment label and some other equivalent classes. So if we try exactly the same query on the NeoPJ side, we should check that the input has been successful. So that would look something like match. I mean, I'm looking up by uh uri which is the quickest way of of doing this but and actually we would get um in this case i believe we were specific about the the outgoing ones because i only put the ones that had these as subjects so if i return r and rel that will return the visuals but again what we're interested in is in the post request on RDF. This time, instead of Neo4j, it's I40KG database. And I'm going to use the Cypher endpoint. And just like before, my Cypher query is this one. And as always, I have to be careful in the browser of the quotes, the way we're using them. And that should be it but just like before because it returns if not total by default I'm going to say the format I want is and that's again entirely because it makes it easier to compare. And we can see one, two, three, four, five, six, seven, eight, nine, ten triples, just like the other case. So we have the thing, we have the class according to OWL and RDF schema, the equivalent classes which were standard organization and same as. So pretty much the same information, well, so pretty much the same same information not pretty much exactly the same information that we get on the other side right so these these 10 10 triples so no data loss again and uh like we saw before entirely possible to completely regenerate the um the rdf as if as itF as it was imported. So in case we wanted to bring it back for whatever reason. So yeah, I think I should probably put a comment in the, because this is the onto text example. Let's make it clear. onto text example. Let's make it clear. But yeah, that's pretty much it. I hope that that was clear and simple enough, because that's, again, one of the requests that we often get from our Neo4j users. How can we do a simple migration? And yeah, it doesn't get much simpler than these in terms of of migrating data from one platform to another and again this this is something that we've shown in the past that doesn't work only a triple store to neo4j we also use it to move data between neo4j and neo4j so sometimes it's a convenient way of moving data from one instance to another a whole graph or a portion of the graph so again keep the idea that as we said at the beginning the rdf is a great model for data exchange very flexible very easy to use but also with the caveat that you know don't expect the best performance because we are literally streaming uh atom by atom of information so take a pause here and and see if we have any questions does that make sense alex did that kind of cover the the the topic clearly what happened today i can't hear you i don't know if it's yeah because i'm i was wondering um that was very clear uh thank you jesus um the question i had was is is stardog and onto text as the two examples you showed, did you pick them because they are the most famous ones? Or is this a particularly easy example? Or is this the procedure you would go for no matter what triple store you use in general? Yeah, yeah. So I think that they're some of the most popular. I mean, I mentioned in the slides, these two plus Neptune. I work locally with RDF4J, which is another, you know, it's more an API for Java API to work with RDF, but also offers kind of a mini triple store. But the, you know, the three are some of the most popular, some of the ones that our users mention more frequently, but the ADA is exactly the same. And it's the great thing is that, you know, they all share the fact that they offer an out of the box RDF because they natively and store the data as RDF they expose it as RDF and that's a format that we can easily consume so yeah probably the most popular but the principle is exactly the same with any RDF store okay good and then there was a question from why what's the advantage and drawback of migrating a triple store into a neo4j i think you mentioned some of those like you want to say like you say streaming atom by atom can be a little bit slow and it's not a sort of high performance i mean if you have i guess would it if you have the same data available as csv would you prefer to go by a via via csv or import or would you still do the rdf route right okay so if i had uh csv data and i i mean i would need to choose i mean because i suppose the question here is what type of grab store you want to use and and that's kind of the the the the the i would say the the question that's been going on forever. And, you know, whatever I say, it's going to be biased. But I think at least I try to be objective and articulate my answer. So I really value RDF as a format for data exchange. And we prove how it's extremely useful to very simply have a graph and reproduce it on a different platform. But it's not. And again, that's my impression. But I think it's backed by the fact that, I don't know, go to Stack Overflow. And you will see how you know there's uh surprisingly or unsurprisingly more questions tagged with cypher than they are with sparkle maybe that is because you know developers ultimately the people who interact with uh with stack overflow are the people who are building software asking questions that's because they they're they're they're facing these type of issues so i believe that uh I believe that when working with data as a graph, I think it's a more natural, closer to developers, closer to data people, the level of abstraction that Neo4j offers, that the property graph offers. That's my intuition and that's what I feel. But again, if you talk to maybe some startup guys and they will say that no way that this is. So, yeah. And then that's looking purely at the model, at what the data looks like and what the experience of interacting with the data from the point of view of your query language and the model. Then beyond that, of course, there's a whole host of questions that you have to ask yourself in terms of what's the underlying data store. Because things like what type of workloads, what type of high availability you want, what type of, do you want an open source platform? I mean, there's so many questions that you need to ask and that are not model dependent but more platform dependent then you know again i i would have plenty of opinions that i would be able to to justify but i think you know whatever i say my people might take it with uh you know with a bit of a pinch of salt because they might say hey you work for neo that's that's true but we're doing these're doing these webcasts where we try to kind of be objective about the two things. So I hope what I say gives you some hints and some clues and some ideas of the kind of things that people look at when working with graph data. Yeah. Yeah. I think that makes a lot of sense. I mean, in general, we try always to say um you know use the data use the tool that fits your data and no matter what you do think about you know what what kind of data model do you want to go with but what also what kind of you know challenges do you have with your data what what what's the ultimate outcome you you want to achieve by by using data and what what in terms of data combined with a platform and if if you you um you realize that maybe you want like you said right you you want to do a little bit more with your with your rdf type data you want to do some algorithms some some uh some graph algorithms in particular you want to find out a little bit more about the the the connections beyond the first degree which is i think super easy visible in in rdf but if you go a little bit further two three four maybe more steps it can become more complicated and then it can become a little bit more overloaded you know might make sense to at least look at it in in a in a property graph way and uh and see if that helps you um with with uh with that approach absolutely yeah integrations i mean what kind of ecosystem you work with where are you going to get your data from what kind of which which are the options i mean we've seen today that you know if you want to get data out of one of these platforms, you have different experiences. If you're a Python writer, maybe you prefer, you know, startup because they offer you these, these driver then, then onto things like that are important. You know, how you consuming your data, what kind of visualization, what kind of integration, like you say, with, with analytics, with machine learning you know all these elements are important and you know it's it's it's it's a big decision and you have to do your due diligence when you're starting a project and and try to understand which you know uh benefits one has uh compared to the other i mean today we're just focusing on the mechanics of moving data across and showing how you know the rdf gives us you know a really really straightforward way of doing it. But all those questions are relevant and you have to, of course, answer them before you choose the type of platform that you want to use to manipulate the data. Yeah, that's always true. Always think about that and then do some testing, do some evaluation, obviously, and see if that works. And we showed the free tools, at least from our side, you can give it a go and see if that is something you see is maybe worth further exploration. Sorry. There's another question from why, and I don't know if I missed something here, but is there a competitive modeling tool for graph db such as neo4j of protege i think we talked about protege before but i don't know if i mean modeling tool does is it is it like something like arrow or yeah it could be it could be i mean that's yeah we we we often use Arrow as a modeling tool. But again, we've shown, like you said, Alex, I think only a couple of, I don't remember how many, but we've shown how you can perfectly use Protégé as your modeling environment and create your ontologies. And then the model that you create with Protégé, I think in one of the episodes, we use it as the target model to ingest data in the graph and to automatically build the pipeline. I mean, we've also used it to, I think the last time was to create a streamlit application that would read the ontology. So again, these tools, Protégé, BullParty is another one, they produce models in RDF and we've shown how easy, the moment it's RDF, we can bring it into, you can create programmatic, how would I say, actions on it, whether that's to ingest data, whether that's to run inferences, whether that's to run validations. So yes, there's arrows as an option. There's the data modeler as well, which is kind of a variant of arrows as well, right? That's the data modeling tool that's part of the workbench. Exactly, yeah, that's a version of it. But I would say that, you know, use the one that fits your your need and and uh and yeah and you will find possibilities for for interoperability again in some of the episodes in this series yeah and then there is a question i think two two questions um more or less the same um uh same way um how how is it can we can you explain how how the neosemantics knows what to do with with what basically so why how does it know that something becomes a node and something becomes a relationship sure that's uh that's pretty straightforward. So it's describing the documentation, but I can put a pointer. It's basically three rules. I mean, triples come in two shapes, right? So a triple can have a, and maybe I can start sharing my screen again, just to show an example. So let me share it again and maybe bring the Stardog. So this example we're showing. Can we see my screen? Am I sharing it? Yep. Yes. Cool. So this is a collection of triples about the song Shake It Off that we mentioned before, right? So as you can see, there's two types of triples. I mean, let's call it types. But these three at the bottom is what we call literal triples, right? And they're literal because the object, the third component of a subject predicate object is an actual, we call it literal value in ARIA, which is a string, is a number, is actually a property value in a way. So these literal triples become a part of the node structure. So all the ones that contain that, that share the same subject, for example, shake it off, will become a node. And all these will become attributes of the node. So literal triples become part of the internal structure of the node. And then object triples, which are the ones that have in the object another resource. These are relationships. This is a connection to something else. So, for example, we're saying that ShakeItOff has a writer, which is Max Martin or Taylor Swift or this other guy. And so this is another thing, another resource that will have itself. We can actually check it. So we can take this Max Martin and say, what do we know about Max Martin? And we can keep navigating the graph. And so if we do that and run the query, we will get all the triples about Max Martin, right? So there's a bunch of them. And again, we'll see that some of them are incoming, some of them are outgoing. But in this case, we can see here at the bottom, let me hide that, that Maxin uh has a description has a name so some some literal properties as before which would become part of the internal structure of the node and some relationships that will connect max martin to to other things so these are the two rules and then there's a third one where uh we have a type nodes and type nodes in rdf it's a keyword rdf type which basically gives you the the class the category that that this resource belongs to and this in this case this is a songwriter and a producer so these types can be represented in neo4j in two different ways so you can use uh what we call in neo4j a label which is a tag that you associate to the to the node which is the the option that we use by default, or you can create a separate node and link them together through a type relationship. But these are just the three rules that we apply. And that's what semantics does, under the hood, I mean, not under the hood, but internally as you ingest, which is the kind of standard natural way of of of uh of importing uh um data let me uh stop sharing yeah so that that's it i believe it's it's described i can i can send you a uh a link to a uh post where i describe that uh i'll maybe put it in the in the in the chat but that's that's it the logic is is pretty simple yeah yeah that's good data but i'll i'll shall i put it on the i'll put it on the chat for you alex and then you can yeah yeah put it in the chat and then i'll put it and i get perfect thank you so that's um that's basically a longer description of what i just um described yeah yeah i'll put it also in the video description so um so people felt they'll be able to find it yeah and and and i i i appreciate that you know we we've just focused on the migration part and we kind of assumed that people were you know familiar with this but uh yeah again there's just plenty of previous episodes that we can yeah look at for more details absolutely um yeah um i think that's that's it for today. Thank you for watching. Thank you, Jesus, for showing this. I think this is very good. I mean, if any episode seems to me like the best one to try out at home, basically, is going to be this one, because it just seems to be three lines of code and then you're good to go. So it's very friendly and makes it very easy to try and see. So if you are on the fence of giving it a spin, see what Neo4j can do with RDF type data. And if that works for you, then hopefully we showed you how in this episode. Yeah, in terms of upcoming sessions, tomorrow at the same time, so 5 p.m. Central European time, I have Maria De Maro as a guest, and we talk about linguistics and graphs. So she wrote a paper on that. So how to analyze conflict dialogue, argumentation-based dialogue in NLP and graph. So that should be a very interesting session tomorrow. And yeah, I'm looking forward to seeing you then. Hopefully you can make the time. And if not, obviously all the episodes will be available as recording on YouTube. So just go to our YouTube channel, and you find all the playlists of Going Meta, also the other channels, the stuff we're doing. So just check it out if you want to. And if you like it, then please give it a thumbs up or a better yet, or both, actually. Subscribe to the channel or follow the channel that would be nice um and yeah um thank you jesus again for presenting for for giving the demo and for um for showing how easy it is and obviously yeah for taking the time today i'll see you soon perfect no my my pleasure see you soon and i'll make sure that all the content for 18 and 17 is up with the rest in GitHub. So thank you all. Perfect. Thank you very much. And yeah, have a good rest of your day. Bye-bye. | Going Meta - Ep 18: Easy Full-Graph Migrations from Triple Stores to Neo4j | 3,800 | Neo4j | 20230705 | Episode 18 of Going Meta - a series on graphs, semantics and knowledge
Jesús Barrasa: https://twitter.com/BarrasaDV
Repository: https://github.com/jbarrasa/goingmeta
Previous episodes: https://neo4j.com/video/going-meta-a-series-on-graphs-semantics-and-knowledge/
Neo4j Sandbox: https://dev.neo4j.com/try
neosemantics: https://neo4j.com/labs/neosemantics/
Arrows: https://arrows.app
Blog on Neosemantics: https://jbarrasa.com/2016/06/07/importing-rdf-data-into-neo4j/
Check out https://community.neo4j.com/ for questions and discussions around Neo4j
#neo4j #graphdatabase #semantic #knowledgegraph #rdf #semantics #migration | 2024-10-21T12:48:59.650904 |
https://www.youtube.com/watch?v=w-PwMyzokSw | all right welcome everybody to this episode of going meta we are on episode 17 and this episode is going to be about open ai and neo4j and how you can integrate rdf uh with that i am subbing in for for alex this week and yeah i think i'll hand it over to you jesus and tell us more about this perfect yeah thanks max yeah it's um different host and different time because normally this should have uh taken place next week but both alex and me we're going to be traveling so you kindly step in and a couple days earlier but you know hopefully equally good it's going to be a an interesting one so let me um as usual bring my screen um i'm gonna share my screen let me sure i'm doing it right so share the whole screen, hopefully. I can also mention while we're waiting that our developer conference, Nodes 2023, is open for CFPs and early registrations. So if you want to have a talk about Neo4j or something like that, you can head over to that link down there and uh sign up excellent that is definitely great advice and everyone should be so yeah exactly good stuff so hopefully you can you can see my screen and and well you know that you know how we do it right we've done this before i don't know it was the last one we did but it's probably close to to a year or less than a year i believe ago yeah but we open with a couple of slides really just uh uh to introduce introduce the concept of what we're going to try and and go we go straight into into coding which is what we normally do in in uh in these growing matters the fun part exactly exactly so yeah well like like you said i mean today you know, we couldn't, you know, keep postponing it any longer because, you know, everybody's talking about OpenAI, so we had to do our episode on it. So, yeah, I mean, the idea is we're going to try to use OpenAI and in particular, well, I'm going to show at the beginning chat GPT, but then we're going to use the OpenAI API to show some more advanced integration. But the idea is pretty simple, right? So we want to accelerate the creation of a knowledge graph in NeoPJ by getting data from OpenAI, from the API. And the idea, as we know, is we interact with these api using uh using natural language right so our questions are going to be natural language but we're going to see what type of answers can we get of course we've all seen it returning uh amazing you know natural language explanations and um but we're going to explore uh uh return more structured responses like in the form of things like JSON, Cypher, and we'll eventually get to using RDF and connect that with NeoSemantics as well because we're going to see and we've talked about RDF in those terms many times here. So RDF is a great vehicle to exchange graph-shaped data. It's a very, very good way of... Does that make sense? Is it? Yeah, I think that sounds great. I mean, RDF is this kind of thing that has all these references and you work with ontologies that are set up to actually give you a semantic meaning, right? Just beyond what the data is actually capturing, right? That's exactly it. So it's a standard to exchange data. I mean, there's multiple serializations. And you're right. I mean, every statement, because RDF is described in terms of statement or triples as they call it, subject, predicate, object, and these statements use unique identifiers called URIs that refer to vocabularies, ontologies, that provide additional context and definitions of the concepts that are being described. So that's exactly it. And I thought it would be a good mechanism to exchange information between the API, between the ChatGPT and Neo4j in this context. And like I was mentioning before, we'll go from there to looking at a more programmatic integration, right? Because interestingly, APOC, the extended version, has just released a couple of additional methods that provide integration with this OpenAI API. So that's really, really cool because in a couple of lines of code, we're going to be able to generate a natural language request to this API, get some results. And if we get them in RDF, as you know, it's the objective of this session, we should be able to import them directly using new semantics or visualize them against semantics. But that's the idea. Let's go one step at a time. But that's the outline of the session. I can also mention that these ML functions are now available in sandbox as well. So if you want to go and try them out there, you can do that as well. Absolutely. Not only you can do that, I am going to show that that is the case. Let me, I'm going to run these in the Sandbox. So as we speak, we're going to create an instance and let's leave this one running. And I'm going to bring up my uh chat gpt a new a new chat i'm going to put here on on the side so that i'm going to try to get the two things uh running in parallel so i have the the chat here and have my um sandbox once it's created it's already here perfect and yes you're totally right so i i learned that these functions were already deployed in the sandbox so we're going to see them in in action so we should have that uh ready now so we have the user and the password let's leave it connecting and uh well let's start with the session. So I'm going to begin with a normal chat first. So before we hit the integration with APOC, let's look at the kind of things that we can ask from ChatGPT. And I'm going to take any random question. And the one thing that came to mind, normally I prepare this session with some code, but it's interesting because this one is very much natural language, right? So we're going to be presenting prompts and my plan is literally just a few notes of the kind of questions that I wanted to ask. But let's take a random one. So I was saying, which uh what uh commercial products actually slow that noise it's my keyboard commercial oh that's my sandbox still starting we don't need that we can this is maybe a bit of a distraction but let's focus here yeah actually probably i should go with something a bit split like that. And then we put this one on the other side and let this one. So I was going to say what commercial products are associated with the tennis player Rafa Nadal all that's and you know question like any other so if i ask this question to the chat gpt i will get a a very nice answer in natural language which is right but what i want to do like i said at the beginning is we we're going to want to populate a knowledge graph with this information right so what i'm I'm going to ask the bot is to, can you provide the answer? And we can do it in many different ways, right? So we can, of course, ask it to, well, I don't know, we could, in Jason, like, I mean, we can even put it some structure. I don't know what kind of JSON will it generate, but let's let it surprise us. Well, there you go. So we have, yeah. So you can give it kind of system commands to kind of, you know, take your answer and kind of reshape it in in different formats that is that is right so we uh now we have kind of a less less rich answer right better better structured so we have a we have something that we could eventually i mean why not i mean i see the the the the name of Rafa here. So I could say, why don't you, I could ask maybe to produce a cipher to import that JSON into Neo4j. Let's see what that does. that that would be a you know uh there you go so it's telling us that there is a function called um json which is uh totally right so that would be one way of of doing it so that's great advice. So we could even try that. So this is basically reading the JSON. So it's unwinding and creating actually the player. It's hard-coded, but then it's iterating over the keys. I mean, we could even give it a try. Because sometimes it's not totally accurate, but let's let's let's put it to the test why not so if we take the the json and we put it in a parameter let's call it json and we store this right so we have this data and if we take this uh code now will it work so basically what we need to do though is to change it from uh reading it from a file to reading it from this parameter right so instead of so we need to um unwind not value but we need to unwind oops um what is it is it right but it it doesn't seem i don't i don't think that's built for this type of of output is it because you know it seems to be unwinding on the player it's because it's not a collection so we have like lists of products so it doesn't uh depending player because there's not a player uh element here no clearly clearly it's not not not polished but anyway that's that yeah that's that's not what i you know wanted to test anyway but it's uh it's you know one possibility is to try and and have it uh have it structured like that so um yeah it would be great if the two would be in sync but anyway so that that will be one one one possible way but what i was going today is um to to try and and and do it in a way that basically i didn't have to face this kind of problem right that the format of the input determines very much the you know the the the code that we have to write to to ingest uh the you know this json in particular because you know it could have been organized in a different way right so we have so you want to be a little bit more precise about what you're asking for and such and you want to be structured about how you're actually doing yeah because you know the thing is you know i i wonder if we could ask the bot to generate an alternative JSON. So basically the same information could be conveyed in different structures. So we could have a route, the Rafa Nadal, and then we'll have a map with the... It could have been a list or it could have been a different way of nesting. And this is, in the general case, I suppose undesirable, right? Because we have to adapt your ingestion code to deal with that. And that's what I was thinking. Maybe you trying to explore the RDF option would be an interesting way. Let's see what it comes up with. So if I say, can you provide the... I mean, actually, let me copy the previous question. I suppose we can go back and say the same, but now actually let's ask it to produce it. but provide the result in RDF serialize as turtle because we have multiple realizations let's see if you know what that gets so hopefully that will produce something that we can read and okay so perfect. So as you can see, these are the triples that we're talking about before. So it's telling us that Rafael Nadal is a player. There's a brand called Nike and some others. So it's describing products and the products have labels and they're endorsed by Rafael Nadal. So they link to it so you know that looks that looks uh decent so it's uh it's a collection of of statements that we could eventually uh try to injure let's let's be ambitious and try this time again if he could otherwise i can write it myself but can i uh ask you to produce a cipher statement to import that into Neo4j using neo semantics or m10s i don't know if that's help it finding the way to do it okay to import that that's looking good perfect so it's uh i suppose it's just nesting the whole rdf but inside a call to the method RDF input inline and that's exactly what I was meaning before so because RDF is a collection of statements that they themselves form a graph I don't need to care much about you know that the structure and the kind of graph that I'm producing because it's already it's already a graph right so so just keep everything in line and just throw it in there right exactly and then delegate to to neo semantics to process those triples and uh let's uh let's copy this code maybe adjust it a little bit so let's get rid of this json which was not very useful but if i if i paste this code and i don't know if this is being um very useful to split the screen like that but is it is it readable on your side max or is it too yeah i think it's all right i think it's all right right so basically as we see all it's done is you know there's a collection of triples here serializes rdf but they're inside the this call uh antennas uh Neosemantics, RDF import, inline. Inline is the flavor of the procedure that takes the RDF's payload, so that's exactly what we're doing. But instead of just returning the termination status, I'm going to, because it's a single call, I can run it from here. It's going to complain because before you can import as we know this has happened many times it's going to ask us to create first a um a constraint that needs to be added before new semantics can operate so I will uh quite handily it's part of the error message so all I need to do is copy this paste it here uh and where is it did I know there you go so that's the constraint created and if I run it again now it's going to complain because we need a config first so the first time actually let me because you know I'm thinking of this as if I, I mean, I'm, of course, familiar with the new semantics plugin and I know what to do next. But what if I said, hey, I wonder if it gets to that. Certainly, that means... Oh, brilliant. That's great. Oh, stop it. I don't need to... I mean, I don't need to run it. Yeah, basically, it's found exactly the missing piece. So we need a a configuration we need to initialize it right initialize the the the graph and that would be perfect and then of course continue with the whole input in line which they needed to repeat so let's just copy that fragment which is exactly what we need is it letting me copy just a portion of it or am I being copy so yeah that's useful and probably that removes pressure from me generating, uh, the manual because people can do that and get answers for everything. Right. So we don't need to let's skip the docs for next iteration. Right. Exactly. Okay. So, well, we can, we can, yeah, we can set that up. Um, uh, we can create the. That's done. And finally, we can try it finally once more. And actually, before I do imports, yeah, no, let's go ahead and do the input. Why not? Didn't like it. So, there was a dot in line 49 I wonder if that could be. Is it some URI that's malformed? I don't know. Let's give it another try. Okay, so that was it. It looks, I mean, this is not a problem in general is when we run it through the browser sometimes the escaping of some uh strings creates trouble but it's um in general it's fine so it's imported the the the triples the collection of triples and now we have a graph that we can explore in in neo look at that oh wow yeah let's you know get move this a bit to the left oh there you there you go. We could take a bit more state for this. So look at that. So we have a collection of triples imported. Well, now it's been translated into a property graph, but we have kind of the, as expected, we have a node representing the player, which we can, didn't add to it the the label just have the uri but anyway that's the note representing rafa nadal and we have a number of of things related to it like uh for example that there's nike footwear that's endorsed by uh rafa there's which are mile looks real meal i don't know that but you know there's there's a number of products. So that's pretty cool, I would say. Because we actually, again, it's this idea of using, getting ChatGPT in this case to generate RDS. A context we can explore, right? Like we can grab whatever kind of question we want that we want to dig down to. Exactly. I mean, there's only one one thing that i would question here and it's um you know what happens if i want to because you know this is one one call to these uh maybe i don't know i want to combine this with with information coming from other sources and i would like things to merge nicely right So if I bring additional information about Rafa Nadal or even about, you know, Banco, Sabadell, whatever, or any of these items, I want them to link nicely. And if, you know, in the RDF world, things are identified with URIs. And if you see by default, what ChatGPT has done is generate random URIs. So it's defined a prefix up here that's called example and i think it probably adds it to the comment right not not here in the previous one where it generated so um sometimes it adds a uh comment here we go so the player rafa nadal so let's see if it has an additional information no it doesn't but anyway it's not using any any standard uh so i was wondering if in the comment it mentions the fact that he's come up with this kind of random uh uri but it's not using any kind of standard way of referring to to these things because we could have used, for example, any, I'm thinking of DBpedia or Wikidata IDs. Right, right. Make it simpler. But you might have to ask for that, for it to actually include that. So it looks like by default, it comes up with its own terminology, which is great. And the same with, I don't know if it's using any specific vocabulary so just rdf rdfs so it's using the the you know these example uh vocabulary which is okay because it forms a graph but it would cause you know create difficulties if we then want things to integrate nicely by using the same uris or even using standard standard vocabulary so let's exactly do what you said so i'm going to say can you can you um regenerate the rdf but uh using uh let's use schema.org it's a standard vocabulary and wikidataata IDs for resources. That's now work. Let's see. So it might build something similar, but hopefully, oh, it's still using example. It's, well, but Wikidata. Right, so that's been useful on the... But he's clearly ignored me on the Wikidata thing. I think if you ask it from the beginning with a question as well, it might be different because now it thinks that it needs to reiterate whatever uh reiterate whatever you already know there's one thing that has changed if you see now uh the predicates uh used here are fixed by schema which is one of the uh additional constraints that we added so let's use uh these um vocabulary terms but then the thing that it's still not doing is identifying Rafa Nadal. Okay, so you think we can use the same, I mean, like, copy the question, but... I just add to the... The whole thing? Yeah. Okay. I mean, I'm not sure, but... Yeah, I don't know. It's worth a try. with their Wikidata. Did that say Wikidata? Wikidata. Yeah. Wikidata IDs. Okay. Let's, sorry. That's the wrong answer. Let's see if that improved things. But I also think like chat GPT, no, it doesn't do that. I mean, I think it also has to do with like it tries oh it did it did yeah so it still has the the example but it's now using the id so now that looks like it's a step in the right direction but i'm not sure that the ids would actually be correct because i know that i know that uh okay we can chat gptPT has a way of guesstimating things like that. Exactly. Yeah, because we have to keep in mind that this is kind of machine learning. I mean, I don't know. I'm certain that chatGPT is not loaded with the wiki data. The wiki data is a structural source that we can query with Sparkle, which we have done in the past. But it looks like it can generate some plausible statements. Let's see what they look like. Let me delete the graph. If I empty everything that we've imported, but I kind of re-initialize. Let's keep this time the namespaces because we're going to be using them. So, if I copy this new fragment of RDF, so let's initialize the graph and now let's try to import again. So it was called n10s input in line. It's RDF import, I think. Yeah. Input in line. And we pass the RDF here, pasting it, and then we said that this realization is total. I wonder if we're still going to have the double quotes issue. Looks like we do. Let's try to find it. Invite input as expected. So let's see what that is okay so that's the quote yeah because you know we're inside single quote so if we have any any quotes break it but again this is the that's it so it's uh you know it. But again, this is the... That's it. So it's, you know, the problem of running it in the browser, but it's... Yeah. You can escape it and get it to work, but probably... Yeah, exactly. Copy paste this. But anyway, so the graphs should hopefully look, you know, pretty much the same. So if we look at it, we still, I mean, it's changed the coloring, but now, you know, we have a bit more annotated things. Right. The difference now, let's make this a bit bigger like we were doing before. So we have Rafa Nadal at the center and now we have all the predicates. Everything is prefixed by schema.s.c.h, which is the prefix for schema.org. Now, whether these are real predicates in the vocabulary or they have been made up, we'll find out. So what you were suggesting, Max, is that maybe these IDs that in theory, it's adamant that they are the ones, they might not be. Well, we have an easy way to find out. So we can actually, if we have Rafa Nadal and and we click on this that should take us to rafanadal's page and you're right this is building so yeah clearly not not the not what we were after so well anyway that that's interesting so it's a it's a great observation because you know the yeah that's uh let's keep in mind that this is a machine. I mean, it's a completion end. So it's basically guessing what an ID might look like and it tries to make its best estimate based on that. It would be interesting to know if it would be consistent. So for example, if I come and say, for example, let's, yeah, I mean, for example, this Banco Sabadell. So this is one of the things that one of the brands, one of the products, brands in this case that Nadal is associated with. Maybe he's done say can you provide additional facts about bamco sabadell i would say again uh in rdf and do we have to keep some memory let's let's see what it does i suppose it will will provide it in RDF in the same way and using the same skill. What I'm after... There you go. So it's... I'm trying to find if it's going to try to keep consistency, right? Because this might be wrong, right? If I click on Banco Sabadell, maybe it's going to take it somewhere else. But... And yeah, so football club. Perfect. Yeah. But will it, if I try to enrich my graph with another fragment of RDF, will these IDs be used consistently? It looks like they are, right? This 4680, it keeps Q2 4680. So it looks like it's, yeah, not totally random, but we can... I mean, it's in your context, right? And I think... Exactly. But I wonder if I start another chat session, whether that would be... Yeah, it might be different based on how you phrased your question, I guess. Yeah. And I see that there's some quotes here that we might have trouble with. Hopefully there's more. Let's see. So we are adding this additional bunch of triples. And if everything works fine, now we should have, well, the graph growing, right? So, and that's what added now. So yeah, perfect. So if I, if i so yeah we have banco sabadell and and what is done is well it's added a location which i don't know why it's not coloring it nicely but it should if i do uh style set it should hopefully recolor things or make them more consistent anyway no for some reason it's not uh but well anyway we we can see the content here of the node so maybe it doesn't have uh it has a name but it doesn't have a label right i mean but it's linked things nicely right so it found that sabadell is located in in spain in a place called uh alicante it's a member of Spanish Bankers Association. So it's pretty cool. So it seems to be consistent in the naming of things, which was the objective that we were up to. But of course, the word of caution here is the one that you notice, which is the use of generated... I mean, you could probably use some kind of tasking framework like auto auto what's it called aldrigpt or something like that to make it actually go and look up these things and right yeah exactly so that that's that's the thing yeah because this this is what people call hallucinations right so it's when it's trying to complete to provide something that's plausible and apparently correct. But if we make a proper scrutiny, we realize that these ideas are actually and yes, that's actually an interesting exercise, right? So for example, because we can parse that and we're going to do it in now in integration with APOC, we could take these IDs, generate a request to Wikidata and eventually verify that they are the right thing. I mean, it might be a heavy process and I mean, the whole thing is kind of questionable in a way, right? Because ultimately what we're getting is you know we uh if we're getting wiki data uh fragments we might as well go directly to wiki data and run the queries in sparkle like we've done you know but i i guess the you know but the the assumption here is well first of all you know this this will be in theory i don't know if a super super set of of what's uh what's in in in wikidata and the other thing of course the most important is that uh well we're providing a natural language question so we're asking give me facts about banco sabadell without having to write the sparkle and being even aware of that of that query language so but anyway i think that that kind of um proves the the sort of the first point, right? I'm going to try to maybe bring back the slides that we were using at the beginning. But we've kind of shown how we can run these questions. I mean, in this case, it was ChatGPT, not the API, which we're going to test in a minute. But we've seen how we can have it format the answers in formats that are better structured and that we could eventually consume in a relatively easy and straightforward fashion. And RDF in particular was especially convenient because the combination of Neo-Semantics made it really, really, really easy to ingest. But- You can actually make the graph grow with new questions as well. Exactly. Exactly, exactly. So that's the whole idea. And what I wanted to take it in the next 10ish minutes or so is exploring these new functions in APOC. functions, right? They did the indie in April. And I guess we can do a quick search. A for I don't know if that's going to be a good search. No, that's April. It's probably not indexed by Google yet. It's very recent, isn't it? April new VJ. So if we go to the that would be the documentation. Is this the, that's not the extended one. Yeah, you need to extend the version. OK, extended. That's one. And here we go. That's machine learning and OpenAI access. So for those of you who want to give it a try, you have all the three methods that are currently implemented. So we have one that generates embeddings. So that's something we're not going to touch on today, but it's a very interesting topic as well. So it takes some text and generates an embedding as a collection of numeric numeric values in in forming a vector uh text completion as well where we uh um pass some text and it will provide an answer and um and the one that we're going to be testing today which is kind of replicating what we've just done right so we're gonna have a chat completion api where we give the context might be just uh yeah some context and a question and we will have it uh produce some uh some response for us so that's what we're gonna do I'm going to actually copy this and let's go straight uh here so I'm going to um place that let's run a single test on this one it's going to of course complain because i have to define my key so let me um do that should i put your uh screen off for a second privately here don't worry i'll all right if you give me just uh just one one second i'll i'll take it from uh yeah i could have um actually what you said but okay so I have my API key and oops I think I've lost but we can copy it again so what this is going to do for us if I it's asking it to well you can you can see the actually the way I've done it and in order to make it easier from here on, let's take the prompt part and let's make these a parameter as well. So if I run that, it's going to complain because it doesn't have a prompt. But I'm going to define a parameter, which is the prompt, which will contain the value. In this case, it's just the context. So there's a system, which is the context for for the the way we want the the api to behave i want answers with a single word and the actual question is what planets do humans live on so if i set these as a as a parameter i can now come back to to my uh request which is very simple so as you can can see, it's the APOC ML OpenAI chat. And we pass the prompt and the key that I have set before. And if I run this, it might take a few seconds, but it returns us a structure that contains in the choices section the message, which actually contains the answer, which is the error. So pretty cool. So let's try that. Before that, let's prepare it a bit more. So we actually don't want, I mean, this is informative and it tells us the model that's being used by default and so on and so forth, but we actually really only want, we could say something like uh return or we can return the value just for information but what we really want is value dot um actually dot choices right we said so this is a json structure and we want the choices and in particular the first one because it's i think by default only returns one and we get the message and in particular the first one because it's I think by default only returns one and we get the message and in the message we get the content so if I've made no mistake so it's the value choices the first one message and uh content so if I redo that it will regenerate the request and yeah that's that's the actual content that we care about right good so now let's let's try our our thing right so um and for these i have kind of copied my my prompt here but what i want to do is actually define my own right so i'm going to copy it and it would look something like this which kind of replicates what we were doing before so we have some context uh in using the system role saying provide the answers in rdf serialize as turtle and using schema.org and wiki data for resources and then the actual question which is what are the which are the brands associated with rafa nadal right so that's that's our our prompt i mean we should not expect to get the same results because we're using different model right here by default is using a a different one so probably not it's not going to be as rich as the other one but let's uh let's see what that looks like so um if i uh run it um hopefully we will get something if not identical similar to what we were uh getting before it's taking a bit longer but ideally that would here we go produce oh wow so that's the whole thing but here i'm just selecting um oh that's interesting that's been quite different actually yeah because it's producing a sparkle query which is not really rdf that's uh you know being creative here or maybe it's uh oh it's generating a sparkle query followed by the answer but that's not what i want i want rdf serialized data so maybe we should adjust the prompt maybe you know we should say something i realized you could you could say like you are uh i guess you could say like you are uh yeah you're a triple store that answers only in our rdf and something like that would that be like i'm not super familiar with that but should i use the system role for that yeah exactly so the system role is basically um you can set up different roles for it to interact us. So you can say that it's an assistant or it's a... We pull us, we pull us, but no sparkle. Where is something like that? You can say only respond in RDF or something like that. Only respond in rdf or something like that only respond in rdf um use uh turtle civilization civilization and using schema.org and wikidata for use i don't know let's give that another try and see what happens now oh this is setting the prompt and now i have to call again um the actual method that will take that prompt and um and return the results that looks better but uh yeah it's a bit less rich than and i would expect something similar to what we saw before right so if we check these this is probably not going to be raffen at all this killing me not discography there you go so uh did you get uh other choices back or uh i don't think so normally i think by default it returns only the i don't think so. Normally, I think by default, it returns only the first. But anyway, in this case, what I wanted to show is what the integration would look like. So we understand that it's not perfect, and there would be ways. Actually, it's not perfect. It's far from perfect here. You have to engineer your prompt to give you back the exact kind of data that you want. Exactly. So maybe, I don't know would it be should we give some hints like the the the id but it didn't i guess that defeats the purpose right if i have to tell him uh rafa nadal's id is this one by the way and and then but still the brands that it connects to it would be um will be still still broken but i mean i mean you could do that if you have pre-existing data let's say you have an a graph right in your database that you can pull in that and say hey i want to know more about this id or something like that yeah yeah that that's that's a good point so instead of of it being a fully natural language question, it's driven by my, let's say I have some notes in my graph that are curated and I know that are using the right IDs. And I would say, hey, for this ID, can you provide representing Ruffin at all? Can you provide me additional facts? And then it would probably enrich it in a more meaningful way. Okay. So, but the good thing is that the use of schema.org seems to be more consistent. So if I click on that, that should take us, hopefully, to the definition of the plugin. So that's right. So this is using the right... So the use of the. So that's right. So this is using the right. So the use of vocabulary terms seems to be consistent. So that's good. And can you perhaps make it so that you get more properties and not just like the. Yeah, it's hard to tell. I mean, but anyway, well, let's look at what this integration could look like. But you know, that's so we're seeing that this is actually returning us a collection of triples. And when we get a collection of triples, I mean, we can instead of returning it, we can say with these as RDF payload, let's call it we can now call as we were saying before I can call my semantics RDF this RDF payload. Right. Indicating that it comes in turtle format, right? So that's the reason. Actually, if instead of importing it, I'm doing a preview. That should be good enough. And that would show without writing to the database it would return the nodes and relationships that it produces and uh and we can return everything right like this we get we get the whole context we get the value we get the the rdf that we're getting and we can debug if it doesn't make sense or if it doesn't do what we expect it to. But basically here's where we are chaining in... Of course, there's some wrong... What is it complaining about? It's column on the command. The problem is because it's thrown an exception, we't tell if this is should we expect it to be consistent i mean it's the fact that it has produced this before if we write it again it might be yeah so yeah i guess what we're learning from this session is that it's uh you know still far from being in in a you know in a place where we can use it to really i mean it does a number of things oh look at that. It's now producing something different, including actual natural language. Right. Okay. I think like with the API, usually you can set like parameters that kind of gauge how much it should, like the tolerance or like how much it should um um like the tolerance or like how much it should uh yeah yes and not guess so stuff like that right right right okay but look at that because now the thing is we said that it could only return in theory uh rdf and it seems to be adding this context where it's kind of explaining i mean right we could eventually but you know i guess because you can tell it to not return anything else but i think that's what we put is that not what we because we said you're yeah but responding rdf or and do not and do not respond without anything else that's an rdf we try this and and but um yeah i suppose here we're in kind of prompt engineering territory right where we have to kind of fit it until it returns exactly there you go so that's there you go it's a bit more like what we were getting before but i'm tempted to run it one more time because it you know that's what we got before but this time i guess this is never going to be consistent because it's you know it's not there you go so that's i i don't know but it depends on i, I think, the tolerance that you have. Or maybe it's temperature. I don't remember if it's temperature or tolerance. But even, yeah, because even the, like, here is generating associated with statements, whereas here is generating brand statements. And then, yeah, the IDs, they're entirely different, right? I think you have to kind of give, I mean, if you were to do the system command, you can engineer that a lot more. Like, you can tell it, like, and give it examples how you want it to kind of output. So you can give it, like, two or it examples how you want it to kind of output so you can give it like two or three examples and then say like okay okay i see i see so it's there's there's more work to be done and still doesn't like it so i don't know if you know what flavor of it it's returning but you know in theory if we were sure that um and um you know i was being a bit optimistic on how consistent or how predictable the RDF being generated. Oh, and that's still different again. But this looks like something that NeoSemantic should have been able to process. I wonder if, of course, we don't know because it's throwing an exception but if we did a antennas rdf preview in line and we paste it and with a turtle station not liking it so there's uh illegal carriage return it's interesting maybe you have like uh the quote here no that's that makes sense because i had it was my fault uh not removing the quotes here is that it there you go so now now it's um it's better so if it was that so it has what rougher well i suppose something that would have been rougher but probably will not be because again this is yeah in somewhere in ireland so not not rough at all but yeah the same same idea so well yeah so some some clear prompt engineering to be to be done here but once once we we get that right i mean we should be able to to chain the yeah as we were seeing here the the the the generation of the of the rdf directly from the api and then the input of the same rdf or at least the preview of the same rdf in um you know in in basically two lines of cypher but um but well yeah it's uh of wishful thinking so i have to be doing i was you know honest i was curious about because you know we there's been a lot of of uh uh discussion and and comments on on how the two you know how close the two are, how close the two should be. And I was thinking maybe it would be worth exploring how easy it is effectively to bring data from these API or from the chat GPT directly into Neo. And I guess, well, we've got a um i think i think if you add lookups and stuff like that where when when you can stop it from doing like hallucinations and stuff like that and have uh like system prompts that are like yeah i guess some people give you the rules of like it has to follow like it has to be a complete uh has to be a complete rdf statement yeah exactly so some some validations here like uh well first make sure that we get it to consistently produce just rdf and not text uh descriptions which that seems to be okay maybe there's some escaping or something like that yeah And then some, yeah, like you say, some validation of the URIs being used, especially if you want to use standards like the Wikidata ones. And then eventually if those validations pass, then you would be in a position to ingest it. But well, that's what we had for today let me stop sharing and maybe I don't know about any any questions from the attendees if not we can probably bring and bring it here and so, I think we have one question. I could put it up on screen here, but I'm not sure if that is. Okay, now it's a relationship. It's all fine. Oh, wow. Well, that seems to be an unrelated thing, right? Because this is about importing the data. So yes, I mean, you can... Yeah. Other than that, I don't think we have much more questions today. So there's... Oh, there you go. There's another one there on generating Sparkle queries with OpenAI and use that to query Wicked Data would be cool. That's a great one, but I'm tempted to think, and let's use two more minutes to test that quickly. Yeah, sure. In the same way it's managed to, let me present again. Why not? Yeah, I mean, so it would be like you would ask it to make a Wikidata or a Sparkle query. Exactly, a Sparkle. Can you see my screen again? Yep. So if, do I still have my, I should have my, here we go, that's my chat so if i instead of um i can hide that instead of uh asking it to produce rdf can you generate a sparkle theory um for or just to answer this question or something like that. We get the, I don't know, the, the, we said the brands, the products, excuse me, the commercial product. I think you could just ask like to answer this question and then give it a question. Let's try this one and see what he comes up with. Spark query, good, that's promising, but here we go. So that's, okay, so here it's using another ID for Rafa Nadal, hopefully that's the right one. And then it's um yeah well um i would guess that the ids are hallucinated here as well but i would not be surprised if that's the case but there's only one way to find out right yeah we go to wikidata and we paste that query as it is so this thing uh this is not rafanadal unfortunately yeah so this and um oh this is a this is not rough at all, unfortunately. So this is really rough. And, oh, this is a national library in Spain. Yeah, so the ideas seem to be... It's interesting because it seems to be decent at the predicates. Like, for example, it understands that it's looking for a hierarchy, for instance of, or so class of of but then it gets it completely wrong when it looks at things like products or or you know individuals like uh so i would not be surprised if this does not return any any answers because you know that's uh so very very good question but it looks like again it suffers from the same type of problem, right? So it still doesn't seem to be aware of these. Are you able to anchor these queries maybe in some way without the IDs? Like maybe look for a name or something like that? Or do they have to be anchored? No, it could be. I mean, can can you i would say but search by name instead of my id suppose the performance would be as good but still that could that's a good idea so let's copy this code so we copied it and if i come here and we so this is looking for a person with that name and right but again it's using a bad predicate so that's never going to work so let me let me comment that out so if i comment that and i said that this pro so oh let's if we get rid of all that we could uh and so but it's already changing quite a bit to be honest if i run that it doesn't now it doesn't find it so rough and adult it's probably not the label so it's not even finding a thing that has status label so maybe that would be there the name where you would use a different maybe rafael at all any chance there you go so maybe this guy is him that's it so so yeah it can be done as you can see max but it's still not generating something so so great great question and great idea but it looks like um still again suffering from the same type of of limitation I guess, in this context. Yeah, I think it needs to kind of validate that data in some sense if you want to pull it in and have it be specific with the right IDs and all of that. Yeah, yeah, cool. Anyway, that was an interesting one. Yeah, I think it was really interesting. And it's good to see that you're able to actually make these queries as well in the in the database that's that's right yeah good stuff okay well thank you then everyone all right thank you thanks max for for uh sending in this time exactly yeah all right thank you everybody goodbye cheers have a great evening everyone | Going Meta - Ep 17: RDF-ing between OpenAI and Neo4j | 3,526 | Neo4j | 20230602 | Episode 17 of Going Meta - RDF-ing between OpenAI and Neo4j
Jesús Barrasa: https://twitter.com/BarrasaDV
Repository: https://github.com/jbarrasa/goingmeta
Check out https://community.neo4j.com/ for questions and discussions around Neo4j
#neo4j #graphdatabase #semantic #knowledgegraph #rdf #semantics | 2024-10-21T12:50:25.630373 |
https://www.youtube.com/watch?v=IaGfOhZX4-k | Welcome back graph enthusiasts to graphstuff.fm, a podcast all about graphs and graph related technologies. I'm your host, Jennifer Reif, and I am joined today by fellow advocate, Jason Kuh. Hello everyone. And our special guest today, Paco Nathan, who is a principal developer relations engineer at senzing.com and listed as an evil mad scientist on LinkedIn. Thank you very much. So glad to join you, Jennifer and Jason. So just a little bit of background on Paco. He works a lot within developer communities and tech events and tries to provide many pointers to learning materials, industry analysis, and so on, and connect with many people. He presents he presents talks workshops and hands-on tutorials about entity resolved knowledge graph practices and the related ai use cases which are downstream so thank you so much for joining us paco do you want to start just a little bit with how you got into neo4j yeah yeah no great you know i've i've been in this space of like well just to kind of back up and give history back to the dark ages. I went to grad school back in the eighties, started grad school in early eighties in AI. And while I was there, I had a lab partner while I was at IBM research on the project. I had a lab partner who just talk, talk, talk, talk, talk about neural networks. And she really got me interested. So in the mid eighties, I got involved in doing work in neural networks, but I was also already doing work in machine learning and natural language. And I mean, for a long time, nobody really cared about machine learning. So I went off and did like network engineering and other kinds of data management. But eventually I got into doing graphs because of natural language, like for natural language understanding. And I've been involved in some, some open source projects that have been used a lot in that area. And earlier this year, I had friends at two companies. So Philip, CTO of Neo4j and also Jeff Jonas, who's the founder of Sensing and Philip and Jeff wanted to do, you know, something together. And I was like a common, a person with a foot in both camps. And so I signed up to do a tutorial and it was really fun. I got to work with ABK on the Neo side. We took data, mostly open data that was about businesses in Las Vegas during the pandemic. There was the PPP loans that the government did. And then there was the forgiveness program for the loans. And there were some businesses, I'm not going to point any fingers or mention names, but there's some businesses who registered multiple times to get the loans. And so we were looking at how to use graphs to identify fraud, PPP fraud, and just really focusing on that one geo area, because it made loading data quicker, things like that. But even just within that area, I think we had about an 80,000 node graph. And by using entity resolution, so sensing, and then organizing this and visualizing it and drilling down in Neo, you could show right away, it's like, wow, there's a really big cluster over there. Let's go find out what's going on. Oh my goodness. There's a business and they have an address at like a residential address or mailbox, et cetera. And there's like, you know, 50 businesses that are all registered right there. And they're doing, in some cases it was really egregious, like where you'd see, yeah, yeah, we do veterinary clinic, but we also do massage therapy. And yeah, so graphs are really useful for catching bad guys. That's a lot of where I'm at. Okay, cool. That's super exciting. Did you put that, so that project, sorry, I'm like, have like three thoughts going at the same time. So is that project available publicly for folks to kind of jump in and kind of explore as well? Yeah, you bet. I'll put it in the show notes. It was, it actually ran on the Neo4j blog. And so we've got the code all up on, you bet. I'll put it in the show notes. It actually ran on the Neo4j blog. And so we've got the code all up on GitHub. It takes a little over a half hour to run through, like start to finish. And we're showing how to do all the steps. I was using Graph Data Science Library a whole lot, but then going and doing the visualization inside Neo. And so basically get all your data prepared and then inside the graph database and then do your visualizations. And then inside of Neo, you can start to constrain what you're visualizing and start to slice the data and just really zero in. The thing that I think is really startling with this though, is when you look at loading up all these records of here's businesses and here's PPP loans and here's like how many employees are in each business and how many complaints I've had for employment, blah, blah, blah. You know, it's just a big scattered mess. But once you start to connect it up and build a knowledge graph out of it, once you start to bring in sort of the connective tissue, then the real things you need to look at start to pop out right away. You can start to see those clusters in the graph. Nice. It was it difficult to get the initial data that all that they do compile together? You know, that's a really interesting question about that here. I'm just looking in the show notes. You know, what we did was we took data from a safe graph. They do a commercial places directory of businesses all over the world. And so they opened up a slice. So it's open to the public. It's just about lost, I guess. And then we also had data from federal government. So we had some from the department of labor and then some from the U S chamber of commerce. So you know, getting that in and basically I just, you know, put it in a Jason and then some from the US Chamber of Commerce. So getting that in and basically I just put it in a JSON and then I was loading in many batches in Neo4j. Getting that into shape, no problem, but you really do want to analyze your data on the way in for this. When you're going to be building graph, it's probably best to do a little bit of descriptive stats about all the different columns, how much do certain items repeat? Are there any data quality problems like something that's supposed to be a unique identifier, but it's showing up many times? So you go through and you do some work there, but then sensing what's going on is we take PII features, especially connecting features. So we need to have two or more features across two or more data sets. Then we can make determinations on which records can be merged into consistent entities. The entities may be people, they may be businesses, they may be maritime vessels, but these end up being nodes. Moreover, they're a overlay. So like you take the evidence you have and you put it into Neo and those are your data records. And then you, you peel off the PII features and run it through sensing. And you come back with this kind of overlay that organizes it. And it says, here's an entity. And under this entity, here are connections to like five different data records in the graph but here's like you know here's bob r smith jr at 101 main in las vegas but then there's another record that's like bob smith in las vegas we don't know a whole bunch but we think it might be the same person and so if you're doing an investigation those sort of probabilistic links are really important of like hey we're we don't have enough evidence put together to make a decision right now, but here's one you might want to check out. What you do is you end up getting this overlay of nodes, relations, and properties that you build out on top of your graph to organize it. You still got the evidence, the raw data underneath, and then you can start doing analytics. Looking at centrality is a really good thing because it figures out like which nodes are more connected than others. Um, other things too. I mean, if you're looking at other use cases, maybe you're doing like rapid movement of funds or other types of any money laundering. And there's definitely graph patterns you want to look for. It's a nice blend of using entity resolution and knowledge graphs. Right. And so if, you know, other people want to kind of replicate the sort of work, maybe in different cities and stuff, how do you suggest they get started with this? Do they start with the source data? Should they work with the taxonomy first, the ontology? Yeah, that's a great point. In what I do in DevRel, AdSensing, in the Knowledge Graph practice, I am working with other consultants, other experts in this field. We're developing open source tutorials. So a couple of people I'm working with right now, Claire Sullivan, who used to be at Neo, and also Louis Vuitton, who's also done work with Neo4j, a lot of open source work. And we're building out tutorials showing different kinds of data. So maybe different cities or other kinds of use cases. Claire took the one that I had done in Las Vegas and then she built on top of it using Langchain integration with Neo. And so built out a chat bot so you can explore about fraud patterns in this case. But I think the larger thing there is there really are a lot of open data providers. So there's open sanctions that has sanctions watch lists of known people who've done some egregious things in the past, and we're going to watch them in the future. Or maybe, you know, businesses that have been caught engaging in illegal practices, or people who are just likely suspects, like they're in a position where they could be bribed and a lot of people in their position are getting bribed. So let's just keep track. So open sanctions is really cool. It's out of Berlin, Prudik, Lindenburg. But then there's also the kind of connective data, like open ownership, Steven Abbott, Pew out of open ownership in London. And that has a lot more links. So I can say, here's a person, they're a director of the following companies, these companies have owners elsewhere, on and on. That's really what you build up the relationships in the graph. And then sometimes, if you're lucky, you can get a hold of like event data. So for instance, there was a PPP loan, I got that from the federal government. Or in the case of, have you ever seen a Netflix movie called The Laundromat? No, I think I've seen it advertised, but. It's pretty cool. I mean, it's kind of funny. It's humorous. It's actually a really serious topic. Have you ever heard of like the Panama Papers? Actually, yeah, really good news case. So the idea is that there's this case. We still don't know all the facts it happened about 12 years ago but the deal was about three billion dollars are known to have been moved out in illegal means through money laundering and it went out and there was a whole bunch of well there were russian weapons illegal weapons trade there's a whole bunch of like people in the country who ended up buying condos in miami or you know buying luxury items and stuff there are a whole bunch of eu officials who were bribed and probably nobody would have known a whole lot except for somebody at the bank somebody at densk bank in estonia leaked records of like 17 000 records records of money transfers. And so this is actually showing the crime, right? Because you're seeing what's getting transferred from whom to whom. And so this idea of like, you have lists of data like open sanctions that provide risk, talk about whose potential to get bribed or who's known to have done something wrong. And then you have the link data, which gives you the relations. And then maybe you have an event data that says, oh, actually here was a money transfer. And if you can take those three, you can build a graph and show, even though the bad guys are trying to hide everything behind a network, like the money goes through maybe five levels of indirection with offshore accounts and different corporations. And then somebody buys a condo in Manhattan for cash. You can still trace back. And so leveraging a graph to be able to, you know, find those connections and identify that, you know, maybe in EU, you're only allowed to transfer so much money from one entity to another within a certain time period before you file certain paperwork or else, you know, you're probably doing money laundering. In this case, what they were doing is splitting up the transactions in smaller pieces, funneling them through a lot of offshore companies, and then eventually they hit the target. Long story short, there's been a lot of investigative journalism about it. There's been books written about it. There's a movie about it, et cetera. And 12 years later, absolutely nobody's been arrested. There's even like one of the diplomat, one of the politicos in Italy was like, oh yeah, I did the bribes. I took the bribes. I'm never going to go to jail. So, you know, it's, it's really interesting, but this area here, you can get a lot of data from open data sources. Sometimes you can get leaked data from like a bank or a law firm where people know where the bodies are buried and they just are trying to do the right thing. There's a couple of organizations like OCCPR and ICIJ, which take leaked data. And then we'll, we'll put it into format so that we can build graphs. So that investigative journalists and, and you know, officials, government agencies and whatnot can work with it. So it's kind of a long answer, but I hope it gives a little bit of flavor of a lot of what's going on there. That's a lot of our customer base too. Nice. No, thank you. That was a very comprehensive, like, you know, soup to nuts kind of overview, especially like talking about the Panama Papers, which was like my lead into Neo4j. It's a great, fascinating story, right? Oh yeah. Yeah. So much data. Yeah. There was a great book, I think written by two of the journalists or two of the data center folks. I forget the name of it, but it was a great audio book as well, too. Yeah, no, it's really interesting. I I've been doing a series of talks called catching bad guys using open data and open models, talking about how to really build graphs and then use what's emerging in terms of AI tools downstream from knowledge, whether you're building up, you know, GraphRag or you're working, building up agents, or you're doing other types of analytics and graph machine learning, you really got to get the graph data first and got to get it right. And in that talk, I'm showing, yeah, there's, there's like five books that we're tracking. There's one called Moneyland from Oliver Bullough that I highly recommend. He was one of the journalists tracking all this stuff. And he had gotten out of, I think, Cambridge and then right around the time that the Soviet Union collapsed. And so he immediately went to Eastern Europe and started traveling around and stayed embedded there as a reporter for many years. And then it's just really fascinating to see what ICIJ has done. Panama papers, but they've also done paradise papers, Bahama papers. There was one about Cyprus. It's very recent. There's a lot of these different studies. And again, it comes back to somebody decided to do the right thing and they leaked the records. Right. You mentioned the movie laundromat earlier. Is that kind of a similar story to Panama Papers? Yeah, it is. I mean, what they did was they took a lot of different scenarios of what's going on. So if you read Autocracy or you read Moneyland or any of these books that go into a lot of detail, the Laundromat is sort of the condensed crypt notes where they follow like a few people and everything bad happens to these people. And they go off and investigate it personally flying around the world and it's like that has never happened to any individual except for maybe oliver below or bill broder but the fact is that it gives you vignettes of what's really happening and i mentioned about the azerbaijani laundromat nobody's ever gone to jail we have 17 000 leaked records and we know that almost $3 billion went through money laundering. That may not be all of it. I mean, there was probably a lot more. And so when you look at dark money moving around the world and how much it's adjacent to illegal weapons, human trafficking, even illegal fishing and illegal lumbering is very closely adjacent. It's often very much entangled. But certainly a lot of bribery. And frankly, why is it so expensive to live in San Francisco, New York, Miami, London? Well, a lot of the illegal money is flooding in and like paying cash for prime property. So, I mean, there's just a lot of things going on in the world. Also illegal political influence, like in various countries where, you know, there are attempts to like overthrow the government, that kind of thing. You look at the impact of what's happening with dark money, and it's really incredible. On the other side, there's been a lot of moves toward making the data more open. So for instance, after the 2009 global financial crisis, there was legislation that went across worldwide in governments where if you're a corporate entity and you're engaging in certain types of financial trades, you have to have a registration for it. You have to have a unique identifier. There's an organization called GLEIF, which does global LEI identifiers. And you absolutely must, if you're doing those kinds of derivatives that led to the 2009 crisis. Now there's other types of illegal, you know, fin crime that you could be engaged in and no, no problem, go right ahead. We don't need to know. But that's one case at least where some parts of this are tied down in open data. And there's something else that's rolling out more recently called UBO, which is ultimate beneficial owner. So if there is a company, you have to disclose who are the owners. In California, actually this just went in. I have a company that my consulting firm that I was doing before I joined Senzing. It's now a law if you don't disclose the ownership of it as of this tax year, it's a felony. And so understanding the chain of ownership is really important. That hasn't been transparent before, but now it's becoming transparent. And ubo data collection protocols are going out across the world different countries are doing it some more or less like eu is pretty good about it but then malta is like oh no we're not going to disclose that we'll crack down on auditing but we're not going to disclose who wants what so it's still kind of a haphazard landscape but that's one thing that the people behind open ownership have really been pushing like UBO standards across the world, having open standards for the data, but also pushing the policy and really going out and doing dev rel for government policy, if you will. Which is great. I think we need more of that, right? Yeah, absolutely. I mean, there's really bad stuff. Have you ever heard of something called speculative libel? No, what is that? Oliver Blow talks about it in Moneyland, and I apologize, I'm not an attorney, and so I hope I don't mangle this. But the idea is, in the UK, there are laws that if you are investigating some criminal behavior, and you have data and you're going after the bad guys, if they catch wind of it, they can take you to court beforehand, right? And you have to defend yourself. And those kinds of defenses can be expensive. The case of Bill Browder, he's a U.S. entrepreneur who is in Russia, and he became quite a critic against the Kremlin. And, you know, Browder's like a billionaire, right? He could afford to drop a half, whatever, half million dollars to defend a case. But the idea is the oligarchs who are moving out into London have recognized that these laws are on the books. So if anybody's going to try to investigate them, they'll just put up lawsuits against them. And in some cases, they also have gone out to like small island nations and like, here's $5 million for your cancer research institute. Oh, thank you very much. You've made me an honorary diplomat. So, you know, now with diplomatic status, if anybody takes you to court in UK, the crown will go up to the judges and say, no, we can't do this. They've got diplomatic status. If we charge them criminally, then we're going to get killed on tariffs from Belgium. So just reverse the this decision so speculative libel and some of the diplomatic community it gets really dicey because people are trying to do the right thing you might even end up going to jail just because you're trying to do the right thing is that like is that a recent flaw or is that just like a recent reinterpretation existing no i think it's recent reinterpretation i believe that there are laws that are on the books that date back way far back in the day you couldn't say something bad about the king, right? That would be dangerous. If it's okay to transition. Now you've got a talk coming up for nodes next week. And you talked about a lot of entity stuff, but can you give the audience more of a preview of what you'll be covering in that talk? You bet. You bet. So we're going to talk about any resolution. It's a hard problem and it's so important in graphs, especially in these kind of regulated environments, whether you're talking about fin crime or, you know, sanctions or any, any kind of case where there's some sort of behavior that you have to spot. And usually there are people who are trying to hide behind a network. They're trying to hide because they've got a lot of offshore corps. So the problem is if you just load up all your data and try to connect it in a graph, oh, that's great. If you're working in one language and all the data is clean and you have unique identifiers for every node, awesome. That's great. Knock yourselves out. If you have like a marketing email list and all the records are identified by email addresses, doing some kind of deduplication on that is really go for it. But the thing is if you have a name of a person and it's in Arabic and in Arabic, there are cultural conventions like the name changes because you may mention your parentage, your father, the name changes because you may have made a pilgrimage to Mecca. So then the word Hodge comes in. And there's a lot of ways to abbreviate, but then there's a lot of ways to transliterate. So you might have a data set that has somebody's name in Arabic, but then it's transliterated into Italian. And there might be five different ways to transliterate into Italian. Trying to do string comparisons isn't going to work. And when you look across the world at these kinds of like passport control and voter registration and a lot of things, you look at a world where there's a lot of dirty data and there's a lot of bad actors trying to hide behind networks and you're trying to work with it, entity resolution becomes very difficult. There's a lot of edge cases and the edge cases are actually where the real big problems are. And so for instance, we do work where we can handle data that's in European languages, but then also handle data that is in Korean or Mandarin or Russian, Cyrillic or Arabic, or these days, even like Burmese and Khmer. And so, you know, if there's, if's an agency in singapore and they're having to look at like maritime traffic where there might be weapons traffic coming through right offshore and the manifests are written in all these different languages and all these different transliterations how do you make sense of it or if you're familiar have you all ever been to singapore by the way it's a really cool area because there's this confluence of all these cultures, but like you try to find addresses and it's nerve wracking, right? And so you look at addresses in Singapore and the name will be partly a business name, something that has more of like an Indian background and then the street name that has more of a Chinese background. And you look at the abbreviations and of course, Singapore has like super high density, so you go into a mall and there might be hundreds of businesses at that street address, but they all have a different number. And so you try and use things like string distance, string edit distance to compare records, and it just falls apart. Because there are so many cases where there'll be hundreds of businesses that are basically one or two letters off. And then there'll be a whole bunch of addresses that are exactly the same but they're like completely different representations so trying to show these kinds of use cases why entity resolution is hard you have to understand the cultural parts of it you have to understand the edge cases i work on a team where most of the core engineering people have been together for 20 years or more and our technology gets used for the majority of voter registration in the US. It gets used for like passport control and counterterrorism and a lot of fin crime work. And you know, these people have been handling these kinds of ed cases for a long time. By the way, another movie. Did you ever see a movie? It was about mathematician math grad students at MIT. And they went to Vegas, and they were doing this scam where they were card counting at the tables. Oh, yes. And the team I work with, they actually were part of the bad guys in that. They were part of the technology that caught them. Oh, wow. So that's taking it back a few years. So we're going to talk about like, what can we do with data to leverage entity resolution and have this idea of generating. And a lot of it is in contrast to the larger dialogue that's going on right now. There's a lot of cases that say, hey, just put all of your data through an LLM, the LLM will generate your graph and then you'll do graph rag downstream from that. And I mean, that's cool. You can do really cool demos with that. You can do really good applications with that. But if you're in a regulated environment where you're developing probable cause graphs that have to go in front of a judge to be able to get a warrant, or maybe like a SWAT team is being deployed, or maybe you're trying to get an indictment, or other things that these are really big problems and they require a lot of accountability because if you don't have the evidence, nothing's going to happen. In these cases, you can't just throw all the data into an LLM and say, okay, we're going to take whatever we get. OpenAI, knock yourself out. Instead, you really have to be, you have to be mindful about how you're handling the evidence. And so the idea that we present is you start out with maybe you have some ontology or taxonomy that you're required to use. Like there's the follow the money ontology for a lot of fin crime. There's various NIST vocabularies that get used in some of these areas. So really understand like what kind of schema do you have to use because the use case and then take your structured data and run entity resolute, well load up your structured data and then peel off your PII features, run entity resolution to figure out how to merge and then identify the entities and the relations and the properties about them and use that as kind of a backbone to build your graph. And then secondly, what we find is a lot of these use cases, there's kind of a Pareto rule, like maybe an organization has 20% structured data, but 80% unstructured data. That's pretty typical. So you're building these graphs. You might have a lot of data about sanctions and ownership and whatnot, but you probably want to bring in maybe some news stories or some log files or something unstructured, right? Maybe the shipping manifests. And this is a common problem, whether we're talking about investigative graphs or even trying to untangle supply networks. And you might know that your shipping container is going from point A to point B, but there might be something happening in that part of the world where the ship is currently that's going to cause it to be derailed for two months. That's a thing that happens a lot, like in supply networks. So how do you build up the graph, like we've mentioned from schema, and then structured data and entity resolution? Now, the next thing is you want to bring in unstructured data. What we'll show is how to build up a really high quality pipeline where you're using state-of-the-art models for the NLP work, whether you're doing your parse, doing your named entity recognition, doing your relation extraction, but most importantly, how do you do the entity linking? Because that's where you take the nodes and relations that are extracted from your unstructured data. You want to bring them into your graph and be very aware of the context of your domain, right? If I see the abbreviation NLP, that might mean natural language processing. If I'm talking about machine learning, it might mean neuro-linguistic programming. If I'm talking about psychology, given my domain, what are the words, even in a corporate sense, if you use the word consideration, that means something very different in HR versus contract law, right? And if you can't really distinguish the meanings from your unstructured data, then when you go to link stuff into your graph, you get in really big trouble. What we're doing is taking the results out of any resolution after you've run on your structured data with your schema, you take the results from any resolution, you use that to customize an entity linker. And so now you can do domain aware entity linking and be using state-of-the-art models like Kleiner and others to do your named entity, any argument inAR recognition. You can start to pull the pieces from the structured part and the unstructured part together. Once you've got that graph, then your next step is to go into GraphRag or building up systems of agents or maybe doing some type of graph analytics to really identify common patterns. How can we do the AI apps downstream? That's the gist of the talk. Sorry, long-winded, but I hope that gives some flavor for it. No, that's, that's perfect. The comments you made about starting with structured data as kind of the backbone is a topic you went in depth in your graph power hour, I think from last month. Yeah. Yeah. Yeah. Awesome. Thanks. So yeah, I totally write. So put that in the notes as well. I definitely recommend anyone check that out because you also covered the various graph rag techniques that are available now like different design patterns so actually just real quickly from that is there like a best or most promising graph rag design pattern that's kind of interests you the most oh that's you know so one of the things we were saying is that graph rag is a pretty big word. The graph in graph rag, graph probably gets used. I think we've counted eight different interpretations for a graph. So you might be talking about like building a graph out of the embeddings of the text chunks, which is one thing you can do. You might talk about having built a graph and using the prompt to generate a cycle query. You might be talking about doing a lexical graph, on and on and on. We've identified, like I said, multiple meanings for the word graph, if you look at GraphRag literature. And out of all those, I really use a lot of the approach of where we'll take the text chunks from your documents and we'll put them through an embedding model and you put them into your vector store. Great. We'll also parse those. And then from the parse, we build up a text graph, which is a nice structured way to leverage state-of-the-art models for NER, relation extraction, and linking. And then once you've got this kind of like staging area of your text graph, now you can run some, maybe some graph analytics, like centrality measures to see what's most connected. What's probably the thing that's really being talked about in the text chunk. And then from there, use entity linking to link back into a graph that you're collecting everything across all of your documents. That's the approach that I'm using most, which is kind of like lexical graph, but sort of lexical graph plus plus. But I will give a shout out, Tmash Botanik did some posts a couple of weeks ago about a thing called GraphReader, which is really interesting where you're using the graph to come up with basically a notebook of not just facts, but sort of like statements, like logical statements about things that have been extracted from your documents. And once you've got this notebook, then you can use an AI application, some type of agent to sort of treat the notebook of these statements as kind of hypotheses and start chaining them together in ways where you can actually say something about it. I'll give a shout out about GraphReader and the integration with Neo4j and I think it was Langchain because I'm old. I have white hair. I studied AI at Stanford in the early eighties. And back then there was all this work on expert systems. You may have heard, you know, some of that back there. There was, there was something, there was a project from like 1972 to 1979 called Hearsay 2. It was an expert system that did work on sort of discovery with Blackboard architectures. It's really fascinating to me because when you look at what Tomáš was showing with GraphReader today to use with LLMs, if you just change out a couple of phrases, it's very similar to these kind of Blackboard architectures that we had back in the 70s and 80s. And there was a lot of articles back then by Barbara Hayes-Roth and Penny Mee and some of the other AI experts back in the 80s who were really showing what you could do. My graduate advisor was Doug Lennett at Stanford, and he did a lot with Blackboard architectures with the URISCO project and others. And arguably what we were working on back then were sort of generative approaches using Blackboard architectures. And now here we are 40 years later doing generative approaches with language models also, but much better language models. I'll say that much better compute. So I won't say that it's all reduplicated, but it's sort of like we're revisiting parts of the past and I'm really glad to see that. It's great that we can take things that started from before and kind of like come back to and go like, oh wait, we can now combine these in new ways. Or now we have the technology to really leverage some of the initial thoughts that came up way early. Yeah. And I think it's a good reminder too, that this stuff, most of it, at least the foundations of it is much older than what we give credit. You know, the AI wave last year, maybe two years, if you will, but realizing that all of this study and this research actually happened potential decades before that this is not necessarily net new. It's just, we're applying new technologies to existing problems that now we can help solve in new ways. Yeah, so very well said. When you look at reinforcement learning, there's incredible things going on there, but it's really based off of optimal control theory, which is like from the 1950s, you know, building airframes that are resilient. And the difference is now we have deep learning, so we can learn and build policies that are enormous, you know, that have billions or trillions of points in them. Whereas before optimal control theory was pretty constrained by what you could fit on the hardware that was in the aircraft. To that point, I'll shout out, I think in the graph space, I see something really interesting happening that it's not necessarily all about LLMs. I mean, language models are very important. Don't get me wrong. And I come from a language background. I love this, but I'll shout out a friend of mine, Urbanshi Mitra. She's at USC, University of Southern California, and she leads work in the optimization lab there, really interesting work. I saw her present at a conference at Stanford a couple of months ago where they take graphs, and then they build out causal relationships in the graph and study causality. And they look at like sub graphs, and they're using reinforcement learning to optimize the causality. And then you can start to explore like, I want to throw counterfactuals, or I want to change some sort of intervention. So I start to play with the graph to figure out what's my policy really doing? What is my plan? How could I shoot this big corporate plan for a merger and acquisition? How could I shoot it apart, right? By using some AI technology. And it's not all about LLMs. It's about bringing in stuff like reinforcement learning and causal graphs and other things. And two of the takeaways there are number one, graphs are at the center of this. And number two, we're reusing a lot of math that comes from decades ago. But then number three, the cool thing about it, what Urbanchi's team is showing is they're using it to understand the interactions of like systems of multiple agents, like AI agents. So can we build a graph to describe how in a multi-agent environment, what are the outcomes? You know, where's the biases? How can we try to steer this? I'm really hopeful of like seeing a lot of these things come out from the wings and really having graphs be the centerpiece for integration. Graphs connecting decades of technology together. Well said, well said. It's going gonna be a big graph well jen i feel like our show notes for this this episode is going to be incredibly long we have lots and lots of links yeah um uh kind of riffing off of that we'll add a few more links if we could talk about like what type of technologies everyone's been playing with this last month and would like to recommend to the community paco would you like to share first you bet there's a new tutorial that we're just finishing up now the code is up on github in fact there is a new library the first release is in pypy and this is about using the results of entity resolution to customize entity linking so that you can be using the context of your domain. So like we said, you know, start with some structured data and do your entity resolution. You can understand the definitions. You can almost think of it as like building a thesaurus. And then once you do that, now you can go out and use tools like spaCy to go out and parse, here's your hundred million documents for unstructured data, but we're using the results of entity resolution to steer the entity linker to make the right choices in context. And there's some interesting ways that this is using embedding models and like community summaries. So we're really leveraging the graph technology to make NLP do the right thing. The week with tone has been building this out and there's, we'll have a link here for spacey for the entity linker. And we'll have an article about that coming up pretty shortly. Nice. Is that package the input? Is it, do people just give, put the unstructured data or do they have to run some pre-processing function and then that output gets passed in? Yeah, no, the night we try to make it simple. The idea is with sensing with entity resolution, you bring your PII features in and you call, make a call with some JSON and then the results come back as JSON. So you end up with this JSON description for your entities and the links among them and different probabilities and whatnot. We take that as input and then build up inside of spaCy, there's what spaCy calls a knowledge base and their entity linker, it builds off of that. So basically you import this file and you can also bring in other definitions that you have in your graph. We've made a way so if you had a pre-existing graph and you want to leverage the schema and the links that are in that graph, great. You bring it down into spaCy and now you've got a spaCy pipeline component. When you run spaCy to parse your text junk, you get your parses out of that, and you look at the named entity resolution. And so now when you're picking up different noun phrases that are supposed to be entities, if they make sense in context, they'll be linked to your graph. Like you'll have unique identifier directly into your graph. So what this means is we can do parsing that's much smarter so that we can do the right kind of integration in a knowledge graph. Nice. Jen, did you want to go next? Sure. I've been working on a Graph Academy course. Hopefully that will be coming soon, but on Spring Data Neo4j. And of course, as with anything, more you expand your your learning a little bit or try to teach it to someone else the more you learn yourself so i've come across some things especially in the like delete and update data inside the database using spring data neo4j that i kind of bumped against some rails and things like that trying to figure out how it works and what you do with it so just some kind of cool things there i hope to have some rails and things like that, trying to figure out how it works and what you do with it. So just some kind of cool things there. I hope to have some content, a little bit more detailed about what I've learned and kind of where I bumped into walls, but always really great project there doing some really great things. And the people working on that are really fantastic. There's joint work going on the Neo4j side, as well as on the Spring side. And it's just a really great partnership there. So if you're in the kind of Java ecosystem, Spring ecosystem, definitely check that project out. And again, I hope to have some more detail coming soon about some of the things that I've learned lately on it. Nice. My favorite tool is also a package, kind of like Paco's. It's the Neo4j's dash GraphRag Python package that came out earlier in the month. And specifically I've been looking at the vector database retrievers. These external retrievers allow you to combine Neo4j with other vector databases. So the first three that are available is with Weviate, Pinecone and Qtron. If you already have or already using one of those vector databases, you can basically put in your collection and your credentials and you can have the vector database store the embeddings and Neo4j handle all the knowledge graph and graph data. Looks really good. Yeah. It's a package that has quite a lot of features. So it's taken a while for me to unwrap, but for anyone wanting to get in and it's working in a Python environment, it's a good starting position. So November, it seems like we're still in prime conference and event season. Paco, your calendar looks like, I don't know if you have a day where you can sleep. Are there some, some coming up that you'd like to tell the audience about? Thank you. Yeah. I'd love to. Yeah. I got off the plane from Spain. I was in Ocarina at a conference. I got off the plane last night. So I'm back in California. Yeah, really cool stuff. So next week's going to be busy. We've got nodes. I'm really, really looking forward to that and have so many friends also speaking. At the same time, I'm going to be at ML Ops World Summit in Austin, Texas. And also a lot of friends in this space who are presenting there. So I was going to say, I think we have colleagues from Neo4j there as well. Oh yeah, absolutely. Yeah. And just this whole confluence of like data engineering and MLOps and cool applications that are built off of LLMs, et cetera. It's all kind of connected. And then and then after next week, yeah, it gets busy also. What am I doing? Oh, we have the ISWC is a conference about knowledge graphs. It's more academic, but it'll be in Baltimore on the 12th. And we're doing a workshop about software engineering and knowledge graphs. And then sensing we have our user conference in rest in virginia but right nearby there on the 14th and then on nove 18 i'm gonna be at the cleveland mega meetup so there's a lot of like finance and healthcare in cleveland and so we're gonna bring people together from a lot of big companies and talk about using knowledge graphs and using entity resolution. And then London after Thanksgiving I'm going to be in UK for half of December, but it gets really busy because Linkurious, which partners with Neo4j, Linkurious is coming over from Paris and they're holding Linkurious days, which I think is going to be held at Lloyds Bank. And that'll be on December 3rd. I'm really looking forward to a full day of talks there about, like we're saying, entity resolve knowledge graphs used in finance. And then I think on December 10th, my colleague and friend, James Fair at Neural Alpha is doing a meetup about ESG and using graphs for ESG, you know, ethics, et cetera, in terms of business practices. That'll be on December 10th. And then on December 11th through 13th, we're doing the Connected Data World Conference, Connected Data London. And I believe that Neo4j is a sponsor there. Senzing is a sponsor. I'm one of the co-chairs. So I'll be teaching a masterclass on the 11th about entity resolve knowledge graphs, catching bad guys using open data. And I'll also be a host for some of the technology sessions. And then the third day, we're going to have more of an unconference kind of format. And so I'll be hosting part of that as well, as far as like domain knowledge and sort of semantic overlay ideas. But I'm super thrilled about what's all the people coming together out of London. Also be able to catch up with shout out to ABK, who I think is based in London, pretty sure. Maybe not every day, but at least I'll be around the area to catch up with friends. Nice. That's fantastic. Jen, I don't know if you will go next. Yeah, sure. So I got a couple of virtual things next week one being nodes of course on november 7th and but just before that i'm participating in extreme j conference it's a virtual event as well so i'll be doing like the java portion which is the the j at the end they have like a javascript and a couple others that they do as well but this will will be the kind of the Java day, I guess, next week. So I've got that. And then a couple of days later, I've got the nodes event. And then towards, I guess, just before the US Thanksgiving holiday, I've got the Chicago Java user group or the CJAG, and I'm gonna be presenting there. The event's not up on the site just yet, but it should be here in the next week or so. So if you're in the area in the Midwest, I will be there just before the holiday season starts. And then things get a little bit quiet for me. But how about you, Jason? My schedule is definitely the lightest out of the three of us. So next week, nodes, of course. And then the week after that, I'll be going up to Seattle and doing a talk with the Puget sound, programming Python group, puppy. And also we're doing a joint event with union, boundary ML and pine cone. So looking forward to that. And a couple, I'll probably be going to San Francisco at the end of the month. Although nothing has been like set in stone, but right after that will be reinvent in Las Vegas. So those, those are, those are my confirmed events for the. November is super busy for all of us. We're covering, we're covering so many communities and so many geographic areas and all. So I, I look forward to catching up and finding out your takeaways from all this. I think that's the fun part. Hmm. Yeah, definitely. Yeah, definitely. Envious of you Paco. You get to connect with so many old friends and really, really toss around some, I think there's just going to be amazing. Like the number of connections and new ideas that is going to come out of that is just going to be bonkers. Paco Paganova I really love that about DevRel that, I mean, I've been doing this kind of work. I did DevRel back at, there's a little company called Databricks back when we all fit in one room. So that was really fun. I got to lead community work there and training there a decade ago. And I just really love DevRel for exactly the reasons you both were talking about. I mean, you get to meet so many amazing people and get this bird's eye view of like new things coming down the pipeline and like a lot of cool projects around the world. I really enjoy that. Yeah, same. Fantastic group of people. Always. We will have, as always, links to Fantastic group of people. Always. We will have, as always, links to everything else going on. So speaker resources, a bunch of the things that we've talked about with Paco today, as well as just things going on in the Neo4j community, like blog posts, videos, and so on. I'll link all of that. And then of course the Neo4j events, which a lot of the things that we've mentioned where we're going to be in the next few weeks will pop up there as well. But definitely reach out to us if you have questions or need something else. Thank you so much Paco for joining us today and talking a lot about entity resolution and the work you're doing at SunZing and the work you have been doing over the last several years in catching bad guys with graphs. We look forward to hearing your talk at Nodes too. Looking forward. Thank you so much. All right, bye everyone. Thank you everyone. | Catching Bad Guys using Graph Entity Resolution with Paco Nathan | 2,801 | Neo4j | 20241101 | Our guest is Paco Nathan, a Principal Developer Relations Engineer at Senzing.com (and listed as an “evil mad scientist” on LinkedIn). Paco works a lot within developer communities and tech events, and tries to provide many pointers to learning materials, industry analysis, and connect with many people. He presents talks, workshops, and hands-on tutorials about entity-resolved knowledge graph practices, and the related AI use cases which are downstream.
Episode Notes
Speaker Resources:
* Neo4j+Senzing Tutorial: https://bit.ly/3AvKneU
* When GraphRAG Goes Bad: A Study in Why you Cannot Afford to Ignore Entity Resolution (Dr. Clair Sullivan): https://www.linkedin.com/pulse/when-graphrag-goesbad-study-why-you-cannot-afford-ignore-sullivan-7ymnc/
* Paco’s NODES 2024 session: https://bit.ly/48sCSlD
* Graph Power Hour: https://www.youtube.com/playlist?list=PL9-tchmsp1WMnZKYti-tMnt_wyk4nwcbH
T* omaz Bratanic on GraphReader: https://towardsdatascience.com/implementing-graphreader-with-neo4j-and-langgraph-e4c73826a8b7
Tools of the Month:
* Neo4j GraphRAG Python package: https://pypi.org/project/neo4j-graphrag/
* Spring Data Neo4j: https://spring.io/projects/spring-data-neo4j
* Entity Linking based on Entity Resolution tutorial: https://github.com/louisguitton/spacy-lancedb-linker
https://github.com/DerwenAI/strwythura
* AskNews (build news datasets) https://asknews.app/
* The Sentry https://atlas.thesentry.org/azerbaijan-aliyev-empire/
Announcements / News:
Articles:
* Neo4j Developer Blog https://bit.ly/3LcYx6q
Videos:
* NODES 2023 playlist https://youtube.com/playlist?list=PL9Hl4pk2FsvUu4hzyhWed8Avu5nSUXYrb&si=8_0sYVRYz8CqqdIc
Events
* All Neo4j events: https://bit.ly/3Lb1Kn3
* (Nov 5) Conference (virtual): XtremeJ https://xtremej.dev/2024/schedule/
* (Nov 7) Conference (virtual): NODES 2024 https://bit.ly/47VqQkp
* (Nov 8) Conference (Austin, TX, USA): MLOps World https://mlopsworld.com/
* (Nov 12) Conference (Baltimore, MD, USA): ISWC https://iswc2024.semanticweb.org/event/3715c6fc-e2d7-47eb-8c01-5fe4ac589a52/summary
* (Nov 13) Meetup (Seattle, WA, USA): Puget Sound Programming Python (PuPPY) - Talk night Rover https://www.meetup.com/psppython/events/303896335/?eventOrigin=group_events_list
* (Nov 14) Meetup (Seattle, WA, USA): AI Workflow Essentials (with Pinecone, Neo4J, Boundary, Union) https://lu.ma/75nv6dd3
* (Nov 14) Conference (Reston, VA, USA): Senzing User Conference https://senzing.com/senzing-event-calendar/
* (Nov 18) Meetup (Cleveland, OH, USA): Cleveland Big Data mega-meetup https://www.meetup.com/Cleveland-Hadoop/
* (Nov 19) Chicago Java User Group (Chicago, IL, USA): https://cjug.org/cjug-meeting-intro/#/
* (Dec 3) Conference (London, UK): Linkurious Days https://resources.linkurious.com/linkurious-days-london
* (Dec 10) Meetup (London, UK): ESR meetup in London by Neural Alpha
* (Dec 11-13) Conference (London, UK): Connected Data London https://2024.connected-data.london/ | 2024-11-01T08:25:46.284360 |
https://www.youtube.com/watch?v=tB5s7Q-8DsE | Hello community! You ask me, hey, is it possible to connect multiple agents to a knowledge graph? Yes, of course we can! And I will show you today here a new methodology that stands for Adaptive Generative Engine for Task-based Interaction and Graphical Representation Agent Interaction Graph and it is a novel platform to design that is designed to bridge the gap between llms and knowledge graph for multiple agents beautiful this is here the paper october 15 2024 university of tokyo university bielefeld duke medical school universidad de alicante yale university xiamen university canal medicine and hinan university. Beautiful. Eight institutions, eight famous institutions. Now, we will use our old friends, our tools that you know, chain of thought reasoning, a React framework will be implemented in our agents. And we use in-context learning, where we will apply few-shot learning. And this is the way we will build our prompts for the LLMs. Now here we have seven agents, aren't they beautiful? So let's have a look, just look at the name and you might understand what we are talking about. The user intent interpretation agent, the key concept extraction agent, the task planning agent, the knowledge graph interaction agent, we say, ah, here it's happening, the reasoning agent, the response generation agent, and the dynamic knowledge integration agent even want to feed new nodes and edges back to our knowledge graph. So, interesting, eight institutions, seven new agents. Let's have a look. Let's say I have a user query. I say, hey, I want to understand the relationship between quantum entanglement and teleportation. What foundational concepts should I learn first? The user intent interpretation agent identifies this now as a particular request, and there is a predefined class. This here is called a prerequisite prediction, and I will show you the different other classes in a minute. So we have predefined structures that we use. The key concepts extraction agent says okay I extract here the object's entanglement, teleportation as key concepts. The planning agent has now to do here to determine here the foundational concepts leading to or in the environment of quantum entanglement and teleportation and map out a learning path to answer the query. The knowledge graph interaction, you get it. It generates here some SPACO queries or cipher queries or whatever to retrieve concepts from the knowledge graph. And the reasoning agent analyzes to retrieve data to identify the most relevant foundational topics. And the response generation agent crafts the response. And then the dynamic knowledge integration agent updates the knowledge graph if we discovered some new relations that are not established in the knowledge graph. And here we have the official documentation visualization. We have a user input. We have a multi-agent framework that interacts with a knowledge graph. And an output is generated. And this is here a rather simple representation of a particular workflow. So we have here the task definition, and then we have agent 1 to 7. And then somehow all those agents interact in a specific way with a knowledge graph, and this is it. Well, we have to have a closer look now. And you might say, wait a second, this sounds familiar, no? And you say, hey, look at this video where you showed us here the interaction between an LLM and a knowledge graph for the new GIF methodology. And over there, you showed us here Sync on Graph, deep and responsible reasoning of large language models on knowledge graph. And although this is already here for March 2024, we had a look at this and we discovered that instead of just simply fetching here facts from a knowledge graph with a simple SPARQL query, the LLM could here actually sink its way through the knowledge graph and explore different reasoning paths through the knowledge graph like a detective gathering the clues. And this is what I told you that Sync-on graph is rather beautiful for. And you remember I showed you this. We had just an LLM plus a knowledge graph. And then we looked here at Sync-on graph. This is an LLM tensor product knowledge graph so this means a stronger interconnect with the knowledge graph plus was simple we generate a sparkle query or any other query that you like from a graph database you have a question you got a response but if there is a link missing if your knowledge graph is incomplete, and most cases knowledge graphs are incomplete, you have a problem here. But not here, because as I told you, here on the example of Canberra, Australia and Antony, here we have now that we built here subgraphs here of the epsilon environment here of our particular topics. And then we were able to deduct the information from the query. And you remember the mechanism that I told you how Sync on Graph works, that it couples here the LLM with a knowledge graph using the beam search algorithm and this was the beauty that we do not just go with a greedy algorithm but beam search allowed us here the llm to explore multiple possible reasoning paths at once in parallel and then choose here depending on the width of beam, to choose the top three or top five causal reasoning structure. Now, this here is not the case. So this is a step back, because now we just have simply a greedy query. So you see, I ask myself reading this, wait a minute. So I understand perfectly if you have an LLM and a knowledge graph and you have simply your SPOGL query. But how do the authors here in this new paper go now to a closer interrelation between the seven agents and the knowledge graph. What is the mechanism? Because it is not, as I showed you here on Sync on Graphs, a beam search. It must be something different. So what is it? Now, we have to dive, have now a deep dive into every agent in the agenti graph. And I ask here our little strawberry here to provide a simple example. After I had some exchange, we have strawberry on the topic of this paper. So let's have a look. But yeah, let's make it a little bit bigger here. First detail we have. How many points we have? You have 7 points, points seven agents so the first three agents what are they doing now exactly and the query is hey how does photosynthesis relate to cellular respiration and what concept should i understand first you see very clear predefined question what a coincidence so let's have a look. So the user intent interpretation agent determines that the user is asking for both the relationship between two concepts, photosynthesis and cellular respiration, and any prerequisite knowledge. And now you might say, ah, now I know where the epsilon surrounding of our topic are coming from. So we have here the output is relationship judgment and prerequisite prediction. This is here our interpretation what the user wants from the AI. Then we have the key concept extraction agent. Now it is happening. We extract now the entities from the user query and you see it is photosynthesis and cellular respiration. And it also extracts here the relation between those two entities is relate to, exactly as in the user text. So now that we identified what it is really all about, we already used up two agents. Okay, let's go with this. And then comes another interesting part, the task planning agent. So whenever we have some intelligence that is invested here in the planning of the strategy of the whole system, now it gets interesting. The task one and two, find the relationship between photosynthesis and cellular respiration. Yes, this is exactly the question. And identify prerequisite concepts for understanding these processes. So we are now looking, if you want, for additional information explaining, maybe in the simplest case, what is photosynthesis? Where is here connection to, I don't know, animals or to plants or to whatever we have. What is cellular respiration? What is a cell? What is respiration? How is this combined? What is the latest research on the topic? So anything that has a prerequisite concept, we're looking for. So those are now these two tasks that we are looking for. And now the AI comes and says, now the knowledge graph interaction agents comes and says, hey, I have here a knowledge graph for me. Structured knowledge is available for me. So it builds now, it constructs now and executes here a database query, a graph database query, a cipher query, or a Sparkle query, whatever you have. It doesn't matter what system you're using, for both tasks, one and two. You remember my, hey, wait a minute, wait a minute. Here, task one, task two. Okay. So, cipher queries for both tasks. So, now we understand where we get the structured knowledge from. And and then retrieves the relationship and prerequisite concepts from the knowledge graph now this is a little bit vague because how does it do this now let's just follow along the reasoning agent analyzed the result determined that photosynthesis and cellular respiration are complementary processes that identified at the prerequisite in our case here identified by the ai given the knowledge graph that was available included basic chemistry cell biology energy transfer and in the end and then we have a response generation agent and a dynamic knowledge integration agent great yeah they implemented this here with a graph database on Neo4j, but whatever, you don't have to use SciFag, you can go with whatever you like. But you see, now it is interesting that we just have here a graph database query. But where does this relationship and the prerequisite concepts from the knowledge graph come from for the LLM to understand and integrate. Because where we are is more or less we are here now. We have an LLM plus a knowledge graph. And we have here a Spark query or a Cypher query or whatever you have. And if there is a missing link, you have no way to find an alternative route between those two entities. So how does it do this, that it is, you would say, equivalent here to our more interlinked representation of an LLM in a knowledge graph, like in SyncOnGraph. Now at this point here, I looked here at explicitly the agent and here the user intent interpretation agent and I used and I looked here at the prompt this is the absolute concrete prompt that the authors used so you see here this is what defines what an agent is doing one you are an expert NLP task classifier specializing in knowledge graph interactions, and analyze the given query and classify it into the following categories. So we do not go with the complete free search space, but we say, hey, either relation judgment, prerequisite prediction, path searching, concept clustering, subgraph completion, idea hamster, or then everything else, a freestyle question. And then they provide examples, beautiful, in-context learning is happening, and then you see, great, and then please answer in the following JSON format. Okay, this is point number one. Now it gets interesting, because now we have the key concept extraction prompt. And this is the official real prompt. And they say, now as you are an advanced concept extractor, whatever this means, your task is to identify and extract key concept entities and relationships from a given user query, using here our old friends here the named entity recognition and the relation extraction techniques that we know now for dozens of years and you will then map these to the knowledge graph schema using bird derived vector representations for semantic similarity and i give here examples and beautiful And this is exactly where I also said, wait a minute. You will then map these to the knowledge graph schema using BERT-derived vector representation for semantic similarities. I have three immediate ideas how I do the mapping. So which kind of mapping is it in detail? What are you talking about here? And unfortunately they are rather vague here in their detailed explanation. It is just this sentence. And you see here a screenshot and it tells us here, yep, maps extracted entities to the knowledge graph by semantic similarity with BERT-derived vector representation. So we have to have now a two-step process suddenly within the extraction agent where we have new objects. So suddenly we don't have seven agents and a knowledge graph, but suddenly we have to build here, if you want here, a birth model, kind of a sentence transformer. What I would do, and this is only one option how we do the mapping, is this one. Let's say, and I'll show an example to make it really clear, how does global warming influence sea ice melting and what are the main factors involved? So you see, we need some predefined knowledge here. What are those factors? So we do not just have a relation between global warming and sea ice melting with the object influence, but we want to exactly know how and why and what is in the thematic clustering here of those terms. So this is my idea how it is could be done, but if this is the exact case, or maybe there are three, four, five other mapping technologies, I don't know, but I want to explain this here, and this is the simplest case if you want. So step one, we extract the entities and the relations from the query. Easy, no? Global warming, sea ice melting, beautiful. And the relation we have is influence and maybe you go even with main factors involved. Then if we use BERT, we have embeddings, we have vector representation in a mathematical vector space. We now generate embeddings for the query entities and the relation that we just found. So using now a BERT system, but hey, stop, we have to build a BERT system, no? We have to build here a complete vector space where our semantic similarities in the linguistic text is now encoded in a close-by relationship in a mathematical vector space. This is the whole sense of using BERT. So we have now to embed here each and every term. So we have the BERT embedding of global warming, BERT of CI smelting, and you get the idea. So we have one set of vectors if you want. But how do we get the knowledge graph embeddings? Now, either you buy those here, let's say from OpenAI, or you use your own BERT that you created for your domain-specific knowledge. So let's have a look. And it is in the knowledge graph. Let's say that in your knowledge graph somebody built those system and they are a little bit different terms technical terms you have to label climate change and not global warming and you have ice cap melting instead of sea ice melting and you get the idea so we now extract here from a knowledge graph embeddings, and there's a simple methodology that we can do this, not only the entities, but also the relations. And then, but only then, if we have the embeddings for the query and the embeddings in the knowledge graph, then we can compute the similarities. So you see the global warming here. Here we have the global warming. And the climate change that is here in the knowledge graph, they have now a similarity in the constructed bird system that we built out of thousands and thousands and thousands of documents. They are now really semantically similar and if we go with sea ice melting and ice cap melting we also get here the highest correlation between those two terms and sea ice melting is of course the term that we used in the query and ice cap melting is the term that we have available in the knowledge graph So if you say now there's a mapping between those, this is the easiest way I can imagine to perform the mapping. But I have multiple other ideas how to do this mapping. So you see, it is not really specific. Or if you found here the hint how they do it, please leave a reply. Step five, mapping to the knowledge graph. And here it's happening. So we have the global warming as I told you here from the query. We map this now to the node in our knowledge graph and the best node is climate change. And for the sea ice melting our semantically most close by node is ice cap melting in the knowledge graph. So now we have now an allocation of the query to the embeddings in the knowledge graph. My goodness. Yeah, impact on knowledge graph query. Now we can have, for example, here with Cypher. We formulate now a query in Cypher to find a relationship between climate change and ice cap melting through the cause relationship. But you see, this is quite some step, no? Because either, okay, you buy here the embeddings from some company that you like or wherever you are, or from your LLMs. But normally I build my own bird systems because they are highly domain specific. Only for theoretical physics, I have 12 BERT systems. So you see, you have to have those terms in your knowledge graph and you have to have those terms in your embedding structure. Great. This was agent number two. Then we have the task planning prompt. This is here really the prompt that generates you the agent. And we say, hey, as the task planning agent, your role is to decompose the identified user intent into a logical sequence of executable tasks for the knowledge graph interaction. Create an optimal plan, considering now that we know the task dependencies and the execution order. Here's an example. and you provide in context learning look do this then do this beautiful now agent number four you know the knowledge graph interaction you're not going to believe it here as a knowledge graph interaction again your task is to translate high level task into executable graph queries we now have our sci-fi queries, our SPARQL queries, utilize Vue short learning and React framework to generate and define queries dynamically. Here are the examples, beautiful example, beautiful in-context learning, Vue short learning. Then we have the reasoning agent prompt, apply logical inference to the raw query results, leveraging contextual understanding now from the LLM and the reasoning capabilities from the LLM. The raw query results come from the knowledge graph. So now you bring them together and you use now the intelligence of the knowledge graph for causal reasoning. Examples, beautiful. And then response generation prompt. Yeah, I guess it is clear now to us how this works. Dynamic knowledge integration prompt. And those were seven agents. And the prompt that generate those agents and define here the name, the scope, the role, the example, and the task and the output former, the schema, how they exchange information between seven ei agents and one knowledge graph or maybe you have multiple knowledge graph then i would put in another eight agents to do the coordination here of the information flow to multiple knowledge graphs and the authors tell us here our agenda graph has been rigorously evaluated and the authors tell us here the performance matrix is achieved 95% accuracy and task classification and 90% success rate in task execution. Okay, beautiful. And they tell us here, hey, we have two mode. Either you can chat here with the the llm or you can have here an exploration mode that you discover here new relations beautiful so you see a new way with seven agents here that work together interact here in a very specific way i showed you here the specific prompts of each and every agent and this is how the system is built up. Okay you might say now but we know that in certain aspects we could optimize here this particular structure because if we go back let's go back here to the general form now here we know for some agents there are already optimization routines available like a beam search mechanism or how we integrate here the knowledge graph in a more refined way yeah but this is as always so some beautiful research work presented to us here by eight beautiful institutions and they show us here their way plus we immediately know and if you are a subscriber of this channel you immediately know where are the points where you can say hey i just learned from the other videos that we can do further optimization. And you see, this is the beauty of currently AI research. Reading all this beautiful publication by all the AI researcher, you understand that each and every task that you might have, there are hundreds of people that already tried to optimize this particular task. And in the end, you can get maybe a real optimization of your complete methodology. So there we are. This is exactly what I set out to do. I hope you enjoyed it. I hope you had a little bit of fun. Maybe it was informative and it would be great to see you in my next video. | Seven AI Agents and a Knowledge-Graph: AGENTiGraph | 1,464 | Discover AI | 20241018 | AGENTiGraph introduces a new framework that integrates Large Language Models (LLMs) with Knowledge Graphs (KGs) through a multi-agent system, addressing key limitations in current AI models for complex, domain-specific tasks. Explore multi-agent dynamics with a KG (knowledge-graph).
The system employs specialized agents - each leveraging LLMs - to interpret user intents, extract key concepts, plan tasks, interact with the KG, perform reasoning, generate responses, and dynamically integrate new knowledge. By decomposing user queries into manageable tasks and leveraging the strengths of both LLMs and KGs, AGENTiGraph enhances factual consistency, reasoning capabilities, and adaptability in handling intricate queries.
A central method in AGENTiGraph is the semantic mapping of extracted entities and relations from user queries to the KG using BERT-derived embeddings. The Key Concept Extraction Agent performs Named Entity Recognition (NER) and Relation Extraction (RE) to identify relevant concepts and relationships in the query. Both the extracted elements and the KG's entities and relations are embedded into the same semantic space using the same BERT model. By computing cosine similarities between these embeddings, the system accurately maps user inputs to corresponding nodes and edges in the KG, even when terminology differs, thus enabling precise information retrieval and interaction with the knowledge graph.
all right w/ authors:
AGENTiGraph: An Interactive Knowledge Graph Platform for
LLM-based Chatbots Utilizing Private Data
https://arxiv.org/pdf/2410.11531
00:00 Multi AI agents and a Knowledge Graph
01:08 7 Agents with specific functions
01:43 Simple example and all 7 agents
03:03 Official AgentiGraph Workflow
03:51 Open questions and Think on Graph (ToG)
07:02 Every Agent explored with its PROMPT
14:39 Map entities to knowledge graph embeddings
22:05 Performance Metrics of Agentigraph
#airesearch
#ai
#aiagents
#knowledgegraphs | 2024-11-06T18:26:12.593175 |
https://www.youtube.com/watch?v=4fIhbPPbYiU | right we have to wait until the countdown is done okay can you just let me know when to start it yeah i will we like you do introducing me or how did it go say i'll say a few one sentence and then i take it over all right welcome everybody we are kicking it off with tomas bratanic who is a graph data science expert author and all over GNI mastermind at Neo4j with his session building knowledge graphs with LLMs take it away Tomasz. Okay so thanks for the nice introduction so as Alex mentioned today we'll be talking about building knowledge graphs with LLMs and as he mentioned we're also writing a book with Oscar about Knowledge Graph Rec but enough of that. So funnily enough I was looking at the previous session for like five minutes and I saw kind of nice continuation how we're gonna continue with this session, because first we're not going to jump straight into Knowledge Graph building, but we're going to look at when text embedding approach fails and then how to overcome it. And then the overcoming will be using Knowledge Graph, as mentioned by the title. But let's just go into what are the limitations of text embedding approach. In a typical REC pipeline, I've probably seen this so many times now, but basically you take a couple of PDFs, you chunk them up, you index them using any text embedding models, and then at query time you use the same embedding model, fetch the top four or five or ten most relevant documents, pass them to an LLM, you hope that any relevant information is in that document, and then the LLM generates the final answer. But there are some use cases for domains where this approach doesn't really work well, because this approach really works well for documentation, but for example for other domains like legal, for some questions this might not be the best approach as we'll see. And here I have a couple of questions outlined that you can ask. So for example, what are company policies on the remote work? But also you can ask other questions like what's the total value of contracts with Neo4j or whichever company? We'll see that different types of questions require different types of approaches. For example, when you're asking about the company policies on remote work, that's like a documentation type of question where basically you just need to find the top relevant text chunks that talk about remote work, pass it to an LLM and you get your answer. So that's all good. But what are payment terms for specific contracts. So in this case if you took a naive approach and just return top four chunks from your vector index, what might happen is that you can get four or here have three text chunks from different contracts which might not be related. The LLM could be quite confused because, let's say, you get three different payment terms for three different contracts. In this case, just using Naive vector similarity search might already fail. Here in the title, I can say this can be overcome, this can be solved by using metadata filtering. So if your vector text chunks have some structured data and you can filter by contract ID or company name or something like that, then you narrow down your results, and then you don't get text chunks from different contracts because you used metadata filtering. That's one scenario where basically you can use structured data to enhance your results. But then if you ask questions like what's the total value of our contracts with ACME or how many contracts expired this month, because this is a question that you might ask if you have a legal chatbot. In this case, basically, naive approach, even if you add metadata filtering, completely fails because the vector index, like the REC approach just returns for different text chunks. Maybe, hopefully, in ideal world, you would get full text chunks with contracts about with ACME that have like the value in it and then like you would rely on LLM basically doing some arithmetic and adding up the total value but what if you have more than four contracts, if you have 10 contracts with Acme.Ink, then in that case it completely fails. If you ask how many contracts expire this month, again, basically the four chunks that you get from a vector index doesn't really help in answering that question. Sometimes what we see is that you might get one or two contracts that might expire this month and then LLM will be okay, these two contracts expire this month, but that might become just a partial answer because maybe 200 contracts will expire this month, but LLM doesn't know because it gets you just for top-k text chunks and you can get inaccurate or partial results. As mentioned, certain questions that require filtering, sorting, counting, and aggregating cannot be answered with text embeddings only because basically you need structured data to perform these sorts of aggregations. And then there's one which I didn't really mention before, but sometimes when you have information spread across multiple documents, the vector approach might also fail because, let's say, the top K is free, but all the information is in five documents. Then even if your vector similarity search is perfect and it gets the exact information that you want, it's still really hard to define the exact top K because it might be dynamic. Sometimes you want like five documents, sometimes you need 15 documents, and this is mostly like unsolved problem at the moment. So, uh, structuring this data can also help with overcoming the top-k issue. This is my first memory, I guess. Basically, when people think or talk about the REC pipelines, mostly they just think about PDFs and documentation and just text. But there's this whole beautiful world of structured data that is overlooked a lot and it can help you answer these types of questions, whether you need to do structured operations or for more complex questions, like multi-hop questions. So here knowledge graphs come into play. So probably you've heard a lot about knowledge graphs today, so we won't go too much into depth about them. But basically knowledge graphs are used as data representing structured data. So this is probably the most used image to show basically what knowledge graphs are about. So you can have entities and relationships between them. But this, I think, is now slightly outdated because in the world of LLMs and RAC pipelines. Basically what you want most of the time, your data sources contain structured information and also unstructured information, because here we only have structured information, but now we want to also bring in additional unstructured information. And the nice thing about knowledge graphs is that you can have both of these types of informations in a single database. For example, here we have a contract and it's linked to two companies and these companies are located in whatever cities. But then, for example, if you chunk your contracts by clauses, then you can have each clause linked to the contract, and you could embed clauses separately and have them linked to the structured part of the graph. Basically, you could think about clauses or your text chunks with quite complex metadata, right, because the structured part of the graph can be thought about as metadata related to the unstructured part of the graph, right. So basically how I see it now that in the words of LLMs, the nice thing is that you can have both of these types of information together. And that's the power of knowledge graphs. So now let's finally talk about building the actual knowledge graphs with LLMs. Because that's kind of a nice thing everybody is funny, but in LLM world like 90% of the time the input is a PDF or a document right so we need some sort of way to build a knowledge graph to select that information from a PDF and this is what we'll be talking about. Basically, you can have, as I've shown before, multiple documents and then you build a knowledge graph based on the information in those knowledge graphs. The nice thing about knowledge graphs is that they're very good at combining information from multiple documents. Here we have four documents that represent this information, but now if we ask a question like previous OpenAI employees, which companies they founded, this would turn into a very simple Cypipher query instead of us having to look through multiple documents and then maybe one document contains that Dario and Daniela worked at OpenAI and then another document contains that they found it Anthropix. So this is kind of the multi-hop types of questions where they have to look to do multiple documents to find the whole truth. And this is also like a very nice use case for knowledge graphs of combining that information in a nice and compact way. And building knowledge graphs from text used to be called information extraction and it used to be like very complicated you need multiple models and each model in the in the pipeline was very custom and domain specific so mostly academics did that or like billion dollar companies and nobody in between but now basically you just ask an LLM, LLM please extract information in a structured way, mostly the structured way is a JSON and that's it right and so it the extracting of structured information from text has become much more mainstream and that also opens up the possibility of building knowledge graphs from text. Extracting structured information from text is so frequent use cases with LLM that basically now you have like JSON modes or like OpenAI also has like structured output mode and like you can also kind of hack the tools and use them to extract structured output so basically extracting structured information with LLM is such a frequent use case that there are lots of tools built around LLMs to make it easier for the developers to get like a consistent output. And as mentioned, basically the whole magic of information extraction is that you take some text. Some magic happens and you get a nice JSON. So basically you get a nice structured information. And how I see it basically, the information extraction pipeline, there's a spectrum of approaches, how to do that, and how I see it is like there's the spectrum lies from the most generic extraction where you have no idea what's in the text, what to extract, you just say LLM extract as much information as possible and then on the other side you have very domain specific where you know exactly what's in the text, for example, legal documents, and you can define each property in the JSON, what it should look like and what should be expected. Now let's look at the spectrum of approaches and how they actually work in practice. The first one is about the most generic approach where you have no idea about documents, what's in them, because a very typical scenario that I see is basically your boss gives you 10 000 pdfs and just says build the chatbot and you have no idea what's in them. In that case, you could use the most generic approach, you just say extract nodes which have ID and labels and extract the relationships between those nodes, and then you hope for the best of that. This is such a frequent scenario as well that you will see a lot of frameworks supporting the generic approach where you don't have to define any schema. Here I have the code for Langchain because I like Langchain because I implemented that, so I like to show off. So basically, the most generic approach with Langchain just I implemented that, so I like to show off. Basically, the most generic approach with Langchain just looks like this. You decide which LLM you want to use, and then you press the document and build the knowledge ref. Here I have one example of a contract between Mortgage Logic and TudorLate. So here I ran the extraction on the same contract two times, and then you can compare the extraction. So now because we didn't define any schema what we want to extract, you will get some relationships might be the same, but some relationships might be the same, but some relationships might be different. And like the knowledge graph, the extract information can be quite different depending on different runs, right? So for example, here we have TudorLink hosts a website and then MortgageLogic uses it. Andic uses it and in the first one on the left side we don't have that information. Here we have that MortgageLogic provides credit data and client content and here we have credit data and client content. So some things might stay the same and some things might change but what we see is that like the schema is very inconsistent and also the information that's being extracted and yeah so basically extraction can be noisy because RLM decides what's to extract and there's no schema consistency right so If you run this extraction on multiple contracts, there is no guarantee that node labels or relationship types will stay the same. Then I see there's a middle ground approach where you can define the node labels and relationship types you want to extract. This requires some upfront work because you need to define what you want to extract. But the nice thing is that you can use LLMs to help you with defining the schema. Because you can just sample a couple of documents as the LLM, what types of nodes, node labels, and relationship types did you find in the text, and you just fit that into an LLM. Again, you can use Langchain, but it's the Vodanka code, but basically what you would do, you would define the types of labels and the types of relationship types. Here you would have allowed nodes and allowed relationships, but somehow I missed that. Then your schema. Here again, I ran the extraction on the same contract two times in a row, so now the extraction is much more consistent between two runs because we guide the LLM what do we actually want to extract, we'll get more consistent outputs. There's still some discrepancy between the depth of information, between extracted information, between different calls, because that's just the nature of LLMs. Sometimes they want to extract more information, sometimes they want to do less. But the nice thing is that we also have better retriever options because now that your schema is more defined, that also gives you better options for the retriever because you kind of know what information to expect in the knowledge graph, you also know how to then retrieve it in better ways or more ways. Then the last approach that I like the most is the domain specific approach. So now basically you define every, if like you're extracting a JSON, you define every key in the JSON and you give it description, you can give it like available options, like for example dates, you define what's the format that should be used to write. So this type of information extraction requires the most work, but obviously it gives the best results. Here's one example that you could use to extract information from a contract. You can have a type with predefined types and parties of organizations, dates, scopes, amounts, and since this is more like a low level extraction, there's not a specific tool you can use, but you can use frameworks like LankChain, Instructors, Lama Index, or OpenAI Structured Outputs. You define the information you want to extract using pidentic objects. With this approach, because you define what you want and you're very descriptive and constrained about the information that needs to be extracted you will get the best results right so here I have basically one example and then it's basically we have some information as notes but then some information is stored as properties as well right so for example this agreement has effective date like a summarized context scope and stuff like that. So basically, when you can... So basically, this should be domain-specific approach, but basically, the LLM is guided exactly on the type of information it should extract, and the schema is fixed because it's not actually as before. We said to an LLM, extract notes and relationships, but now we basically say to an LLM, extract contract type, extract effective date. So the LLM doesn't actually do the graph modeling. The graph modeling is a separate step. In this case, the graph schema is fixed as it's separated from the LLM. Because we can guide and give instructions to an LLM exactly what we want extracted, we'll get the best results. And since the schema is fixed, you can create all sorts of tools for the retriever. So basically because you know exactly what is in the graph, what's the format of properties and all of that, you get the best options to build a rack application that can do all sorts of aggregations, filtering, multi-hop questions, all of that. That's the three types of approaches that I've seen in practice. Now some observations using that in actual practice. Obviously, anytime you're going to be building knowledge graphs and text, you'll have to do some cleaning up afterwards. The most frequent one is the entity resolution. You can have multiple nodes representing a single relevant entity. So here in this example, we have a UTI asset management company, and there are three nodes that represent the same node, the same entity. So you want to have some post-processing steps where you identify these duplicates and merge them. That way, your graph will have much better structural integrity. Some observations from papers. This is the paper from Microsoft where they're building knowledge graphs. So like when you instruct the LLM to extract or let's say all people and organizations from text, will the LLM actually extract all people and organizations from the text? And the answer is mostly not right. So when working with telelamps nothing is really perfect. And they also observed that the extraction is quite dependent on the chunk size that you use, so the smaller chunk sizes that you use the more information you will extract overall. Another technique that they've introduced is that you can do multiple passes on a single text chunk and then extract more information. So if you say that on a single pass we will not extract all the information, maybe let's do two passes or three passes and try to get as much information as possible. Then this one is also quite interesting, as we've seen before. This is how consistent is the selection. Even with a fully defined schema, like let's say domain-specific approach, how consistent is selection and it's valid depends on the types of documents, but the less ambiguous are the documents, the better the consistency will be. Here in this example, series are least ambiguous, So here in this example series, I'll let them list ambiguous so they'll have the most consistency. And then it's funny, but scientific articles and websites are kind of similar. But we can see that scientific articles are still more consistent because when you're doing science, you shouldn't be as much ambiguous as you can be when you're basically doing websites. One thing that's also quite interesting to see is that nobody like this is like an evolving space and nobody really knows what's the best approach to extracting knowledge graphs or like structured information. So like then you have so many options how to do like you can use JSON modes with LLMs, you can use tools, functions, you can just do regular prompts and define the example with fuchsia. And basically, this is already model-specific. So it's really hard to say what works best because it really depends on the model and not just the model, let's say, because even GPT-4.0 from January behaves differently than GPT-4.0 from April. But this is also something to think about when you're building knowledge graphs. And that's it for me, so just on time. And now we have two minutes for questions. And basically also because this is like a theoretical talk, but if you want some code so you can try it out, you can go and look at my blog post or code examples and test it on your own. So yeah, now I don't know if we have time for questions. I think I've already squeezed one in. Thank you, Tomas. The one with the most votes comes from raya and they are how does knowledge graph update internally when i add new document to the graph yeah i mean so that's basically up to you uh but uh build like adding additional information is not uh is like not a big problem because it's just adding additional information is not a big problem because it's just adding additional nodes and relationships. But the only thing you have to be careful is that you do entity resolution. So if an entity is mentioned in the text that's already in the graph, you kind of want to use the same name because as mentioned, sometimes not exactly the same name is used. So then the only problem is dealing with this. But this could also be like as post-processing. You update your knowledge graph and then try to search for duplicates. But other than that, it should be very straightforward. Yeah, cool. And then there's one one more maybe we can squeeze it in as well from david um and we'll keep on answering this one and then we close it but if you want to jump to the next session you can do so uh what is your recommendation to perform graph disambiguation merging duplicated nodes or different nodes that are actually the same thing? Yeah, so what you see in practice is that it really depends on the types of nodes you want to merge together. So it's very different when you're merging people than when you're merging organizations or locations. But for the most part, what we've seen is that you use text embeddings to find potential candidates that could be merged, and then you can use some sort of word distance or you can have an LLM as a judge deciding if two entities should be merged. So the generic approaches is that you use a combination of text embeddings and then additional logic to give the final decision if two entities should be merged or not. Super. Cool. With that, I think we're at the end thank you very much tomas thank you for watching uh keep keep continuing with the notes uh presentations throughout the day and uh yeah see you at the next uh next session uh and uh somewhere in the future bye see you later bye | NODES 2024 - Building Knowledge Graphs With LLMs | 1,857 | Neo4j | 20241108 | Tomaz will share lessons learned from building knowledge graphs with LLMs, from using various text chunk sizes to constrained schemas and more.
Register to NODES 2024: https://bit.ly/47VqQkp
Get certified with GraphAcademy: https://dev.neo4j.com/learngraph
Neo4j AuraDB https://dev.neo4j.com/auradb
Knowledge Graph Builder https://dev.neo4j.com/KGBuilder
Neo4j GenAI https://dev.neo4j.com/graphrag
#nodes2024 #neo4j #graphdatabase | 2024-11-09T14:36:52.471369 |
https://www.youtube.com/watch?v=C642TL9_X0Q | We're joined today, very fortunate to be joined today by our speaker, Mike Walsh. He's a futurist. He has written best-selling books about the topic of AI, about the topic of algorithms, which have been with us for thousands of years. And he's talking about this inflection point of where we are and how, what's the responsibility of all of us collectively in how do we take advantage of a lot of the advancements that we see around us. I met Mike at a AI forum in Texas a few months ago. Back then, GPT was not a thing. And so this is how fast things are progressing. And we see a lot of advancements in what we do. And what Mike is going to talk to us about is how each and every one of us play a role to lead not just people, but now lead algorithms, lead AI. What he's going to talk about is we all have a role to play whether we know how to code whether we we don't how do we steer these algorithms in the right way whether how do we judge that they're telling us what we should be using etc you know no event like this would be complete without me going to chat GPT and asking, what questions do you have for an AI futurist who's giving a talk about behavior? And I have a few questions. I'm not gonna impose all of them on you, but one of the questions was- Can I get chat GPT to answer for me? That would be fair. What was over time, how have you seen people's behavior modified in their affinity towards AI? And what have been catalysts that you think are modifying people's behavior? So it's actually a very insightful question. I have to be shocked. You know the funny thing about, we have to remember is that we as human beings are ourselves running our own version of deep learning statistical models. I mean that's actually, these systems were actually modelled on us originally. And sometimes if you think about it, superstition is kind of a form of overfitting in a machine learning model. We start to believe things that don't exist. And I think we do this naturally, and even kids do this, that when you start interacting with any kind of system, you start to build a mental model for what that is. And one of the examples I talk about in the book was someone talking about her young kids, who had basically figured out that if they keep typing the word robot into the browser, they would see the kind of ads that they like for Transformers, right? So this kind of five-year-old kid had built a mental model for how Google's ad search algorithm worked. And I think that's really, you know, something that we have to think about is that our behavior as human beings, whether it's in organizations or as consumers, it's constantly being nudged and influenced by these other algorithms in the marketplace. And I don't think that's necessarily a bad thing, but it's something that we have to be more conscious of in knowing that our ability to be manipulated and to be nudged is very vulnerable to these algorithmic systems. So we're being conditioned by the ecosystem around us, which is including algorithms in everything that we do for the sake of scale. And that's what you believe is conditioning us. The hope is that we bring some of that into aspects of what we do. And that we don't become stupid, like we're talking about. There is a scenario where we just become like in the HG World story, the LOI and the Molochs, right? We just become these stupid, highly pampered people that live in a society with all of this heavy machinery underneath us that we don't understand. Thank you. Question. Thank you. This has been fascinating. How should one decide which decisions and thought processes should be made by humans and which via automation? Yeah, it's a great question and I didn't get into this probably deeply enough. I kind of think of it as a pyramid where you've got, you know, highly, at the bottom of the pyramid you've got highly repetitive easy to document decisions, you know, where I bottom of the pyramid, you've got highly repetitive, easy-to-document decisions, you know, where I kind of think they're deterministic. Like, if the following five things happen, you know, if the kind of the sensors are reading this temperature overload and the thing's blinking red, the following five actions need to humiliate it. Honestly, no human being should be doing that. If you can document it, you should automate it. Then you've got kind of decisions that are kind of a bit of a gray area. They tend to be a bit more probabilistic. Like, you know, we're not 100% sure that what the right thing to do is, and there's probably some kind of feedback effect. So you kind of want to work with a probabilistic AI system that kind of says, well, you might want to consider the following three actions. You know, we're not sure, but there's a high probability of X. And then the human being and the machine are kind of in this training loop together. And this even happens with chat GPT, you know. As human beings enter prompts and they get outputs and they correct the prompts, the system gets smarter. And those sort of decisions, I think, are ones that you automate in that way. Truly human decisions, you kind of know. They're actually something that really touched the human experience or regulation or the choices we make as a society. Makes a lot of sense. And I'm trying to go through these questions based on your votes. And so one question bubble to the top have you done any work regarding how AI will power patients to be even more active in their own health care I've been going to see my doctor once and he had a sign on the door it said I don't I don't treat the following three people people who think they're in couples therapy professional athletes and people who go into Google looking for their their symptoms and I think this is going to be a real it's gonna be interesting dilemma I think for healthcare professionals because we're gonna be putting a lot of diagnostic and measuring tools in the hand of patients, and that's a great thing. Like I said, these aura rings are nuts. Because of regulatory things, they can't tell you your temperature, but they can tell you if your range is off. They can give you feedback about your sleep levels. This is like an accelerant for hypochondriacs and and I think but I think ultimately this is a positive development because you know we're actually going to create a different relationship between patients healthcare providers pharmaceutical companies where hopefully that data and the direct relationship allows us to actually really understand and optimize outcomes. And it has a big implication for us, you know, as patients, as we educate patients, as we educate healthcare professionals, where will they get their information? Will they rely on us or will they seek it somewhere else when the bar to entry and really distilling everything that is known out there is at someone's fingertips without coding. So I think this is a question we have to think about, not just for HCPs, but us as a GenMap. How do we interact with the outside world? Especially with the regulations now on drug marketing. You know, like we were talking about this before, about the science saying no drug reps, you know. But if you could be part of the experience of someone managing their wellness, where they can understand that better and you're facilitating that, you know, that's much more powerful than handing out a pad with your brand on it. Yeah, yeah. You mentioned the importance of management being transparent. What is your advice for management when knowledge is power? Well, it's a tricky question. I think knowledge is power was the old paradigm. The old paradigm, very Machiavellian, you control the access to the information, you play people off against each other, you create and so on, certainly in discord in order to divide and conquer. But I think that model of sort of political leadership just doesn't work in a world where there is so much more information that's out there. And we can't have a hierarchical model anymore because it doesn't allow us to be agile and to compete. If you really want those fast results, you actually need to have a more decentralized approach to management and to teams, which means you can't have secrets and politics and hierarchies. You can't have the Game of Thrones leadership model, right, because people are going to devote more and more of their energy to survival rather than productivity. So you have a choice. And I think from an application to GenMap, there are a lot of things that we do exercise this autonomy, decentralization. And we're finding that some consistency also is needed. And so not really a lightning rod model, but we do need to have at times the GenMab way from an identity perspective knowing that there's a lot of things that could happen in a fairly grassroots matter. So we're still trying to get kind of a happy medium in how we do. But that's powerful I think, You know, the idea of the Toyota way or the GenMap way, it's kind of a heuristic, right? Because it's the way... that's your collective IP. You've collectively learned through all of these experiments in an emergent way that this is an approach that leads to good results, as long as it's able to be challenged. Yeah. I think it's challenging this whole concept of a secret sauce versus secret sauces within an organization. It's fascinating to contemplate what that is. This is interesting. How do you think a successful organization should value age and experience versus youth be innovating? Without violating any employment laws. How old was the question to answer this question? Without the, yeah. Well, you know what I, maybe to like answer this in a slightly different way, you know what I think one of the greatest tragedies is when people with an immense amount of experience leave organizations, you know, because they often take all of this implicit knowledge with them, you know, things that aren't in emails, in PowerPoints, in presentations. I mean, what is the sum of someone's contribution to an organisation? And when they leave, what happens to it? And I think we're so careless about that. And I think it's almost, in an ideal world, you know, at the later phases of someone's career, I kind of feel like we should be doing the most we can to actually having them train systems to take what's in their head and basically train maybe these transformer models. How do you take the essence of a great leader and basically build it into the cultural DNA of an organization? That's probably going to be, I think in the future, the most important part of the later years of someone's career. So you're advocating for more experienced people kind of contributing and capturing somehow the institutional memory. Exactly. What do we know and how do we then transact it with a lot of others? Yeah, like the GenMap way needs to constantly be refueled and refreshed by the experience of people that have gone before. Yeah. What kind of teaching materials should one expose a 10-year-old to prepare for the new world he or she will be working in? Minecraft. Okay. Seriously. I mean, Roblox, Minecraft. Seriously, I mean, Roblox, Minecraft, I mean, these are actually, I mean, if we grew up with Lego, right, then Minecraft was kind of like digital Lego, and then Roblox is kind of computational digital Lego, right, where you've got these fully functioning, I mean, you can actually build logic gates in Minecraft, but then on Roblox, you can actually turn that into a business. So these toys are actually tools, I think, for teaching kids. Yeah, I agree. Can you talk about the ethical consequences of AI, something we probably didn't touch on during the talk, but there's a lot of ethics bias and with any development that it's infancy, the nascency, there's a lot of room for correcting course and a lot of these go through a lot of cycles to get it right, but how do we focus on that and do we? No, it's huge and actually, you know, this is an area where people who might feel that they're not that technical will play such an important role. Because I would say that the technical parts of AI are becoming less important with time. It's just gonna be an API basically, you know, for many organizations. And a lot of these tools even, I think we'll say that Microsoft next week are rolling out as an API on Azure. so it's just not a big deal but the the ethical parts of it are critical and I think there's dangers at either extreme you know to take a completely you know non ethical view of these things is very dangerous and so you've got these people now basically trying to create AGI and hook it up to you know autonomous killer drones. I mean technology for its own sake, if we build a kind of a killer robot civilization it won't be because it's come to life, it's because some maniac you know at a weapons organization has built one of these things. So it's not going to be a mystery when it happens. It's not your toaster that's going to get consciousness and go to you, right? But on the other extreme, there's a temptation for governments to over-regulate before they understand. And for sure, you're going to see out of the EU some kind of crazy dictate, you know, which will ban aspects of some of these technologies before anyone even understands them. And the danger is that a lot of this regulation is kind of form of these technologies before anyone even understands them. And the danger is that a lot of this regulation is kind of form of a regulatory capture which is actually designed to preserve various incumbents who then will pay them large fines. So it becomes this kind of little like game, you know? It's going to be fascinating to watch how this is regulated, how this is reacted to from regulators from different parts of society you know this is a good question as we ramp up 2023 and and we've actually grown year over year consistently for the last five or six years 2023 is no different and so when we are looking for new talent, what are some of the predictors of people that could adopt what we're advocating for, who want to see as people who are change agents, maybe more friendly to this type of thing? How do you look at this? Maybe ask them to come up with some good prompts for DALI or the interview process. Okay. I mean, because, you know, they used to say that in the era of Google, there's no point teaching kids to memorize things anymore, right? I mean, Google essentially has become a form of extended memory. You don't need to know the dates of all the US presidents, right? That's no longer a test of intelligence. The test of intelligence is being able to ask an interesting question on Google. And maybe now in 2023, you know, being able to write an essay is not as important as coming up with a very clever prompt that generates the right one. Yeah. So maybe incorporating some of these new tools into the way we have conversations with potential talent will give us a sense of how open they are to these new technologies. Yeah, you know, it's interesting. You keep saying, write, ask them to write. And I was watching my nine-year-old daughter interface with chat GPT and she's not the, I mean, she's figured a way and it's probably not very tough to just speak. She didn't want to type anything. She's just speaking, and then it's typing, and then that is giving you kind of the text output. But it's fascinating to see how so intuitive it is for a seven-year-old or a nine-year-old. And so I'm actually scared to see what that generation is able to do. When your daughter goes to university, the question which universities are already facing now is how do we evaluate her intelligence, right? Because if we give her a homework assignment, she can just basically spit something back from open AI, right? And then now do we need to check that for fraud? I think that's the wrong way of thinking about it. Maybe we don't have written examinations anymore. Maybe we have oral examinations, right? We're basically in conversation, you're trying to understand, does someone really understand this topic? Now, if your daughter is going to prepare to understand something, she'd probably sit down with ChatGPT for quite long periods of time and ask intelligent questions until she builds her own mental model of that. Then we can figure out how smart she is. Yeah, that was a topic. You know things are mainstream when the PBS NewsHour is about chat GPT usually not the topics that you hear but that was all about universities reworking their curricula and one of the professors they interviewed said if I sniff any hint of AI you're gonna stand up and you're gonna have a oral exam verbal exam. And so it's very interesting, I think we're just imagining this. What do you think of the future of drug discovery using AI? Where should it focus? Well, actually, I mean, you've answered the question yourself in that I think the future of AI powered drug discovery is telling you where you should focus. I mean, that's the beauty of it, is that these tools like AlphaFold, they're not a threat. They actually allow us to really start to direct our energies and priorities in areas that have the greatest opportunity for discovering great candidates, right through know potentially reducing the need for wet labs and simulation and and we get a lot of the aspects of drug discovery and development we can actually speed up now you know and I think that's a wonderful opportunity yeah you know you talked about this being an ecosystem to do this and for us at Genmab we do work through a lot of partnerships. Partnerships with companies like ours, partnership with technologies, partnership with academics. How should we approach leaders and maybe conservative organizations that we're partnering with that are not ready to reimagine things and so we have a dependency. We may be all in. How do you convince a partnership to adopt this new way of doing things? It's challenging, and I think people don't need to buy into all of it in order to make a significant step forward. So you guys can be the kind of the hothouse for pushing the limits on everything, but you only need your partners to take one step forward in the new direction for it to be useful. I think that's been the problem in the past is that it's been very binary. You either totally embrace the radical 21st century disruptive model of pharmaceuticals or you're like back in the 1950s. And I think now, especially in the past to embrace stuff, you had to be a very large pharmaceutical organization. But now the way these tools are developing, whether it's via API or through partnership, actually being small is an advantage. Because you don't need to make the massive R&D investments in these tools, which actually just may not work. You can actually just cherry pick the ones that are the most relevant for your purpose. Yeah. You talked about this a little bit. Do you forecast a period where laws are not up to date for work that can be automated? Who's liable for things where robots, can you point to robots and saying it's their fault? It's sort of becoming the 21st century version of the devil made me do it. And I think where this is going to get a lot more sophisticated in that in the past we would blame the technologists. This is a program that's gone wrong. But increasingly I think the standard of liability is going to be around what if we decide to optimize this model around? Where did the data come from? Did we ask the right questions? You know, and I think that's something that even boards are gonna be ultimately liable for. You know, this idea I'm not a technical person, I'm not expected to understand, is not going to be sufficient. I mean, do you think the board of Southwest should not be held liable for the failure of the IT system? Of course not, because at some point there was a decision, how well are we going to fund this? What are we optimising this system around? And that was a strategic choice, because they chose not to be a hub and spoke model. They chose to be a direct-to-direct model, and that was the basis of this algorithmic failure. So these are management decisions. Maybe we take the last two questions. What do you think, and you alluded to interactions, what do you think the impact of AI will have on in-person interactions and maybe the extension on human mental health? I'm concerned, actually, honestly. I mean, I can't say I'm overly thrilled about this idea of people wearing earpieces plugged into AI supercomputers. I don't think that's necessarily going to create for great conversation. In that, I don't think smartphones have created teenagers with great social skills, right? Yeah. And we've already seen how this is playing out. So I think we are going to accelerate to a point where we're actually going to have to step back and re-look at things like social interactions and what is appropriate, what is etiquette. And if you look at some advanced societies like Japan, they already have very strict rules around how people should use mobile phones and what conditions. They've had to kind of to reconsider that as a society yeah I think this is the last question and there's a lot of others I think has to do with is your book available to listen to and I can yes that's how I read the book but this is interesting Humans need a quick win in order to go all in on change. What first steps can you advise that we take in order to win in an algorithmic future, no matter what our current state is? I think the quick win for most of us is getting some time back. And I think when you experience personally that using these tools is going to win you back and i think when you experience personally that using these tools is going to win you back some more time that you can now spend with your families with your hobbies just for your personal health you know that is a big win you know i think that makes sense and i think that's what it's to some extent the pandemic has taken from us because this is why remote work endless zoom meetings it actually took more taken from us, because this is why remote work, endless Zoom meetings, it actually took more time from us as human beings, which is why I think it was ultimately a false god and why I don't think it's the path forward. | ChatGPT, AI, and the Future of Work | Mike Walsh | Futurist Keynote Speaker | 1,457 | Mike Walsh (Global Futurist) | 20230127 | How will ChatGPT, generative AI, and other breakthrough technologies like AlphaFold transform not only how we work, but also the very nature of entire industries? Hisham Hamadeh, Vice President, Global Head of Data Science at Genmab joins futurist Mike Walsh to discuss how AI is changing human behavior, the transformative impact of automation on decision-making, reinventing culture and leadership models, and the challenges of ethical AI and regulation.
- - - - - -
Follow Mike Walsh's channel for insights on the future of work, digital transformation and reimagining leadership for the age of AI: http://www.youtube.com/user/michaelwalsh?sub_confirmation=1
- - - - - -
For more information about booking Mike for a virtual presentation or workshop: https://www.mike-walsh.com/topics
- - - - - -
Check out Mike's latest book, 'The Algorithmic Leader: How to be smart, when machines are smarter than you'. https://www.amazon.com/Algorithmic-Leader-Smart-Machines-Smarter/dp/1989025331
- - - - - - | 2024-11-15T11:40:37.945515 |
https://www.youtube.com/watch?v=GZrseZGYG8k | Music Good morning everyone and welcome to the workshop on building applications in the AI era under Google Cloud AI for Impact APAC hackathon powered by Hack2Skill. I'm Uzma and I'll be your host for today. So before we start the session, I would first like to give you an overview of the hackathon. AI for Impact Hackathon is a unique opportunity for people to collaborate, innovate, and create groundbreaking solutions using AI to address pressing social challenges across the APAC region. By utilizing Google's GenAI tools, including Gemma, Gemini, Vertex AI, and Gemini Quotasist, you have the chance to create innovative solutions that can drive meaningful change and improve lives. So whether you're a developer, a freelancer, or an IT practitioner based in the APAC region, we encourage you to join and build for a better tomorrow with AI. So this is the timeline for the hackathon. And we are currently in the registration and idea submission phase. And here's everything you need to know. It's pretty simple. The first step is to register yourself. And after that, you'll form a team. The team can be of up to four members. After that, you need to submit your idea. The deadline for registration and idea submission is november 17th which is two days away so please don't miss out by the end of november we'll also announce the initial shortlisted teams who will move on to the prototype phase and then we have the grand finale in the later half of december now let's talk about what's in it for you. Well apart from getting a chance to create an impact we have got some incredible rewards lined up and trust me they are worth the hustle. The first thing that you can expect is there's a cash pool price of twenty thousand US dollars. Second, the real win is the chance to be mentored by some of the best minds at Google Cloud. The top teams of this hackathon will get a chance to receive feedback and mentorship from the Google for Startups team. For all the top teams and valid idea submissions, we've got certificates of recognition as well. Now the winners of the hackathon will also have the chance to present their solutions at a Google event. And apart from all of this, you also are getting a chance to network with like-minded individuals and gain hands-on experience on Google Cloud's AI tools through several workshops. Now we have several themes for you to choose from and these are empowering healthcare workers, creating a greener world, forecasting national disasters tech for better education helping people with disabilities transforming public service solutions for community empowerment and if you have observed any other challenges besides the ones that i just mentioned you can select the open innovation and social good and submit your solution under that now moving on to today's session i am thrilled to have Richie Mishra, a customer engineer at Google with us today to start the workshop. We'll also have a Q&A session in the end. So make sure to type your questions in the comments during the presentation. So without further ado, over to you Richie. Thank you for being here. RICCHIE MISHRAH KHANNAHARANI- Thank you, Uzma. Good morning, everyone. So like Uzma mentioned i am richie i'm a customer engineer with google cloud uh which means that i work with clients uh to address their uh needs on google cloud i advise them on the solutions that uh on how they build these solutions on cloud right um i have been a software developer for about three years and post that I've been with Google for around one and a half years as a customer engineer now. In today's session, I think we will be diving deep into how you can build using Google AI, right? So in the last session, my colleague Aditi had walked you through the Vertex portfolio and how you can use agent builders. And today's session will be more around how you can streamline your everyday operations, how developers can be assisted through our AI. Right. Now, as we move ahead in the session, I will be switching between the slides and demos please let me know if i'm going slow or fast and if i need to control the pace i also have with me my colleague nikita jays wal who is also customer engineer with google cloud she'll be here to answer some of her queries during the session and i'll be taking up the rest of the questions post the session and i'll be taking up the rest of the questions uh post the session uh so i'll just start presenting my screen uh uzma maybe you can confirm once it's visible uh i need to add the slides, Reduzma. Yes, you can just share this. Okay. Let me just quickly share my screen. Let me know if it's visible. It's visible, right? Yes, it's visible. Okay. So today we'll be seeing how we can build applications in the AI era. But before we do that, here's a concise view of what all we'll be covering. The first will be understanding the role of Gemini, which is our multimod it can be used to generate and complete code right so these are the topics so before we dive dive deep into gemini for developers we'll do a quick recap of Vertex just so that you have an understanding of where Gemini for workspace or Gemini code assist sits in the entire Google portfolio. Right. Moving on. Now, Google has been investing significantly in AI. Right. And that is clearly visible from the portfolio that we have in every layer and every stack. We have something to offer uh right so this is our ai portfolio uh just to uh recap uh we have our infrastructure which has all the leading gpus in the market be it nvidia a100 h100s of the world uh or our own tpus using which our own uh models are trained right uh we you have that available through the uh vertex platform uh we also have a layer of model garden which means that there's a marketplace for different models out there while gemini is our most capable llm model with a 2 million multimodal context window uh in case there are some more other open or partner models like llama or claude mistral any of those that suit your use cases and you want to try them out then that would be available through our models marketplace which is the vertex ai model garden we also have a number of tools to support you to build your own models, be it experimentation or be it taking your MLOps workloads from experimentation to production. We have everything to support you, right? Be it with prompting, serving, tuning, or your pipeline monitoring, your feature stores, or notebooks to help you quickly experiment with python codes right on top of that we have our agent builder which is essentially our out of the box rag solution that was covered in detail in the previous session right how you can build search agents or chat agents or recommendation agents just by the click of a few buttons right um and on top of that we have gemini for google cloud which means that in every step of your google cloud journey gemini gemini will be there to assist you right you have any questions you just open up gemini you ask it those questions uh it will respond to you with summarized answers with all the sources cited as to from where it got all this information all grounded on google's public documentation right we have then we then have gemini for google workspace which means that gemini is in the very dna of everything that workspace has to offer right be it gmail slides or docs or sheets everywhere gemini is there to help assist you in every step of the way in the upcoming slides we will be taking a look at these two products primarily gemini for google cloud and gemini for google workspace right and on top of that we have out of the box like AI solutions which are tailored for very specific use cases as you can see contact center AI which is for contact center solutions right which can be used directly off the shelf we have direct AI search for retail media and healthcare as well which have like customized fine-tuned LLMs for very specific use cases in these industries. Right. Now that we have taken a look at what the Vertex AI portfolio, what the Gen AI portfolio is of Google Cloud, we'll take a deep dive into how this is present in Google Cloud and Code Assist and in workspace specifically uh right introduction to gemini uh is so gemini is an ai powered collaborator in this case uh it can assist with uh workspace collaboration be it writing mails be planning uh writing plan documents uh it can help you with security management, which means that you can just go through, I mean, the process of handling security events, cases, going through the logs. All of this has been simplified using Gemini. It's also present for data analytics, databases and developments and operations to make the lives of data analysts or developers easier right we'll have a look at how how it does that in the next few steps so Gemini for workspace has essentially three pillars emails documents and slides now in emails uh uh it's it's often um often while replying to mails uh we get to thinking right that okay uh what should be a formal tone of replying to this mail uh how should how casual should the tone be uh what is the context of the thread that has been going on for so long right like there are long email threads going on and maybe you're not aware of what has happened earlier and you want to summarize context of that and you want to draft a reply while summarizing all of it, right? And you want to polish your tone style. You want to check for grammatical errors rephrase it in a very nice way, right? So all of these we have often thought of this right that okay if there was a tool to help us with all of this it would make our life so much more easier right writing emails and all and doing all these operational tasks they take up so much of our time but if there were a tool to help us with this it would be so great right so that's where gemini comes in it will help you summarize email threads It'll help you write emails up from the scratch. It will help you polish your emails. It'll help you elaborate your emails, shorten your emails, change the tone of your mails, make it does that in the next slides. And similarly in documents, it can help you like create, it can be basically a creative aid for you in creating outlines or some draft paragraphs or summarize some meeting or bulletize some meeting points, points from a meeting right it can also refer to your other documents in that are there in your workspace in your drive and from those documents it can bring in information and put it all in front of you in one single place similarly it can also help us help you craft like a really good slide it can help you generate images for your slides which are relevant summarize the content and slides summarize or bulletize these content or create entire slides with images and text as well so this is an example of how Gemini helps with writing an email as you can see there's a small prompt saying that need meeting re events Wednesday am and we have refined the selected text using and elaborated it and we have a ready-made draft of an email right there in a matter of few seconds in a click of a few buttons right similarly writing documentation we have given a prompt that okay create a high-level planning document for an events application written in JavaScript using so and so for authorization and a fire based data firestore database it has given us a very detailed bulleted point wise documentation along with like placeholders right it gives you interesting chips and placeholders for you to fill in and create like very good documents in a matter of few seconds right similarly creating images for presentation if you see this particular image this has been created using Gemini in slides you can select from a number of art representation forms you can give your own prompts and uh fine tune it to get the sort of images that you want that are relevant to your slides right uh before we move to gemini for security operations i would just want to show a quick demo of uhini in Workspace while we have it open. Uzma, let me know if my entire screen is visible, please. I hope this is visible. Uzma, you there? Yeah, it's visible. We can see your screen. Yeah. Okay. Thank you. So coming to mails, this is a mail thread that I've sent to myself from my other account for demo purposes. So this is a mail that states that okay there has been an invitation to a ai hackathon event uh to which uh there's a reply that okay there are some replies and there's a there's a thread going on basically now let's say i want to reply to this mail right um i can use gemini to keep the conversation like, okay, just tell Gemini that inform them that you have registered for the event. That's it. I'll just write whatever I'm thinking and I'll quickly create and in a matter of few seconds, I should have a mail polished ready to send right yeah so I have this ready now if I want to refine this further I can elaborate this I can also formalize this more. Yeah, it was already formalized, so it did not change it much. But yeah, this is how it helps you draft replies. We also have a Gemini bar here, which will help you sort of summarize mails. So if I just click on summarize this mail, it should give me a very quick summary of whatever has gone on in the past threads. It's given me all points in a very detailed point wise manner so that I don't have to go through the entire thread right similarly if I were to ask what are the action items for me and tell me what I need to do but okay I need to register for the event and has to wait for the event times to be confirmed a confirmed via the via mail after registration right but it has given me the action items along with the sources that okay this is the mail and from here these are your action items right um so that's how uh gemini is integrated into the gmail aspect of it um similarly uh in docs like i said help if i ask it to help me write this is the gemini uh button if i ask it to uh bug tracker it's given me a sample an example bug trapper tracker with priority level assignment description right this can help us like create create planning documents real quick right just given me some uh dummy data so i can just insert this and i can replace this with my own data similarly here also you can ask gemini a summary a summary of the content right it has given me a summary i can ask it to refine this document as a list of key points by using an informal tone i will do that yeah um similarly I can also reference other documents, right? For example, I have a slide, let's say census household, this is some census data that I just picked up from the internet. And if I ask it to summarize the insights from this sheet in bullet points. It should give me a very detailed summary again of the comprehensive so this sheet was essentially had a census data on the number of districts, of districts, households within each sub district and things like that. And it's given me a very comprehensive summary of the, of what exactly the sheet encompasses. Right. So what this helps in is basically when you have a lot of interlinked documents and you want to summarize all of your uh uh uh you know documents in one place from from sheets from slides from uh mail everything and in one place it this will this feature will help you basically do this and you can ask gemini a number of other things but yeah this is a very high level uh view of what all things can be done right similarly in Gemini is also present in slash sheets right here also you can use Gemini to create formulas so let's say I have this table of students with scores in different subjects right I want to create a formula of let's say to find the mean score in the math in the subject maths right Gemini should be able to generate a formula and give it to me and this is a simple example but it can generate like much complex formulas we look up formulas and all of that so it's given me a formula that just average B2 to B20, which is from here to here under maths and I'll have my mean score, right? It can also help me generate insights like from the given sheet, given table, let's say I ask that give me some insights on the score patterns in English, right? And it should be able to give me some meaningful insights. yeah so it's given me like a statistic score that, okay, the average and median score is this, it ranges from 76 to 94. 75th percentile is at 80 and 87.5. And like, overall English scores show a decent distribution. So basically, you can use it for your data analytics purposes to have like, you don't have to be a data analyst uh to be able to make sense of the data that is there on the sheets right you can just ask gemini natural language and you can get the information that you want out of it um similarly for slides if you see uh gemini is present here as well uh and you can just ask it to give you again similar uh summary of the content that is there right give me and automatic generated a summary so this is a three slider uh sample product gtm uh for a headphone variable uh brand which is symbol and it's given a summary of this content that okay that this is the product this is the gtm this is the ideal customer um that for the product and this is something of the content uh if i were to create an image for this presentation uh i can just give in a beach setting. For, you know, photographic purposes or marketing, to create marketing assets and to create presentations real quick. You don't have to go to the internet, search for pictures, and then put it up here. You can just do it directly here and insert it in the slides it's given us some yeah it's given us some pretty good images as you can see right pretty good pictures for a presentation um so yeah so that was google for workspace and let me switch back to my screen and show you will walk you through the slides now I hope we're back on the same slides that we left off at. Uzma, can you confirm? Yes, Richie. That's correct. Similarly, Gemini is there for your security operations as well, which means that it gives you contextual recommendations even before an analyst goes into the details and digs deeper and finds all these events cases without drilling deep into them you Gemini summarizes these cases for you it creates queries for you to automatically run and it also gives you recommendations as to okay so this was the event that happened uh it could be happening due to so and so resources please check your iam permissions uh are they in place or not uh and these are the recommended actions along with links on best practices and tips and tricks right uh so it's there like it's not in one place it's there throughout google cloud along with integration from integration with Chronicle, which is Google Cloud security analytics platform. Right. So it gives a summarized view, a bird's eye view of everything that is happening without you having to dig deep into the cases. Right. So it just guides you step by step. Okay, this is what has happened. These are the recommended actions for you to take and this is the queries that this is the query that you can run to see what has gone wrong where it has gone wrong and things like that similarly Gemini is also there for data analytics the same features are there the same natural language search which means asking questions in natural language getting summarized replies getting quick start templates all of their all of this is there in the databases as well um similarly it has integrated uh with big query also which means that you big queries essentially uh google cloud's data warehousing solution where you can uh like store and process petabytes of data at blazing fast speed right um so there to write sql queries let's say you're not a sql developer so and you want to write some query you you are a business person you want to uh look at some have some uh look at some tables and get some insights out of it so uh you don't need to wait for a developer to come and assist you with writing a sql query for your task right you can just explain it in natural language and gem and i will be able to give it to you uh in in a matter of few seconds i'll just quickly show how that works now this is the gcp console for those of you have not seen this before, I'm in the BigQuery section where I have a dataset called Movie Lens, which has a few tables. It has few tables related to movies, their genre, and more meta tags related to a movie. So if I want to, let's say, find some movies which are in the adventure genre, right? So I'll just, let's say I do not know how to write a sql query so i'll ask gemini right okay uh help write me a query to find movies in the uh adventure genre adventure let's see I have just note that I have not given it any information on which database it needs to look into or which table it needs to look into but it has gone through the data sets that I have it has selected the correct data set which is movie length and within movie length the correct table which is movies right and it's given me a query and this is a simple query but it can write much more complex queries uh than than this right uh in case the movies the more table source is not correct that it has selected you want to look in a different table you can always go and like edit the table source you can find the table using natural language, right? Okay. That, so this is a correct movie. I can just select a different table if the table that was not correct. Right. And just click on apply close and I insert this query. I run, I run my query and I have my answer and just what, like in a matter of five seconds. Right. answer in just what like in a matter of five seconds right uh similarly if there's sql query then and uh you are a new developer uh who is learning sql right or or somebody else has written it and you need to understand it or to get started or take it forward um so gemini gives you the capability to explain that as well so if i just select this and click on explain query it should give me the intent so yeah it's uh given me like all the details of by breaking it down line by line that okay first line is selected does so and so from does this and this this is a conditional statement which does so and so and has given me a very detailed explanation of all of this right um so yeah so that's how uh that's an overview of how uh gemini is present in bigquery as well uh this is again a sample of what i just showed you that uh okay here we have specified the table and we have asked it to like find us that uh a query to find a number of articles in Google with this title. Right. And it's generated this query for us. Similarly, Gemini for development, this was Gemini in all our products, BigQuery databases or in Workspace. This was in Gemini integrated on the products. Now we have Gemini only for developers which is called gemini code assist uh right gemini code assist is something that is present in your ids it is available as an extension uh and it it's present as an assistant to help you uh understand write explain code generate text test cases and more such things and we also have something called gemini cloud assist which is present in the cloud console itself uh if i were just to just show you let me go to the dashboard and if i click on here gemini uh yeah the button here right uh this is basically gemini in cloud code so if i ask it any question question that, OK, I am a starter to Google Cloud. I do not know how to set up a load balancer. So I will just ask it, OK, how to set up a regional load balancer. Right. I'll ask it this question and it should be able to give me like detailed step by step questions of yeah. Setting up regional load balancer. It's given me the types. It told me what I need to do and it's given me a step by step detail along with the resources that. OK, so these are the resources that you can refer to if you want to set up. This is the tutorial and these are the steps. It can also give you commands like what command should I run to set maybe IAM permissions. So basically what I'm trying to say here is that you can ask anything and everything related to Google google cloud doing things in google cloud you want to understand your billing you want to uh open cloud chill and run some code you you can ask anything and everything to gemini and should be able to assist you so it's given me some commands that i can just directly copy paste and run in the cloud chill right yeah so this is gemini in google cloud uh and similarly uh gemini in code assist is present in ide i'll walk you through um how it looks like um in cloud shell right um so i just open there so this is my cloud shell terminal i have already logged into it uh so there are three steps to this the first step is making sure that the Gemini API is enabled if it is enabled you will see a diamond shaped the Gemini logo here on the right bottom corner then you need to log in select a project for which the Gemini API is enabled and then you have your code here and you're ready to get started with Gemini. Right. So this is a sample code of a library of a librarian, library management system. Right. So this is a DAO class. And if I were to, let's say, understand that what this piece of code is right so I'll just ask Gemini that explain this code to me and it will open up this panel on the left hand side and explain to me step by step what yeah so it's given us a detailed explanation that okay like even a very uh non uh i mean not of any experienced coder somebody who is learning uh this newly can also understand it's it's that simple to understand right it's given us very detailed step-by-step explanation of everything that's happening in here in this particular segment of code right um so yeah it's given like point wise in point wise explanations um similarly if i wish to uh you know write or write wish to write some code, let's say I put this in comments that write a method to update librarian contact. I just select this. Select this. And I'll ask gemini to generate code so it will generate some suggestions and yeah it's here so if you can look at the suggested code you can accept it word by word you can accept it completely if you want to accept word by word you just select the arrow button here and you go next to one by one and see if the code is correct uh if not you can just select tab and accept whatever code gemini has generated right uh sometimes uh like most of the times actually it will cite sources for whenever uh it's generating this code. If it's available publicly, any open source documentation or somewhere, it will cite those sources from where it's taking the code. But since this is LLM generated, it is always recommended to check the code that is generated and not just blindly trust it to accept whatever it's giving. So it's always recommended that you look at Gemini's suggestions and you accept, like, have a look at what it has generated and then decide that if this is a code you want to, if this is what you were looking for, right? Similarly, if you are just writing a code, if I were not writing a comment here i was just writing a code let's say um i am just writing some code that okay uh if i just do control enter then it will start generating suggestions again so yeah so while you are in between the code even then you can uh generate suggestions by just pressing ctrl enter and then there we here there's one suggestion but there can be multiple suggestions as well through which you can iterate and see which one you want to uh which one suits your purpose better right um similarly for generating test cases let's say this is the method i want to which one suits your purpose better right similarly for generating test cases let's say this is the method i want to generate test cases for i again select this i select the uh bulb here and i click on generate unit test cases on the left hand side i will see that it's working gemini code assist is working yeah and it has generated a test scenario right or you can choose to insert this copy paste or whatever you want to do with this right so it's generated just to show here it's generated me test cases for this particular method uh right apart from that you can ask your other questions also similar to how uh gemini in cloud assist uh how you were asking questions there you can ask questions here also okay okay um i want to write a code on uh let's say uh to insert some data into my database how do i get started so that's that's there as well um moving on yeah uh so we looked at gemini cloud assist we looked at gemini code assist code assistance is nothing but uh there will be real-time recommendations while you are writing code there will be uh automatic suggestions uh during the code writing process and um for you know uh accepting or declining those suggestions, it's up to you. So this is what I showed right now, Cloud Code. It's an IDE extension. It's available in Visual Code Studio, JetBrains, and the one that I showed was in the Cloud Shell Editor. There are tools to facilitate all development, deployment, everything in Cloud, be it Kubernetes, Cloud Run or Cloud Functions, computing in anywhere you want to run and deploy code. All of that is streamlined through the cloud code operations and integration of Gemini in there. all right this is this again i already showed this uh how do i as you can see here there's a simple prompt that okay how do i use gemini it's nothing uh related to explaining or generating code it's a simple question how do i use gemini here even gemini should be able to help you with that as well um so yeah so when you open just when you open to a cloud shell editor you will see these steps so like you'll see this screen first the one on the left which says walkthroughs and if you just click on code with gemini it will take you to this screen where you can click on connect with google cloud and then your gemini icon will be enabled and you'll go you'll be good to go to you know start coding with gemini these are the three steps that i showed gemini is enabled uh your project step number two your project is selected where you enable gemini and step number three of the you just start chatting with your queries and questions right right um yeah so these are the uh three code assist basic features you can use a do natural language chat you can have answers to your coding questions you can have answers to your best practices questions on cloud um you can have a powered smart actions such as uh test generation code generation code explanation um and many more shortcuts so as you start using you will uh you won't even realize it but uh it starts the uh like showing up in uh while you are writing these codes and while you are doing your tasks uh it automates so much of the tasks that you you won't feel the um like developers would definitely understand when they move from a local development environment to one uh to the cloud code environment uh with gemini enabled you would notice the difference because it's so smooth it automates your tasks so easily that you wouldn't even realize that you're doing these tasks so quickly right um similarly for code assistance, we already saw the demo on how code completion works and how you can generate functions just by adding comments within the lines of code, right? So Gemini supports code generation in 20 plus languages. These are some of the very popular languages languages Java, Python, Ruby, SQL. It understands all of these, it's trained on all of these languages. So it can help you in code generation for all these languages to look at, have a look at all the 20 plus languages. So it's available in the public documentation. So you can take a look at that to see all the 20 languages that are supported. This, we have already seen the demo for this of how you can generate code by selecting the comment lines. And yeah, this is what I was talking about earlier when I said that there can be multiple suggestions. The one that I was generating was a very simple code. So there was only one suggestion, but if you're writing some complex endpoint method services, microservices, you will see more than one suggestions through which you can iterate and then see what to what works best, right. And like I said, with any generative AI out there, generated content is not always correct it is prone to hallucinations uh so it's always always recommended to uh have a human uh in the loop always uh for any llm use cases at all and that is true for gemini with code assist or gemini with cloud assist as well always check your sources always check your references uh before deploying anything to production right um so yeah i showed how you can accept word by word code changes or accept entire suggestions and carefully check all the inserted code and code completion also we took a look at and this is an example of enhanced suggestions which means that i have just written three lines of code and uh gemini basically understood what i am trying to do and it has given its own suggestions right uh just from the first three lines of code uh that i have written right you can just click on control plus enter and it will give you enhanced suggestions on how you can improve your code and that was it uh for today's presentation uh thank you i hope this was useful and we'll take a look at any questions over to you thank you you can see there are several questions that we have in the comment section so i'm just going to go ahead and display them on screen for you to answer. Sure. The first question we have is how can you optimize costs when using Vertex AI for training and inference? Okay. So this is not related to this session, but i'll still answer this question um so there are a number of features that uh vertex uh has uh provided to help you optimize your costs uh one would be uh something called provision throughputs which means that you can uh like reserve your capacity and make sure that uh your uh trainings remain within that right right you can also create there's something called spot like sharing of the vm spot vms and another number of features which allow basically sharing of your compute across projects and across training operations so that way you can optimize costs now i'll not go into a lot of details, but there are a number of things that can be done there. Okay, thank you. The next question we have is, what are the best practices for deploying and monitoring AI models on Vertex AI? So best practices, I think you'll find it in the public documentation, but there are tools to deploy and monitor your ai models in terms of this model registry there are pipelines that you can monitor pipelines for monitoring your jobs right you can look at your drill down on your artifacts and look at your pipelines and optimize by reserving compute and things like that. Yeah. The next question we have is can Gemini be used for creative writing tasks like writing poem scripts or musical pieces? If so, how can we guide the model to generate desired outputs? Okay. So creativity is subjective uh so uh there are a number of ways you can do uh like let's say you have a particular style or tone that you want the output to be in right uh the first step obviously is always to prompt tune uh tune your prompt accordingly how you want it to be but let's say you have a few examples that you would want the model to learn and understand your tone your style then you would then do go for a few short prompting right you give it a few examples of how the input should be how the output looks like and you basically train the model to somewhat work like you want it to, right? In your tone and your style. So it's a prompt tuning, then a few short prompting. And if not, if you want to build your own models, then you can always take the base model and supervise, like fine tune that model through supervised fine tuning or any of the fine tuning methods to have a model which generates output exactly like you want to. Thank you for this. The next question is, how can Vertex AI be integrated with other Google Cloud services like BigQuery, DataFlow and Cloud functions? So Vertex AI is already integrated in all these uh services like i showed in bigquery right uh vertex uh like it's interchangeable like uh gemini is there in bigquery uh bigquery is available as a source uh for creating data stores in vertex so if you take a look at or take a detail look at all the products within vertex ai and within bigquery uh you'll see that it's very tightly integrated like ai is coming to well ai from vertex is coming to bigquery and vice versa uh the one that i showed where gemini is able to generate uh sql queries through natural language prompting that is one such example and a vice versa example would be uh uh in last session i think we looked at how uh agents are created right in vertex um so agents uh can have uh sources data sources on which they are grounded and these data sources can come from big query as well so that's how both of these are intertwined and similarly it's for data flow and cloud functions as well. Thank you. Another question we have is, what are the best practices for deploying AI models to production environments, especially in terms of scalability and reliability? Yeah, so in terms, I'll take this in two parts. First of all, reliability, it's an LLM model so obviously uh you you need to put checks in place you'll need to uh have some sort of methods in place to make sure that it does not hallucinate as much in production uh production environments and that that can be achieved through a number of methods grounding is one method you can either ground it on your own data you can ground it with google search which is again a feature in vertex ai and you can also uh have some sort of like there are a number of controlled generation features and a number of features that come with vertex a that can that in fact customers are using to uh take ai workloads to production right and similarly for scalability, I talked about provision throughput earlier, right? To reserve capacity, to make sure that your resources do not get exhausted and that only you have, that you can be assured that, okay, if this capacity is reserved, then I can definitely scale this without my end users running into quota issues or resource exertion issues. Right. And again, a high level summary. There are a number of best practices that can be put in place. But this is a very high level summary of what customers and clients are doing to take AI workloads to production. Great. I can see most of the questions have been answered by Nikita in the comments. But one last question that we can take is, in the case of users with Chronicle, which metrics can we expect? Which metrics as in, I mean, I do not get the question exactly. Which metrics to expect as in from Gemini or what is it? I think Mario, you could just write in the comments. I need a little more context of what do you mean when you say that which metrics to expect from what? One more question I can see here is what factors should be considered when choosing between Gemini and Vertex-CRA models for a specific project? So Disha, this is completely use case dependent. You can try and test to experiment with different models. We have a number of tools in Vertex to do that. You can choose different models. You can compare the outputs from different models. We have a number of tools in Vertex to do that. You can choose different models. You can compare the outputs from these models. You can, you know, there are tools which allow you to compare two models and see which one suits your use case best. So choosing, so yeah, that would be the answer that choosing a model between Gemini and other models is completely use case dependent if you have a uh multi-modal uh let's say uh multi-modal even within gemini there are two different models right one is a pro model another is a flash model uh so even choosing between those two can be a decision uh if for you uh for your use case latency is a concern uh you do not uh bother a lot about the quality of the output but rather you want the output to be generated like blazing fast speed right uh then you would go for a gemini flash model over a pro model uh if you have a longer context window 2 million context window requirement for a multi-modal use case you want to upload a two hours of video and do some sort of processing on top of that uh then you would go for a gemini model because no other model out there has a two million context window to support that sort of a input size right um so how you choose between gemini models it's a similar choice between other open source models as well yeah so i can see we still have questions coming in. One more question that we have is, how can we deploy CNN ML models with the help of Vertex AI API and Google Cloud Platform? I can send the documentation on deploying models. I mean, it's a step-by-step procedure. I wouldn't be able to summarize here in one line answer. There's documentation for that i'll just share that real quick just give me a minute Yeah, I'll just paste it here in the chat window. Yeah. Okay, I'll just share this link in the comments. Yeah. So all of this is available in public for best practices, questions, and all these questions which are asking the steps for deploying a model and things like that. There is public documentation, very good documentation available. You just need to go to Google and if you just search for it, you'll find it in our official Google Cloud documentation pages. All best practices and the steps to do all of these things. We can take one more question. How can you use BigQuery enhanced with Gemini's advanced analytics and AI to study customer behavior and sentiment through new marketing? Okay. So to study customer behavior, I'm just going to take an example here because this is a very broad question. And I mean, it's not possible to answer in one line again here. But let's say you have some customer data. And there are a number of tables, there's some data and you want to build let's say visualization charts or have some natural language insights out of it right so we have our looker dashboard on top of bigquery which has which helps you create these visualizations and what you can do is gemini is also integrated into looker i haven't demoed that but gemini is also integrated into looker visualization dashboard which means that if you ask it questions that okay you need to find me customers in this what are different customer segments possible uh or let's say what if my customers in this segment in this demographic i would like to understand their behavior help me generate some insights on that if you just ask it in natural language it will be able to help you give summarized answers in natural language again and help you uh create this charts also if you just give natural language inputs right that is one and otherwise in bigquery also you can use gemini to write these auto ml uh queries AutoML, you know, SQL queries and AutoML queries to using Gemini. You can write these queries to build these models for sentiment analysis or customer behavior analysis. Okay. Thank you so much, Shrechi. I think these are the questions that we have. The rest have been answered by Nikita in the comments. Thank you so much, Shrechi. I think these are the questions that we have. The rest have been answered by Nikita in the comments. Yeah. So to everyone watching, this is a QR code that will take you to a feedback form. And I request you to please fill the form so that we can better understand your perspective about the session today. So I'm just going to leave it here for a bit. You can just scan this and it'll take you to the feedback form now thank you everyone for joining us today and a big big thank you to you richie for such an engaging session um it was pretty interactive and we have so many questions as you can see in the comments and to others who are watching remember that if you have any further questions or if you need any type of support please feel free to reach out to us at ai4impactsupport at hacktoskill.com we are also dropping the comment dropping the email in a comment so for your reference best of luck everyone with your hackathon journey and we can't wait for the amazing solutions that you come up with. And thank you again Richie. Thank you for being here, for taking up the time. No worries. Thank you. | Workshop-2 (English) | AI for Impact APAC Hackathon 2024 by Google Cloud | 3,363 | Hack2skill | 20241116 | Welcome to the 2nd workshop of AI for Impact APAC Hackathon 2024.
The AI for Impact Hackathon, presented by Google Cloud and powered by Hack2skill, is a unique opportunity to leverage the transformative power of AI to address pressing social challenges across the APAC region. By utilizing Google’s GenAI tools, including Gemma, Gemini, Vertex AI, and Gemini Code Assist, participants will have the chance to create innovative solutions that can drive meaningful change and improve lives. By harnessing the potential of Google’s cutting-edge technologies, develop solutions that address a wide range of societal issues, from healthcare and education to sustainability and accessibility.
To know more and participate, visit: https://vision.hack2skill.com/event/aiforimpact_reg?utm_source=H2Ssocial&utm_medium=H2Ssocial
Register and submit your AI idea before 17th November, 2024. | 2024-11-19T11:12:34.952863 |
https://www.youtube.com/watch?v=jYIEqju84a8 | Hey! Hey! Hey! Hey! Hey! Hey! Hey! Hi everyone, good morning. Welcome to the workshop on Gemini and Vertex AI agent builder under Google Cloud AI for Impact, APAC hackathon, powered by Google Cloud and Hack for Skill. I am Shweta and I will be your host for today. Okay, so before we start with the session, I would first like to give you a brief overview of what this hackathon is all about, including the timeline, report section, themes, and other things. So AI for Impact Hackathon is a unique opportunity to collaborate, innovate, and create groundbreaking solutions using Google Cloud's Gen AI tools to address pressing social challenges across the APAC region. So you have various tools to explore, including Gemma, Gemini, Vertex AI, and Gemini Code Assist. And you have the chance to create innovative solutions that can drive actual meaningful change and improve the lives of various communities living in the APAC region. So whether you are a developer, freelancer, IT practitioner or maybe an AI based or tech based startup solely based out of APAC region, we encourage you to join and build for a better tomorrow with AI. So with respect to the timeline, we are currently in the registration and idea submission phase. And here's what you need to know in this current phase. So first step is register yourself. That is going to be the first and very easy step. Second is form a team. So here you can also participate solely because if you want you can also form a team of up to four members then the next step is that you have to submit your idea and the deadline for both registration as well as idea submission is november 17 so make sure you do not miss out as we only have a week left. And as of now, for idea submission, you can just go to your dashboard and answer the questions that are written there and only have to submit your idea and not the prototype. Then the shortlisted teams will be moving on to the prototype phase where they have to integrate Google Clouds, Gen AI tools and everything, and develop their prototype. And by the end of November, we'll announce the initial shortlisted tips. And we are going to have the grand finale in the later half of this episode. So now let's talk about what's in it for you. What are the rewards and benefits? Well, apart from getting a chance to create an impact we have got some incredible rewards lined up and trust me they are worth the hassle so number one is cash pool price worth us dollar 20 000 and second is so cash pool price is one thing but i believe the real win here is the chance to be mentored by some of the best minds at google clubs so we'll be conducting several workshops like this one in next week as well and apart from that the top shortlisted teams of this cyclotron will also get a chance to receive feedback and mentorship from google for startups team so you know that is something that that is there as a real win for all the top teams apart from that for all the valid idea, you have got certificates and credits for the Google Cloud platform. The winners will also get a chance to present their solutions at the next big Google event. So earlier this year as well, Google Cloud conducted the GNI hackathons and the winners got a chance to present their solution at the Google I.O. Connect event happening in different cities. So similarly, you will also get exclusive passes for Google events for this hackathon as well. And last, you are getting an opportunity to network with like-minded individuals from across the APAC region and gain hands-on experience on Google Cloud's AI tools through these workshops. So this hackathon is basically focused on creating a social impact. So under that we have several themes for you, which you can choose one and submit your idea, starting from empowering healthcare workers, forecasting natural disaster, creating a greener world, tech for better education, helping people with disabilities, transforming public service and solutions for community empowerment. And if you think that you have observed any other social challenge that you want to solve for, then you can select the open innovation and social good problem statement and you can submit your idea under that. So now moving on to today's workshop, I am thrilled to have Aditi Jain, Customer Engineer at Google with us to take forward the workshop. We'll also have a Q&A session in the end. So make sure that you type out all your queries and all your questions in the comments section during the presentation. And after that, we will take a look. So without further ado, let's welcome Aditi. ADITI Jainaini- Thank you. Thank you so much, Veda. Hey, everyone. Am I audible? Just a quick check. ADITI Jainaini- Yes, you're audible. ado let's welcome everything thank you thank you so much for that hey everyone am i audible just a quick check uh yes you're all perfect uh so i'll probably take over and start sharing my screen and basically get this started give me a second i think my screen should be visible now uh shweta, quick check. Are you able to see the screen? Perfect. Hey everyone, myself Aditi. I am a custom engineer in Google, which means I'm basically a solution cloud, a pre-sale solution architect working with our sales teams here with our customers to help them solve multiple problems across Google Cloud, whether it's AI, data, infra, etc. I've been part of Google for almost two and a half years now in total, and been working in the overall Cloud industry for almost nine years now in total. So moving forward, this is the agenda for today. Very quickly. We'll start with talking about Vertex AI, our foundation AI ML offerings in GCP. Then we'll quickly move forward to do a quick little deep dive around Vertex AI search and conversation, how you can use Vertex AI search and conversation on how easy it is to get started, what you can build and towards the end, we'll also try and get you a very quick walkthrough of the platform on GCP console. Along with that, we have built a very small, quick chatbot for the demo which we have built using Vertex AI Search and Conversation so that at least we'll be able to get you a gist of how exactly you can create these chatbots and, and of course, just easily mingle with your ideas that you have been thinking about. Then we'll move forward and talk about Gemini, our Gemini offerings, how you can get started with Gemini, what are the ways you can simply just, you know, either use Gemini for fine tuning, basically pre-trained Gemini models on GCP, and how exactly is it you can do it on the GCP consoles towards the end. So this is basically the agenda for today. Now, let's move forward to the very first topic of today, which is setting the foundation on Vertex AI. Now, before I talk about Vertex AI or our solution areas on GCP or the platform on GCP, I would first want to spend like quick two minutes on different consumer and enterprise perspective on these use cases, right? And how Google has focused on developing these different solutions for different user needs. So on the left side, as you see on the slide, we have consumers and enthusiasts. So consumer as in us using Gemini or maybe generative AI for our own personal use. Maybe you want to plan a trip. You want to basically, you know, plan a neighborhood block party. You want to write a novel, just want to get started on the idea how to get started, right? So basically brainstorming from an end user perspective, who's not very much into AI, just want to use AI for getting their work done a little bit more impactfully. Of course, enthusiasts, maybe students who wants to explore AI, so they can definitely get started with our offerings like Google AI Studio, start developing on top of it and start exploring our offerings on that perspective as well. This is one side of users which we see in the AI platform. And then second is enterprises and developers like you, who are either working with big organization and trying to build some solution for them, or maybe yourself, and you're trying to basically see how enterprises will basically fit into this overall AI ML solution areas. So if we specifically talk about enterprises, their needs are very different from the consumer use cases, right? So enterprises probably would want to understand and see how exactly they can use these AI ML solutions to build something from scratch for themselves, or maybe their end customers if they're into service based background, or maybe they want to integrate these AI ML solutions, maybe a chatbot right into their existing application. Similarly, they would also want to keep a cost into the mind as well, which is primarily a very big decision point for any of the enterprises who are working on AI, right? So these are some of the areas where enterprise are now exploring AI ML as well. If I talk about products perspective, so from consumer perspective, we have products like Gemini app, which is publicly available. So you can just simply go on Google and type Gemini.google.com. And you will be simply able to access Gemini public application for consumer perspective, right? And hence, it's a publicly open application. Hence, from a security standpoint, not very fit for a firm and organization or enterprises. Then if I talk about solutions on enterprise side, we have, of course, the talk of the town for today, which is Vertex AI platform, which has everything and anything that Google Cloud has to offer from a model perspective, whether these are foundation models that Google has built or any third party OSS models that we already have launched or kept onto our platform. Or maybe you just simply want to get started on pre-training and fine tuning your own models on platform. Right. So you can do all of this Gemini models, whether you talk about Gemini 1.0, 1.5 Pro, Gemini Flash, etc. All of them are also available on Vertex AI and on GCP, so that you can just simply get started by simply pre-tuning our models, and then start using them in your own use cases and applications. Now, moving forward, now that we've briefly talked about different AI users, what are the solutions that would make sense for them from Google GCP perspective? Let's just shift the focus a little bit more on the enterprise needs and we'll do here as well for Vertex AI. So Vertex AI, if you ask me what Vertex AI is, Vertex AI is an AI ML platform that has anything and everything that you can think about from an AI ML perspective. If you want to simply launch and run your own custom model, you can do that on Vertex AI. If you want to use pre existing models from Google, which we call foundation models like Chirp, Gemini, Gemma, etc. You can use them on Vertex AI as well. And of course, if you want to, in addition to Google AI Studio and Vertex, we already have integrated Gemini in our powerful platform like Collab. If anybody of you have been using Collab, then Collab Enterprises and enhanced version of Collab, which is also available on Google Cloud. Firebase, if you're into building mobile application or Android applications, you can easily use Firebase for maintaining and managing. It's a very known platform. And of course, Flutter. On top of that, Vertex AI also integrates via an SDK and offers Python integration to let you automate ML workflows as well. If you want to basically create ML automation, you can just simply design those workflow using Python on Vertex AI SDK. Then apart from this, we have Vertex AI Workbench, which is a Jupyter notebook-based development environment available on GCP as well, which you can use anybody who's into AI ML or who's an AI ML developer, you can just simply use to create an entire machine learning workflow on the workbench and then get started. So with these different integrations of Vertex AI within Google Cloud solutions and even with solutions like Flutter and Fireways and Collag, any developer who's more into developing application, they can be an AI developer. So that's basically a nutshell of how exactly we have built and focused on Vertex AI to make it a little bit more accessible on Vertex. Also a quick plugin, if you feel like I'm going too slow or I'm going too fast, just drop a ping and then we'll try to adjust the speed as well. Now let's take a look at various offerings from a Gen AI perspective on GCP as well. So the idea is that we as Google understand that generative AI business value would require a comprehensive platform and not just a single solution. Hence, we have built the slide that you currently see are the solutions that we have built focusing on those business problems. So we have built solutions to help basically meet different business needs and different user needs. So if I start from the bottom on the screen, which you see here, we have Google Cloud AI infrastructure. So anybody who is coming from an AI background, we know that to be able to run any AI-based LLM model or a simple model, we need GPUs to run them a little bit more effectively from a performance POV. Hence, we would need the AI infrastructure to be able to run these models. So we offer that as well. So GPUs like H100s, A100s, we have our own native GPUs, T4, etc. are available on GCP as a platform for anybody to get started on GCP. You just simply spin up either a GKE container or a virtual machine, which has GPU enabled in them and then start using and running these models on GCP platform. Now, moving forward a step ahead, which is basically this layer. This is where our Vertex AI platform comes into. It has three major dissections if you want to call it. So the very first one is Vertex AI Model Garden. So Vertex AI Model Garden is basically a platform basically which is which combines diverse foundation models Google owned. And then of course we have some third party and open source models also available on Vertex AI Model Garden. So this is different models and either start building on top of them or just basically integrate these models via APIs into their own applications, right? So Model Garden would be a way to go. Then we have something called Vertex AI Model Builder. So if you want to, you know, prompt tune, you want to serve a model, you want to fine tune a model performance, if you want to access notebooks, if you want to train a model. Anything from a AI ML pipeline perspective, monitoring perspective, you would use a Vertex AI Model Builder to able to perform all of these tasks, whether it's workflows or basically automating the overall machine learning pipeline. This is where Vertex AI Model Builder will be able to help you with. Then moving forward a step ahead, then we have Vertex AI Agent Builder. This is what we will be exploring today specifically. So Vertex AI Agent Builder is a platform which can help you build out of the box custom agents. Two basic themes where you can build around your chatbots on. One is search. So if you want to build a search-based model, all you have to do is just simply integrate the backend data source with Vertex AI Search. And then just spin up a frontend where you can access this search from. And then you will be able to ask questions and search through multiple data using natural language prompts through search. And similarly, for chatbots, right, a normal use case chatbot where you want to build a calling chatbot, a chat conversation for retail customers or many examples like healthcare, a simple chatbot where you want to build a calling chatbot or chat conversation for retail customers or many examples like healthcare a simple chat chatbot on a website right or maybe internal chatbot for multiple use cases you can build that on vertex ai agent builder as well now going a little bit top on this so as i was mentioning earlier we have gemini as a model as well so what we have done is we have integrated Gemini in a lot of our own Google Cloud solution as well. So if you are somebody who have already been using BigQuery, for an example, you might have noticed that we have integrated Gemini with BigQuery as a product as well. So now you can just simply use Gemini on a chat prompt and then get the SQL query to run on BigQuery. Just simply use natural language to write what exactly you want to retrieve out of BigQuery, and Gemini will be able to help you write the SQL version of it. As simple as you start using these products. Similarly, we have integrated Gemini in our Google Workspace product as well. So customers who are specifically using Google workspace for their organization communication needs, we have integrated Gemini and Google workspace, which I think the followed workshop will probably cover in a little bit deeper. But just to give you a brief, Gemini for Google workspace is integrated in all of our workspace applications, which mean if you're somebody who has been using workspace, it has integrated it has integrated in Gmail, Google Docs, Google Slides, Sheets, even AppSheet as a platform, and even Google Drive, which from a consumer end user perspective, makes the life a little bit more easier. For example, writing emails, writing comprehensive document, analyzing a lot of amount of data in Sheets, which is generally not easy for a human eye to just capture patterns. Instead, you can just use Gemini, ask questions based on your data in sheets. So for example, let's say you have 1000s and 1000s of rows in the sheet and you want to understand a pattern in that sheet data, you can just simply use the integration of Gemini in Google Sheet and ask a question on that data, maybe analyzing a pattern, maybe asking one particular information. And Gemini will be able to help you do that very easily. Right. And then third is, of course, building your own generative AI powered agents, which is your own chatbots that you can also do with our Vertex AI and Gemini platform. The top layer that you see here are remaining solutions like contact center AI, healthcare data engine, search for retail media healthcare. Apart from this, we have solutions like document AI, right? So all of these solutions are out of the box, like speech to text, text to speech. So if you're somebody who has been working on Google Cloud products, then you probably know speech to text and text to speech is an oldest product that we have actually been working with. So all of these are also part of our overall AI solution platform. Now that we have looked at the very high level overview of Vertex AI and Google Cloud AI platform, let's do a quick deep dive around how do we get started with specifically on Vertex AI search and conversation, which is building agents or chatbots for search or normal conversation based use cases. Now, basically, GenApp or when we call agent builder allows users or developers to build very low code and Gen AI enabled app on two types. It's very low code. When I say low code, it's actually a three or four step process to just simply build a chatbot and which really require very low or almost none of the coding skills to be able to build these chatbots and get started with these chatbots on platform. One use case could be building enterprise search applications, which have a lot of data at the back end attached to it. And then you can just simply use to build those enterprise search apps to build a search engine, maybe for your own organization or maybe for end customers. So the good part is attaching the data with enterprise search. You can easily work with data like unstructured data. For example, let's say you have a lot of PDF files, text files. You can simply attach them to Vertex AI search. Structured data like JSON or maybe BigQuery rows, that also is available as a data source and even website data. So if you would simply want to attach your website and build a search chatbot for a website, you can just simply add your website's data source and Google's tool will automatically use indexing for the site and will be able to help you answer the questions as well. So once you build an application using search and attaching multiple data sources, your end users will be able to search that data. And not just on carefully selected keywords, they can even use natural language, which is simply asking questions how we converse and ask those queries in the chatbot. And actually, InGen will be able to process and will be able to return the answer to that query with the linked resources and also summaries and conclusions. So think about use cases. If I particularly talk about use cases, think about use cases where let's say you're working for a healthcare organization as an example. And when we talk about healthcare organization, they generally deal with a lot of amount of data, for example let's say you you have built an application for maybe doctors because they need to get access to a lot of patient information very handily right so what you can do is if they have maintained uh their patient files in some sorts of format you can just simply connect that data source it can be connected with google cloud storage it can as i mentioned. It can be connected with Google Cloud Storage. It can, as I mentioned earlier, it can be directly connected with files as well, which are stored in Google Cloud Storage. And once you connect them, you can just simply ask questions. So for example, if I am a doctor and I want to see, okay, what was this patient's history when I last met them? I can just simply search, putting it in the search chatbot that I have created and just get the answer in a natural language problem. Instead of going back, search for the patient information and then opening a particular file. So this is one use case, very, very random use case. They can be much more and better use cases that you can start building using our search AI and conversation. So this was the search use case that I just mentioned about. Another use case of Vertex AI agent builder could be building information chatbots applications, right? Which is primarily focused on providing intelligent chatbot, which can answer questions related to a specific information, right? Which can be information about your business. If you're building a chatbot for your website, it can be a chatbot created internally for your internal processes. If a new employee joins, for example, and they want to look for very particular information, they can just use that chatbot to find the information in the organization, right? So imagine taking all of your product documentation and helpdesk material and building a chatbot that allows your end users to very quickly get answers without having to scan dozens and hundreds of pages. As simple as that. So this is how Vertex AI agent builder can actually help you build grounded data stores and then help build AI applications. So as you can see on the slide as well, it's very simple three or four step process to get started with Vertex AI Agent Builder. You just simply create a data store first, which is attaching your source with Vertex AI Agent Builder to serve as a source of truth for your AI application, whether it's a search-focused application or information bot-focused application. And then second step is simply loading the data store from multiple sources. Could be BigQuery, could be cloud storage, even Google Drive for that matter. And then simply just start using it. And then once Vertex AI simply process all of this data for content and meaning, your app will be ready. So you can create search applications, recommendations, application, chatbots, etc. And then just simply start. also building and using Vertex AI agent builder for multiple use cases in the market. Now moving forward, the current page that you see, when we worked on Vertex AI agent builder, we realized it's very important for our customers who are using these solutions to build their own platform to be able to create grounded generation of responses, which means the response that Vertex AI is coming back or the agent builder is coming back with, it should be grounded in nature, which means it should be relevant to the use case that the customer is trying to build on. Hence, we build the solution to make sure that you're able to attach your own data to basically meet where your need is. So Vertex AI Search will be a great option if you want an end-to-end, out-of-the-box customizable solution for search and answer generation on a variety of corpus types, depending on your use case, of course. And if you want to build your own solution, you just need more control. And we totally understand that, right? Which is why we have enabled building your own rag with the AI chart builder as well. So you can just simply by integrating your own rags, you'll be able to simply get more accurate and reliable responses from Vertex AI agent builder whenever you build your application and attach your data sources to it. So as you can see this particular slide from processing and annotating, embedding, indexing even and retrieving all of this task is done by Vertex AI. So you don't have to manually do an indexing to be able to better find, to basically to be able to better the search response and performance. Indexing is automatically taken care by Vertex AI as a platform. All you have to do is just attach your data to it. Of course, start working towards it in terms of once Vertex AI process the data and you are good to go in terms of building these chartboards. Very quick view, this slide just very simply show you, and I will be showing you this in a Google Cloud console as well, but I've still attached the screenshot for a very quick reference. So as I was talking about, very simple three-step process of getting started with building these agents on GCP platform. First, you connect your source. So as I was talking earlier, we have multiple source available, right? Whether you want to attach a website URL, Vertex AI will automatically index the URL and will be able to get you a faster response. BigQuery as a data source, cloud storage, if you have a lot of files, right? For example, let's say you want to build a chatbot or search chatbot for your internal employees to be able to find relevant information on your organization, policies, etc. So you can just simply upload all of the policies into a cloud storage bucket, for an example, and just integrate that, add that as a data store for Vertex AI search and then just get started. Healthcare API, Google Drive, or any other APIs which you can manually integrate or import data from by calling those APIs as well. Then once the data store is attached, you can just simply start building the application. As I mentioned earlier, you can build search, chat, or even recommendation or normal agent-based chatbot applications as well. That's pretty much very easy to get started with our Vertex AI chat and search conversation. Now that we have very quickly spend some time on Vertex AI search and conversation, let's now move forward towards understanding Gemini's overall plethora of offerings on GCP. Now, before I talk about different models available, different Gemini versions available on GCP, again, connecting the dots from what I was talking about earlier, which is we have focused our AI MLB solutions from a different user needs perspective. So if I talk about consumers, again, Gemini models are available as an app or web application to our consumers directly in a public availability. Then of course we have Gemini is available in Google apps. If you look at Google search application, you get Gemini responses as well. Then of course, we have already integrated Gemini in Gmail, Docs, etc. This is where we are focusing on making our AI and ML solutions available to our consumer-based users. Then of course, we have Gemini available for developers. Developers who specifically want to use Gemini API, either through Vertex AI Platform, Vertex AI Studio, and they want to basically fine-tune their responses, want to use Gemini API into their own solution that they have built, they can just very easily get started with that using in simple APIs. Then of course, the third level of users, which is businesses and enterprises. For them, we have of course, build Gemini for Workspace and of course, Gemini for Google Cloud. Gemini models available on Vertex AI as a platform, and then of course, Gemini for Google Workspace, where you can start using Gemini in all of our offerings. This is Gemini in all of our offerings. So this is Gemini for you. Gemini is our most capable and generalized multi-model that Google has ever built. And it has been a really great effort by our teams across Google, including our DeepMind team, which is very specifically focused on building these solutions, and of course, Google Research. Now, Gemini is integrated in our most of Google Cloud products and services, as I was mentioning earlier. So example, as I mentioned earlier, could be BigQuery for SQL assistance, conversational assistant through a chat interface cloud console can also even help so if anybody of you have used google cloud console then then you know that we have already built an integrated gemini in the console as well through a conversational chatbot assistance right where you can just simply get the help on the console with anything related to Google Cloud, right? So Gemini is now available from being an assistant to an AI powered code assistant even, right? So we have something called Gemini code assistance also available. And we have already integrated the code assistance with Gemini into multiple IDs platforms as well. So if you're somebody who have been using Visual Studio code, then you will be able to integrate simply and access Gemini's code assistant in Visual Studio as well. We have already indicated this within JetBrains IDE. We have already available in our own Cloud workstations, Cloud Shell. If you're somebody who already have used Google Cloud, then you're definitely a little bit more known about Google Cloud Shell, Cloud Spanner, and BigQuery. So all of these integrations are already done. And of course, we are working more and more to enhance our offerings from a Gemini POV. This slide basically will give you a very high level overview of Model Garden. So Model Garden again is an offering within Vertex AI as a platform where you can get access to all open sourced, not all actually, a lot of open source Google owned foundation models as well as partner models available through Model Garden. So, of course, we have our own foundation models also available on Model Garden, which you see on the left side, including Gemini and the latest version, which is Gemini 1.5 Pro, Gemini Flash, Gemini Ultra, and Imagine, Kodi, Chirp, all of these models, foundation models that Google has built, all of them you'll be able to access on Model Garden. Any open source models that you might want to use for an example, Meta's Llama 2, Mistral, Ditto, Gemma models, which we have created, and we have made Gemma as an open source, so you can just simply start using Gemma as well. And then any third-party models like Anthropic, Clouds 2, and of course, we are adding more and more model support from a third party perspective in the platform as we move forward just to get you an understanding of how these models are like which models are available this is what the categorization on a very high level looks like on model garden right so at the bottom you will see all of our partner and open ecosystem model, which includes Lama 2, Falcon, etc. Then we have Google domain-specific models that we have built, which is MedLM specifically built for life sciences and healthcare, a large language model which has been trained on medical data. So it understand medical terminologies, etc. to be able to better perform when it comes to medical data. And then SecPalm specifically built and focused across cybersecurity. Then we have Google task specific model. So I think these are the older oldest AIML based offerings available. If you are somebody who already know Google's platform, then you totally you might have worked with these solution areas before as well. So we have speech to text, text to speech, of course, then we have natural language model. We have translation, doc AI, OCR, etc. So all of these models are also available. Then we built our foundation model, which I was talking about earlier, Palm to Imagine. Imagine is basically our image to text and text to image based model. Chirp specifically made for audio to text based audio to text based model processing specifically. Kodi is built for, of course, your code assistance and embeddings as well. And then on top of that, we have Gemini foundation model. So any customer who doesn't want a single model for different different tasks, we have made Gemini. So as you know, Gemini can work and can process data in a form of image, audio, video, etc. So Gemini is a multi hence we call that multi multi model. And we have different versions of Gemini also available as as we speak now. So in a total of available models on Gemini, we have somewhere around 130 plus enterprise ready foundation models in Model Garden available as of today. And of course, we are working on adding more and more as we move forward. A very quick overview of getting started with Model Garden. All you have to do is just simply search Model Garden on GCP Cloud Console. Once you do that, you'll be able to see all of our models which are available here. Of course, starting with Google's foundation models on top, and then of course, our different models from third-party and open-sourced, which are at the bottom. Of course, you have two options whenever it comes to accessing and deploying a model or running a model. Either you can run a model on Vertex AI platform, which is a managed platform just like any managed services offering, helps you streamline the overall operation tasks of running and managing these models. Then of course, if you don't want to go with the platform and want to control and manage your models and running these models yourselves, then you can just simply get started on compute engine or Kubernetes engine and start using and running these model on Google Cloud. With that, I'm through with my content and then I'll probably move to the demo and then we'll take the questions towards the end. Give me one second, I'm going to change my screen. Let me just refresh this. So this is a chatbot that we have built on GCP as a platform, on GCP itself. It's a demo chatbot that we have built, focused on the retail use case, which I was talking about earlier. So if you want to build a simple retail-based infobot, which gives your end user some information about maybe on your website, internal, some processes. This is a very simple example that we have built. After I show you the demo of this end chatbot, I'll quickly show you around the platform as well on GCP. So I'm going to start with the chatbot. As I mentioned, it's a retail-focused chatbot. So it basically has all the information about retail store called Symbol Bikes, of course, hypothetical store. And as you see on the left side, if you see right, we already have attached these documents via our cloud storage as a source for this chatbot to be able to answer those questions. So as you can see, it can answer questions on return policy. And generally, it's totally up to you whether you want to show this or not, but we have just kept this for this demo purposes, so that at least you know what sort of data sources we have connected with this chatbot, right? So we have connected a document for return policies, services, any store information or frequently asked questions. So any customer who might anybody who might want to ask questions, they can just get answers across all of these different topics. So let's say I want to ask the chatbot, are there any stores available in Miami? So let's see the chatbot comes back with here. Since I already had added a data store information about my stores and locations, the chatbot was able to come back with a response because we had a store in Miami. Now, let's say I want to ask a very process-related question. that I purchased before, can I return to? Yes. Since I'm asking for a return, the childbot automatically came back with a return policy attached to it as well, which is it offers a 30-day return of policy on all items. You can just simply return the item to the store where you purchased it or just basically ship it back to the right store. And these chatbots that you have created are similar to the chatbot that you probably see on multiple websites as well. Earlier, if you talk about creating these chatbots from scratch with the use task, because you had to code and then create a web interface, et cetera, but now simply you just put a front-end, maybe a front-end running on an app engine, or maybe a front-end running on a compute virtual machine, depending on how quickly you want to build it. And then on the back end, you can just simply create or attach a data store and then just simply get started with it. So as many data you want to attach to it and the chatbot will be able to response with more and more information. So this is a front end, very simple chatbot. And I'll just quickly show you on GCP as well, a very quick process in terms of how easy is it to get started with building agents in GCP. Give me one second. I just searched for Vertex AI agent builder. You can see it's currently loading, but you can see all the apps that we have already built. If you want to create a new app, you can just simply get started with creating an application. I'll just quickly show you how easy is it to get started with building these. As I was showing earlier, you can select the type of the application that you want to build, whether it's a search application chart, recommendation. Let's say we want to build a search application, generic, media focused or healthcare focused. The reason we have segregated this is because we understand that healthcare-based customers have FHIR and NorvHIR data and then the understanding of these terms would be different. Hence, different configurations and content specifically selected. Let's say we want to build a generic chatbot. You can select the enterprise edition feature, which are basically emit search website search etc which lets you search for websites as well advanced these are something which we have added very recently which is adding advanced feature to our bots as well you can give give give a name to the chatbot you want to give a company name you can give the company name as well. Then you can select where you want to deploy. As of today, you can either create a global chatbot. Basically, if you are coming from an organization or you're building a solution for an organization who have users across the globe, then you can pick global which will make sure that we have endpoints in the back end of this chatbot into multiple locations, or just restrict to US or EU as well. At this point in time, these are the only regions where we have the endpoints available. Let's say I just picked up US as a region. The second step is you just simply attach data stores. So as I was talking earlier, we are adding more and more data stores as we speak today with Vertex AI Agent Builder. So you can pick from all of these options wherever you want to add data sources on. So let's say you want to build a cloud storage. You want to attach data storage as a cloud storage. Then you can just simply select the data type, structure, unstructured, media data, etc. And then ingestion, whether the data should be ingested in Vertix AI once, which is processing of content, or periodic, which is basically a preview function. It's not currently GA yet. And then you can give the path of the folder and file. Once you give that, you just click on continue and you have your app configured. And all you have to do is just build a front end and then point this app via an API call to basically make sure that you're able to access this. These are the current data stores that we have built. If you want to basically monitor your application performance, you can just get all of your data inbuilt directly from here. By traffic, latency of your chatbot, any error rate which is happening, you can just simply see the API matrix here as well, which is how many API calls are being made, etc., from your front end or any of the website that you might have put the chatbot on. These integrations are available, and this is how easy is it to get started with Vertex AI chat builder. Question, Chweta, am I still audible? I think I'm just getting a prompt that I've lost connection. If anybody can give me thumbs up on the chat as well, that would work. Yes, you're perfectly audible. Shweta, am I audible? Yes, you're audible. Okay, perfect. Thank you. Thank you so much. This was a demo on the Vertex AI agent builder. Very quickly, we have also built another demo on Vertex AI search. I just quickly go over that and then we'll, of course, move forward to the Q&A piece. Just to quickly show you how exactly we have built this. The previous example was for our Infobot chat builder, right, which is simply asking question about website or anything that we want the chatbot to answer questions on. This is specifically built for a biomedical literature, specifically for a healthcare-based use case, right? So coming back to the example which I was giving earlier, which is let's say you're working for a doctor or let's say you have built an application for doctors who want information on their patients, right? So you can build a search-based application. And this is how this application that we have built, this is how its architecture looks like. So we, of course, have configured access control on this and then the for application layer. And for the front end, we have configured App Engine. On the back end, we have integrated enterprise search where the document the big query metadata search is happening right so very simple straightforward architecture nothing complex uh as such i'll quickly move forward to the bot so this is a bot so for this board particularly what we have done is uh we have integrated questions already here right if you don't want to do this, you can just simply put in a search bar and then the customers or the end users can simply type their own questions in natural language in the chatbot and sorry, and the search bot would be able to take that as well. So here, since it's a health care or biomedical literature based at what we have already asked these questions added these questions, which can be asked. So if I press on any of the question, which is what are the classes of anti-arythmating drug? So as you can see, the answer came in and at the bottom, the best part, basically, when it comes to any of the search idea. So whenever you want to create any search based chatbots, you want to make sure that whatever information or the answers that your chatbots are giving, you add the relevant source to it as well. Then the end user, whom server is asking the question on what this information, they have a source to go back to to validate the answer. You can even do that. With Vertex AI searchbots, you can just simply get those source links towards the end as well so as you can see uh for this particular answer it added sources all i have to do is just simply click on the sources and it will expand these sources for me so as you can see here view paper so here the bottom it will give me the source right so you can very easily get started with these simple applications and even more complex one depending on your use case, what you're trying to build. And just to show you a corpus, which is the papers that we had uploaded on this search board are these ones. So these 10 papers were uploaded to create this very quick chatbot. And all of them can be accessed through here. And some of them even have a public link. So as you can see, search bots can access both information, text-based or a document-based source or data store that you add or even if you simply want to add a website link, search will be able to answer questions based on that as well. So as you can see, this is the information which was available in the emerging paper, which is basically fetched from this particular website that I just opened. So that's it pretty much from my end on Vertix AI, Gemini. I really hope that you were able to get some understanding on both of these solution areas. I see we have a few questions. Give me one second. I can pick the question and show on the screen so that everyone knows what you are answering. Yeah, thank you. So I will start with. Okay, so is asking what are some of the common use cases for Gemini in real world applications? Okay, so I think you said Misty, right? Yeah, Misty, it tends to be very honest, depending on which model you want to use. For example, within Gemini, we have created different versions of Gemini right for example we start with Gemini 1.4 then we come back with Gemini 1.5 which is more complex more bigger model you can answer more complex and solve more complex problems right and then we have Gemini flash so for example if you're creating a use case which needs fast responses. Let's say you build basically scenarios of online or live chatbots, right? They need faster responses from the model. Then Gemini Flash would help. For an example, let's say you are... Healthcare could be one example which I was talking about earlier. Second could be, let's say you are working in a build back field and you want to basically build an application where the customer or the customer or the lawyer basically can go back to and then ask this legal information from one, get all of this legal information about cases, about your organization from one place, right? You can just simply use gemini flash the back end and then fetch information drops right uh this is another use case we have seen customers using gemini for uh more complex problems which is uh you know uh integrating with their own applications and gemini basically helping them process more complex situations so in an actual the answer may vary depending on what use case you are trying to implement. And if you want to explore more where you can ask Gemini, where you can start using Gemini from, just simply search Gemini use cases in GCP, and you'll be able to get more. It's a Google Cloud page. So it has real-world Genial problems where Gemini can help. I can probably share this link. I can paste the link as well, like in the private chat. Yes, you can paste in the private chat and then paste in the conversation. Perfect. So this is the link where you can explore different real world Gemini use cases and maybe pick something of your own and start building on top of it. I hope this answers the question. Thank you. Next is Siddharth is asking, how do Gemini and Vertex AI manage scalability for large data workloads? Good question. Really great question. So when it comes to large data workloads, I'll answer this from a Vertex AI perspective because it's a platform. Generally, whatever platform you are using on the back end, whether you are running the models on Vertex AI as a platform, or you're running these models on TKE or computer engine. Depending on what you have chosen, Vertex AI is a platform, it's a pass offering, which means it automatically is auto-scaled based on the need and the traffic that is coming into the models. The scaling piece, the performance piece is automatically taken care of by Vertex AI. You don't have to manually configure anything. And we already worked with a lot of large enterprises who have built on top of Vertex AI. So if you're using Vertex AI, then then vertex ai as a platform will take care of automatically scaling these workloads depending on the traffic and the need but if you're running these models on let's say a computer engine or a gte depending on the world if you're running these models on a gp uh compute engine based uh bm which has gpus on it then of course the scaling piece would be taken care by you, right? Depending on what your scaling requirements, whether it's a production environment, testing environment, what is the user base you're trying to cover, right? So both the options available, but I always recommend my customers. If you're specifically building to enterprises, right? And you expect a lot of used traffic, then I always recommend our customers to go with Vertex AI as a platform because you just simply consider the required and then Vertex AI will take care of everything from a platform to manage these requirements. Moving on to the next question. Amar is asking what customization options are available within Vertex AI agent building? When you say customization, as you can see, it's a very out of the box solution, which is you can just get started with them very quickly. When it comes to customization, use cases can be across search and infobots depending on where you want to integrate these. Customization basically will vary in terms of what is the use case you're trying to implement, whether you want to build an infobot based solution, a chatbot which might help you. It can also even look at the transaction and help you get some transaction information as well. So from a use case perspective, the possibilities are simply from a use case where you want to integrate these chatbots in. So I hope that answers your question. But primarily from a search and info bot perspective, you can just basically pick out any use case wherever you want to build a legal healthcare based use case and you will be able to get started with that very easily. So next question is from Ganesh and he's asking, can Gemini handle multilingual data in books effectively? Great question Ganesh, the answer is yes. As of today, we are adding more and more languages to be very honest. But as of today, I think we have somewhere around 40 plus languages already supported within Gemini, which includes some of our native Indian languages as well as well as global languages as we move forward right if you give me a minute I'll just probably share a link of the language support as well for everybody's reference give me one second please Here. I've placed the link. I think you can move forward with an answer. The links are available in the comment section. Yes. So moving on to the next question. Yes. What are the best practices for deploying general in a distributed cloud environment? See, depending on, of course, your use case, but if you're using a distributed cloud environment, two ways generally to use Google's own proprietary foundation models. Either you use these models on the ModelGuard platform, which is through Vertix AI, or if you're looking to access these into your own environment, could be across any of the Cloud platform, then you can just simply start using them through Gemini API. So for any environment which is out of Google Cloud, any other Cloud platform, or something else, you can just simply use a Gemini API, and you'll be able to use them. I think there was a Gemini API best practices as well. If I find that document, I just probably put the link there as well. So just one quick sort of plug-in. If you need documentation, I think we've added a lot of documentation on Google Cloud, different offerings. So any question you might have, just simply start searching for these and you'll pretty much end up finding a good detailed public documentation on any of your queries. And then very easy to get started on them. Moving to the next, can Google map data be used as input to the model and ask questions about the surrounding area, live traffic details? Google map is, as an API, is integrable across these solutions, so that's also possible. I've seen a lot of customers who have built these AI-based application with a Google map data. For example, a very simple use case could be an application in our cars, if you think about it. So our car providers who are manufacturing these cars, all of these applications, they definitely have some telemetric data on the maps, location, et cetera. That's also possible. Next is, if I can use Go language, base, JKI, SDK to connect to Vertex AI, or do we have to learn Python to integrate with Vertex AI? Based on what I remember, I can of course double check, but apart from Python, we have Go, Rest, a lot of ways of getting started with Vertex AI platform. Give me just one minute. I'll just probably share and double set this thing as well. But based on what I remember, yes, you can use Go as well with Vertex AI. You don't need to only know Python. I'll probably double check this one but based on what I remembered who is supported but I'm not able to find the correct link but I probably work with whether and see if I can share the link after the link. Yes definitely all the links that you have shared until now and whatever links that are there we'll share with the participants after the session in our document. Sure moving on to the next version. This is from the file. What do you that you can just simply start building on. agent builder I will share here so for any building any search or rag experiences you can just simply use a suite of API's which are available on vertex bi to build your own augmented generation retrieval on applications so and that's also not quite a lot of coding as such. You just very simply use our API to get started with building bootlegs. I've just shared the link for that as well. I hope this helps. Next is, is there any real-time processing AI tool which could be installed and happier with Kooten. I mean, I might need a little bit more context on this. But when you see real-time processing, pretty much a lot of these models support real-time. Depending on what type of data you're processing, the solution might change that you're working with a vision model, text-based model, or if you want to work with a model to understand all of text, audio, video, et cetera, then Gemini flash could be the one which you can probably start using, right? Because Gemini flash, it was shorter and not shorter. It's a smaller version of Gemini, which is comparatively faster and created specifically for real time use cases. So maybe start exploring my flash, and if you have any specific questions, then probably we can take that back offline if possible. So next is, how does Vertex AI leverage Gemini's multi-model capabilities and mixture of experts architecture to improve model performance and scalability in real world applications? Good question. Scalability, as I mentioned earlier, since scalability is primarily an infrastructure job, so Vertex AI takes care of that in the form of being a pass offering or a platform. When it comes to performance of a model, to be very honest, it varies. It depends a lot on what type of data are you working with a model on, how huge the data is, what kind of response you're expecting from the model. The definition of performance changes depending on the model, the data that you're working with, and what is the environment and what is the use case you're trying to implement with that right so so far to be very honest we have uh lived into and even if you just simply search gemini for gcp and you basically will land on the gemini page it talks about the performance of gemini and it's a publicly posted page uh compared to our other private previous models as well right and how many tokens it can process in a single request. So all of this information can be found, but in a nutshell, I've seen Gemini performing really good so far with the customers who have implemented Gemini in their own use cases. I think Navin was talking about real direct processing, right? So he has added a follow-up question that is called signal processing if you want to add something otherwise we can move to the next question um i probably need some more another comment where he least trying to explain. Are you reffing the healthcare, the signals from the ECG devices, Naveen? Okay, let Naveen come back and I'll give another question in the meantime. Also, to the earlier question, Go is supporting with.HDI. I just found out, got the link as well. I didn't, my colleague is also on the call. She helped me with the link. So I just shared the link as well for the Go support. So Go language is supported with Vertex CI, so you don't have to learn Python to get started with that. OK. Naveena said, yes, I mean, this is what he made it with respect to the health care. Got it. So yes, you can, when it comes to real time signal processing, to be very honest, it would work. In the short answer is yes. The question is, depending on the image size, the model, the response time might vary, but the very short answer is yes. You can process signals data as well with Gemini, but you probably will have to use our Medgemini particularly, which is trained on medical data and not the generalized version of it because that will not be able to understand these medical terms, etc. Offline tools, I might need a little bit of context there, like what exactly your use case? Why do you need an offline access? But probably we can connect on Discord channel and see if I can answer your questions there. So next question is, yes, what are the key differences between Gemini 1.5 Flash and Gemini 1.5 Pro? Got it. Good question, shrijan i think uh answer is two major differences uh one is gemini 1.5 pro is much larger and much complex model which means if you're trying to solve a very complex use case which needs a lot of processing and the model to be able to understand and tone down the problem, then Gemini 1.5 Pro would make sense. And I always recommend and Gemini flash specifically built for use cases, which is which needs real time response, right? Because when we had launched Gemini 1.5 Pro earlier, a lot of customers were coming back and saying that, you know, since it's a very big model and very complex model, it will take some time to respond, process and then come back. Hence, we were seeing this coming from a lot of customers. And that's the reason we came back to Gemini Flash, which is specifically built for real-time new cases, right? For example, if you're building a web application to answer questions in real time or to process data in real time, then Gemini flash would be the right choice and gemini 1.5 pro is more is going more complex problems where you need where you don't need to work with a lot of real time and where the response time is not very important so that's the key difference in terms of choosing the right model depending on your use i think we're five minutes over shweta just Jig. I think you probably had to run as well. Is there a question that we have? I believe there is one question that we just got. If we take this as a last question, and if some of the questions are still left unanswered, we'll share with Arithi over our doc over email after the session, and we'll get back to the participants later. Sure. Can Mordex AI build chatbot can provide a professional response like give some charts, graph or insights that related to it? Specifically charts and graphs, to be very honest, I have not seen. But depending on your source, for example, even let's say you attaching a chart uh you'll probably attach that either maybe in a document or as a png file right so depending on how you have attached the data store gemini will be able to also the chat bot will be able to come back with the response right if it's part of a document then gemini always uh comes attach the source link at the bottom. Right. I haven't I do the best of my knowledge. I don't think a response will be on some charts and graph spaces. But I do remember that we were working on this. It's currently available in preview, which means will not be available to all the users, but should be out very soon. It's something that is being worked upon, to be very users, but should be out very soon. It's something which is being worked upon, to be very honest, and should be out soon for the Jamai version. At least in workspace, it's available. I'll double check on the Vertex CIP on the chat box. Great. Thank you so much, Aditi, for your time and for the detailed session. I'm sure the participants have now much better clarity on how to integrate who will give us any tools in their solutions and better solutions for the side. Thanks again. I'll just take up a few questions related to the hackathon. Right. So thank you for that. Thank you. Okay. So for everyone who has been asking about the slides for this session, the entire recording of the entire session, starting from the introduction and all the slides that are presented as well as the Q&A will be uploaded in the resources section. So you can just log into your dashboard, go to the resources section and check out the entire recording of the workshop. And I saw some more questions around the hackathon that is, was it online or offline? So the entire hackathon is online, all the workshops and everything as well as the grand finale will be online. Make sure that you keep a check on your email so that you do not miss out on any important information. As per the deadline, we have 17th of November is the final date to register and submit your ideas. Okay, so this is the feedback form for all the participants who have joined us today. You can just plan the QR code and let us know your feedback on the entire workshop. What did you like? What did you not like? And all the Q&A session and everything. And we will get back to you over the email and whatever queries are still left we will make sure that we pick those up and share with the with our speaker and we'll get back to you with the responses great so with that let's wrap up this workshop uh all the very best for your ideas of issues. | Workshop-1 (English) | AI for Impact APAC Hackathon 2024 by Google Cloud | 4,176 | Hack2skill | 20241109 | Welcome to the 1st workshop of AI for Impact APAC Hackathon 2024.
The AI for Impact Hackathon, presented by Google Cloud and powered by Hack2skill, is a unique opportunity to leverage the transformative power of AI to address pressing social challenges across the APAC region. By utilizing Google’s GenAI tools, including Gemma, Gemini, Vertex AI, and Gemini Code Assist, participants will have the chance to create innovative solutions that can drive meaningful change and improve lives. By harnessing the potential of Google’s cutting-edge technologies, develop solutions that address a wide range of societal issues, from healthcare and education to sustainability and accessibility.
To know more and participate, visit: https://vision.hack2skill.com/event/aiforimpact_reg?utm_source=H2Ssocial&utm_medium=H2Ssocial
Register and submit your AI idea before 17th November, 2024. | 2024-11-19T11:14:51.140960 |
https://www.youtube.com/watch?v=MfZR_ZrLSDw | My name is Nathan Smith. I'm a senior data scientist in the Neo4j Professional Services Division. I joined Neo4j in 2021, so I've been here for a little bit over three years. Before coming to Neo4j, I had done previous graph work as a data scientist at Love Every and PRA Health Sciences sciences and my home is in kansas city missouri i'd like to kind of motivate this session with a quotation from nick harkaway who said wrote garbage in garbage out or rather more felicitously the tree of nonsense is watered with error and from its branches swing the pumpkins of disaster On almost every data science project that I work with with Neo4j, one of the first steps is always doing some kind of entity resolution. And that's really important to make sure that whether the data is all coming from one system or whether it's coming from multiple systems, that we've resolved those entities to match items in the real world. And when I failed to do that upfront, then often the pumpkin of disaster has come back to smack me in the end. There are several sessions today about entity resolution. It's a great topic. In fact, there's one going on right now with Paco Nathan from Senzing. There's another I saw earlier today from Linkurious. And so I would encourage you to check those out on the videos. You know, sometimes you need a more of a rich application type of approach to any resolution. The approach that I'm going to take is more of a lightweight approach that you might take before you maybe advance and move on to something that's more application focused and more complex. In talking about this title of this session as an entity resolution playbook, I want to distinguish between a recipe versus a playbook. When I think of a recipe, I think of like the technical challenge on the British Baking Show where you've got an explicit list of step-by-step things to do. You've got the exact ingredients that you need and you want everything to come out very consistently exactly the same way every time. And contrast that with a playbook. You know, being from Kansas City, it's almost obligatory that I'm a fan of the Kansas City Chiefs and their head coach Andy Reid comes into the game not with a recipe of what he's going to do step by step all through the whole game, but he does have a playbook. He's got some things that he has worked out in advance that he knows have worked many times in other contexts that then he can apply as needed against the various opponent or based on injuries or the weather or whatever changes. He can adapt, but he's got a bag of tricks that he can pull from. And that's really what I want you to come away from. This session is not necessarily an explicit recipe, but a bag of tricks that you can pull from as you're working on your own projects. So at a high level, this graph entity resolution process that I'm gonna outline for you begins by designing a graph data model that shows shared identifiers. Once we have that data model, we're gonna populate it with standardized data, and then we're also gonna flag what I call placeholders, and we'll talk about what that means later. Then we want to identify candidate entity pairs that might be entities in the real world that are the same. Then we'll calculate a similarity score, and for items that are above a threshold on that similarity score, we're going to record the results of the entity resolution process in our graph. So to begin, we need to find a good graph data model. And sometimes when I come into a project, I see a data model that looks like the one on the left. And in some ways, this makes sense because I want to find out if these two people are the same. So I've got the two-person nodes. And then here, I've got a whole bunch of properties that tell me everything I know about that person. The challenge with this data model for entity resolution, though, is that from a graph perspective, there's really nothing that connects these two people. I have to look at the individual properties. Looking at the individual properties like that is almost like I might as well just do the entity resolution in Python with record linkage or some other technology that's not really a graph. However, if I come over here at this data model that's on the right, if I refactor the data so that the shared identifiers are actually shared nodes that both of these people nodes are connected to, they both have the same phone number, they both have the same social security number. Now, just as a visual, when I look at this from a graph standpoint, I can suddenly see where the connections are, and I don't have to compare property by property as I would with the model on the left. So identifying these shared identifiers and breaking them out as separate nodes is a good first step. Now, there's also a potential pitfall here, because I think it's important we identify or make a difference between what I call identifiers versus descriptors. Identifiers are things that are pretty unique that wouldn't be shared by more than a few people in the real world, like a phone number. Maybe two or three people share the same phone number, but not 100 people or a social security number or something like that. A descriptor, on the other hand, is something that might be shared by a whole lot of people. So if you look at this picture on the right, we've broken out gender as its own node. And then everybody who has the same value for gender is now suddenly pointing to the same node. It creates a super node which is not ideal for us. And the fact that two people are connected to the same gender node doesn't really tell me that much that they are actually the same person, they just happen to have the same gender. So this would be a poor example of splitting out every single property into its own node and linking them together if it's really a descriptor and not an identifier. Another piece of the data model that I often find useful is to treat the addresses in a special way. Often location data can be an important part of telling if two things are the same. And here I'm looking at an address node and what we can do is apply a geocoding service maybe from Esri or from Google Maps or all of the cloud providers have these geocoding services where you can put in an address and then it's gonna give you back a latitude and a longitude. And so I'll make a separate node for that location that has the latitude and longitude with the point type within Neo4j. And that way, if I have two different addresses where maybe there's just a slight variation, like this one was spelled ST or this one was spelled street all spelled out but otherwise it's basically the same address I can tell that they're at least pointing to the same location in the real world now once we have that data model in mind we want to start to fill it in with the actual data that we're trying to clean up. And as we do that, it's important that we standardize the data and identify placeholders. So as we're looking at our raw data, we want to apply whatever formatting is necessary to bring it into a standard format. So if there's maybe leading or trailing white spaces, we want to strip those off. For the phone numbers, if there's like parentheses or trailing white spaces, we want to strip those off. For the phone numbers, if there's like parentheses around the area code or the country codes or different things like that, we just want to make sure that every record is consistent. That way, if I see the same string value, I know that that actually is the same thing from the source data. Same thing for emails, make sure they're all lowercase. Social security numbers, do they have the dashes to separate the segments or do they not? All things like this that can standardize the property before we load it into the graph. whether or not it's a valid value or not. For example, if you're dealing with phone numbers, in the United States, there are no US area codes that begin with the range 100 to 199. So if you see that value come in, you know, oh, this one's actually invalid. There's a typo going on here, or maybe there's some kind of fraud. Same thing for social security numbers. There are certain ranges that are invalid ranges. I've recently been working on a project with npi numbers that identify pharmacies or medical professionals and the last number of the npi number is actually a check digit that has to have a certain value based on the other numbers that are part of that string and so i can tell maybe there was a typo along the way somewhere or maybe this npi number was just made up by somebody. So I usually go ahead and load this invalid data into the graph because it could be a signal for fraud detection or something. But I put a special label on it. And so I can exclude it from the candidate matching because just because two people have the same invalid identifier in common, that doesn't necessarily mean that they're probably the same person in the real world, it just means they both made a typo, is what I would normally assume from that. Now, they're also what I call placeholders, and these are things that happen when somebody has required field on a form, but they don't know what to put in there, and so they just put my phone number is 999-9999-9999 or something like that, or none at known.com is my email address. Again, this is not something that is gonna help me identify a person in the real world because there is no such phone number as that in the real world. And so we wanna try to identify these placeholder values and hold them out from our analysis. And one way that we can do that is using the graph data science degree algorithm to figure out that some of these identifiers have a very high degree. Because remember, an identifier is something that you wouldn't expect to be shared by more than a handful of people or objects in the real world. So I might make a graph projection that has the phone numbers and then the people associated with those phone numbers. And if there's a phone number that has a very high degree, then it's probably a placeholder. And I can run that degree algorithm in stats mode to just get a sense of the overall distribution, like where's the 99th percentile for the most common phone numbers. And then I can also run it in stream mode to get back the actual values. And then again, I can label these values that are above whatever threshold I want to say, hold these out and don't include them in the next steps of the process. So now that we have our data model loaded into the graph, the next step is to identify entity pairs that might be duplicates. Now, when you think about this process of comparing nodes with each other, if I compare every single node with every single other node in the graph, that's a quadratic operation there that's going to take a lot of resources, a lot of memory, and a lot of time to compare every single thing with every single other thing. And so if I can narrow down the universe of things that need to be compared, my entity resolution operation can happen much faster. So this is the workflow that I follow usually. I'll load the graph data model, and then the first thing I'll do is an algorithm called Weeklyly connected components to start to narrow down the list of candidate matches and if that gives me a small enough number of candidates that I can afford to evaluate them all then I'll go on to my next steps in the process of calculating string similarity calculating descriptor penalties and getting the final score but if we click in active components gives me still using with too many possible pairs it hasn't narrowed things down far your penalties and getting the final score. But if weekly connected components still leaves me with too many possible pairs, it hasn't narrowed things down far enough, then I might move on to another algorithm called node similarity, which is going to allow me to narrow the field even further. And if that is still too much data to process, or if the number of candidates is too high to run node similarity, because node similarity is also a rather expensive operation, another approach is to use fast RP with KNN. And we're going to talk about each of these three options on the next few slides. The weekly connected components algorithm is one that just asks the question, is there any possible path that can get me from one node to another node in the graph? It doesn't care about the direction of the relationships. It just says, can I get from here to there? So as I look at this picture, these three blue person nodes, they all do have some kind of a path that links them together. So I would want to evaluate and say, okay, how likely is it that any of them are actually duplicates? Whereas this person node over here on the right, there is no path that connects me from this person to this person. That means they're going to be in different weekly connected components, and there's really no reason that I need to spend time checking other downstream tasks to see if they're the same person because they have no identifiers in common and so that allows me to segment the graph and break up my work in an efficient way now the next possible way of filtering things down even further is to use an algorithm called node similarity node similarity is asking how similar are two nodes sets of neighbors? So if I look at this picture, I've got two blue people nodes, and they both share a phone and they share a location. And then this one has also a social security number node that's not shared. So two out of the three neighbors are shared. And so this would have a Jaccard score of 2 thirds. And the Jaccard is one of the metrics that you can use when you're running node similarity, and it's often a good baseline. You have an option of two other metrics. One of them is called overlap. And with the overlap score, instead of taking the size of the intersection of their neighborhood divided by the size of the union of the neighborhood, the numerator of the fraction for overlap is the same. It's the size of the union of the neighborhood, the numerator of the fraction for overlap is the same. It's the size of the intersection of the neighborhood, but then for the denominator, we take the size of the smaller neighborhood. In this case, there are two nodes in common, and then the node with the smaller neighborhood has two nodes in its neighborhood, and so we have 2 over 2. The overlap score would be 1.0 versus two-thirds for the jacquard score overlap can be useful when you have missing data but it's not um it's not necessarily different data so in this case you know this person probably has a social security number i just don't know what it is so i don't want to penalize their um their relationship as far as possible being the same thing just because i don't happen to penalize their relationship as far as possible being the same thing just because I don't happen to have a number associated with this account. The final metric that you can consider is the cosine similarity. Cosine similarity is good when you have weighted relationships between these nodes. And that brings up the point that all these relationships can potentially be weighted if some of the relationships are more significant than others. And I also want to mention that if you're wanting to use weighted relationships with node similarity, I would encourage you to make sure that your GDS version is upgraded to version 2.10.1. There was an important bug fix in that release that solves some issues with weighted Jacquard. And so 2.10.1 or later is recommended if you're using node similarity in a weighted context. Now, the node similarity is a great algorithm, but it's also a little bit expensive. And for a really big graph, it can be slow to run. And so another alternative is to use an algorithm called FastRP, which creates embeddings for our nodes in the graph. So basically, if you start with a graph that looks like this, you apply FASTRP, and you turn each of those nodes into a point in a vector space. And then I can look for each node vector, what are the other node vectors that are most nearby? That's what the k nearest neighbors stand for. And so if I was asking for the two nearest neighbors neighbors for this node it would point me to this one and this one as blue nodes that are neighbors of this one in space and those would then become the candidates that i would check to see if they might be duplicates the fast rp is very very fast to run that's that's how it got its name it just does some magical matrix multiplication to run. And so that's the advantage of this approach. The disadvantage is it's an approximate approach. It's not exact like you would get with the node similarity approach. So there's a little bit of a trade off with performance and approximation. If you go this route, a couple of pointers that I would offer you, one of the most important parameters that you can tune with FastRP is the iteration weights. That tells you as you're running FastRP, how far out across the graph do I need to listen for signals to create the embedding for each node. In our case, if it's like a bipartite graph like we've been looking at, I really don't need to go out more than two or three hops. I don't need to have an iteration weight that has five or six or seven positions in that iteration weight vector because that's listening way too far out when I'm really just looking for who are my nodes that are just a couple of hops out. And so set a low iteration weight of probably two or three at the most when you're using fast RP for this purpose. Then when you're running k-nearest neighbors, what it's doing is it's starting with a node and then it is getting a suggested set of other neighbors that might be the closest k-nodes. Then it just tries to swap out other nodes into that neighbor set to see if it can improve the distances associated from the node that you started from. Well, that initial potential neighbor set that it starts from can be generated in a couple of ways. One is with just a uniform random sample where it could pick any node from anywhere in the vector space, anywhere on the graph. The other alternative is to use a random walk. And this is usually the better approach if you're trying to just find the nearest neighbors because it's just going to search out starting from your source node and find neighbors within a random walk around it. And those are the ones that we're most interested in checking to see if they are duplicates anyway. So the random walk initializer is a good step if you're using fast RP with k-nearest neighbors. All right, so now that we've identified our candidates, we're going to move on to calculating the similarity score. And there are probably some properties like a product name or a person name that are just text properties that we would like to compare. And Neo4j within our APOC text library offers a whole bunch. You see a long list here on this slide of different ways of comparing texts. I'll just point out a couple that I think are the most useful for our purposes. The Levenstein distance is a very common text comparison algorithm. It just is really about edit distance. How many transpositions or substitutions or deletions are required to turn one string into another string? So that's a good baseline one to start from. Levenstein similarity is basically the same algorithm, but it's just been inverted so that a high score coming from Levenstein similarity means that the two strings are almost the same. Whereas a high score with Levenstein similarity means that the two strings are almost the same, whereas a high score with Levenstein distance means that they are very different because you had to do a lot of edits to turn one into the other. So just keep in mind that very similar strings will have a low Levenstein distance and a high Levenstein similarity. Another algorithm to be aware of is the Jera-Winkler distance. And this algorithm prioritizes matching the beginning of the strings. And it's usually a good one for personal names, because if I was typing in my name, surely I'm going to get my first initial right. And maybe down later on in the string, I might slip up and make a typo. But this can be a good one for names. This is also the point in our entity resolution process where we're going to look at conflicting descriptors and create some kind of a penalty associated with that. If I look at this picture, here's two person nodes. They both share the same phone. They both share the same address. Oh, so that looks promising. That might be the same. If I look at their names, the strings are only one character apart. So that seems also very promising. Robert Johnson, Roberta Johnson, you might be thinking they're the same, but then when you look at the gender, this one is male, this one is female, and you realize, oh, that actually makes it, now that I have that extra piece of information, it actually makes it a lot less likely to be the same person, that they're duplicates. And so that's an important piece of information we want to include in our final formula. So to find this final formula, basically we're just going to combine all the ingredients that we've been working with so far. I'm going to take the node similarity score that's maybe going to come from k nearest neighbors or coming from node similarity, and that's the way that I'm going to account for the common neighbors. I've also got maybe a string similarity score that's based on how similar the text properties are. And then I've also got a penalty for unmatched descriptors, and I will apply some kind of a weight, and sometimes I'll just kind of look at the data and kind of play with those different weights to see what seems to work best. Or if we have a deduplicated data set that you're already confident in, I could train a machine learning model to find the right way to combine these ingredients, maybe with a logistic regression, to get the right weights for our inputs. So here's an example of a Cypher query, how we could do this. I'm going to start by streaming out, I created in mutate mode when I ran node similarity, a new relationship in my projection called has similar identifiers, and it's going to have a similarity score that I'm going to get back from mutate mode. And then when it comes back from the graph projection, the source node ID is just a number. The target node ID is just a number. So I'm going to call gds.util.asNode to turn those into actual nodes. And now that I've got those nodes, I can retrieve their text properties, their name properties here, and then I can call the Levenstein similarity function to get the text similarity. I can also check to see if their gender properties are different, and if they are, I can then include a penalty for having different genders. And finally, I'm just going to combine it. So I'm going to wait the identifier similarity is 65 of the score the text similarity is 35 of the score and then out on the descriptor penalties and that's going to give me my final similarity score and then anything that's above a threshold i'm going to say these are duplicate nodes anything below the threshold i'm going to say is probably not duplicates so now that i've figured out what is probably a duplicate i'm going to say is probably not duplicates. So now that I've figured out what is probably a duplicate, I'm going to store the results in my graph. One way to do this is to actually create a new node, like in this case a person group node, and all of the nodes that have a similarity score above the threshold get related to that same person group node. And the nice thing about this is I can still see all of the inputs that went into that person group node. So if I come back later and wonder why these things are related, I can see, you know, this is what they had in common or this is what the similarity score was. This is often a good way to handle the results of your similarity calculations. Another option is we have this APOC.refactor.mergeNodes procedure that you can call. Now, this is going to actually take those separate nodes and collapse them down into one. And this can be nice because your end users sometimes don't want to have to go look out to that group node. They just want to see the results collapsed like that. The disadvantage to collapsing them is that you kind of destroy the input ingredients. So a little bit harder to retrace and see why you collapsed these nodes that were separate in the source system to be together. So that's the major downside. But if you choose to use this, you have some options. If the source nodes have slightly different properties, you can choose to take the property value from the first node or take the value from the last node in the list that you're collapsing. Or you can actually preserve all the unique values as a list-valued property when you combine them. So that's something that you can configure. You can also configure if these two nodes that you're combining are both related to the same neighbor node, then you can choose to collapse that relationship. That way, after you've done the merging, you don't end up with two parallel relationships pointing to the same neighbor, but you just have one relationship. So those are some things to configure with that. Another option is not to change your graph at all, but to send your results back to the source system. And then you can merge them there or deduplicate them perhaps with a human in the loop to help you figure out, you know, was this really a data entry error of some kind that you can clean up? And then when you re-ingest that data to Neo4j, the duplicates that you have identified will be cleaned up. So that gives us our whole graph entity resolution process from beginning to end. We designed the graph data model that showed the shared identifiers. Then we made sure that we standardized the data before we loaded into the graph. And then we could also use GDS to flag the placeholders where we had the same value showing up more often than we would expect it to with different profiles. Then we identified the candidate entity pairs that we wanted to compare and we had three different ways to do that depending really on the volume of the data. And then we calculated the similarity score, things that were above a certain threshold with that similarity score we assume are the, and then we can record those results of the entity resolution process in the graph. So thank you very much for your attention. Are there any questions? Let me jump over to the Q&A. How is fast RP different from vectorizing the property of each node? Oh, let's see, there's more to this. Like the new Neo4j vector index. So that's a good question, Nick. Fast RP is really about vectorizing the topology of the graph. You can potentially include node properties when you run Fast RP, but for the situation that we're doing here, I would recommend that you simply, you've already moved those properties for the shared identifiers off to be their own nodes. And then it's really the topology or the relationships within the graph that we're trying to encode with the fast RP. And so that's what you're going to get out versus the vector index is more if I have a new query vector that I want to check on, I can query that. And it's kind of like the k-nearest neighbors, but it's done in real time for one specific vector that you're trying to match. So hopefully that answers that question adequately. And Kenneth asks if I have a blog post for this methodology and i don't but it would be a good idea to write one um so thank you for that suggestion and this video will also be um available on our youtube channel so you can come back and refer to that as well and mark asks um the recommended strategies are to be done after we have data in the graph. What are some data collection, NLP, and resolution processes you can do beforehand? And that's very true that, you know, certainly you don't have to do everything after it's in the graph. And I think some of the standardization topics that we talked about, about just trimming the white spaces around or making sure that everything's in the same format, that's a really good way before you even load the data into the graph to begin to handle this entity resolution. But as much of the processes you've done in advance in other ways with other tools, and then you bring that into the graph, you can still reflect the results of that in the same way that we were showing with the graph-based approach, where maybe you make a group node that you relate everything to, or you possibly even merge the nodes after you bring them in. That can still be relevant relevant even if you've done some of the other work upstream so let's see i think we're just about out of time but do we have any more um questions here um i think we might have addressed them all great well enjoy the rest of nodes and if you have any questions beyond what I answered today, feel free to reach out and we'd be happy to continue the conversation. | NODES 2024 - A Graph Entity Resolution Playbook | 1,759 | Neo4j | 20241119 | Entity resolution, the process of determining which digital descriptions correspond to the same real-world entities, is an important graph use case. It is also a crucial precursor to many graph data science projects. In this session, you will learn steps that the Neo4j professional services team has used in many entity resolution projects. The steps include designing a graph data model that highlights shared identifiers, standardizing the format of node properties, identifying outlier nodes that should be excluded from the matching process, using graph data science algorithms to identify duplicate entities, using string similarity to identify misspellings, and capturing the results of entity resolution in your graph.
Get certified with GraphAcademy: https://dev.neo4j.com/learngraph
Neo4j AuraDB https://dev.neo4j.com/auradb
Knowledge Graph Builder https://dev.neo4j.com/KGBuilder
Neo4j GenAI https://dev.neo4j.com/graphrag | 2024-11-19T11:15:43.992622 |
https://www.youtube.com/watch?v=NtfeEhpvcuk | Hello everyone and thank you for joining. I'm Federica Ventruto, a data scientist at GraphAware. GraphAware is a globally recognized graph technology company and the creator of Hume, our connected data analytics platform. And we'll present today a session about a graph-based entity resolution approach in order to achieve data harmony. So we will start with a bit of context and applications, why and when you should apply entity resolution. Then we'll discuss the process from a technical point of view. And in particular, we will focus our attention on the incremental or real time process. Next, we'll discuss a few data modeling options that we have in order to represent the resolution results. And after that, we will see a demo. We conclude with a few takeaways and question and answers. So entity resolution is a crucial process in data management that involves establishing connections between the scattered and incomplete records to resolve the same real world entity or the same real world profile. So let's suppose that we are ingesting data about people coming from different data sources. Now, according to the source of our data, we might have that the same person is referred to by using different identifiers, different ID that are, for instance, the license ID, the insurance ID, and so on. There are many options. Also, the same attribute could be written in different ways. So here we have the name, including the middle name. We might have the name written as last name, comma, first name, or two different attributes for first name and last name. So having the process, having a process that allows us to resolve and disambiguate these entities, these records, is fundamental because it allows us to reach a unified view of your entities, of our entities, in order to reach a cohesive understanding. This allows us to go beyond the classic silo situation in which different data sources do not interact to each other, and instead reaching a system in which we can create a real knowledge and the different sources interact so we can merge the information that we have from them. For this reason, entity resolution helps in reducing the complexity of the data and perform more accurate analysis and predictions. In our approach, entity resolution is a two phases process. The first phase that we call candidate selection and ranking essentially involves some custom defined rules in order to compute, to identify potential matches in the graph. So at the end of this process, we will have our graph with some additional similarity relationship, each of them with the related similarity score. At the end of this step, there is the resolution phase in which we employ some community detection algorithms to identify communities of nodes that are closely connected. And each community is likely to represent the same real world entity. So these communities are the end, the final results. So let's see these two phases in detail. The first step, the first phase is the candidate selection and the ranking. And it is a single process in our approach in the sense that we perform it by using a single query but i'd like for a moment to keep them separated so that we can start explaining the the selection and then see the rank phase so candidate selection aims to identify potential matches among records based on custom rules when it comes to the rules, we can distinguish them according to the type of information that they leverage in the graph. And the first type of rules is, of course, the ones that depend on the node attributes. Here in this picture, there is an example that is the same social security number. This is a rule of type node attribute because it is using the social security number that is indeed an attribute of type node attribute because it is using the social security number that is indeed an attribute of our person nodes that we are supposing to resolve in this case. Now these rules are very easy to manage but usually they are not enough because the problem I mean the problem that we have with the resolution is that we usually do not have unique identifiers that we can use to distinguish among different entities. So for this reason, we need to leverage the context of the graph and the indirect relationship linked to the nodes. And this is why there are other type of rules that we can define, among them there are graph pattern rules we say that the two entities are similar according to gap pattern rules when these two entities are linked to some common nodes through relationship or paths of the same type in this case for instance we can say that the two person are are satisfied a rule of type same address if they are linked to the same address or the same location through the same type of relationship. In this case, as others, there are also composite rules that combine both for the attributes and the patterns. So similar name and same address is a rule of this type as the name is essentially a property of the person nodes and for the same address we are checking a pattern in the graph. Now the rank process is to score potential matches in the graph and in our proposal we have two levels of scoring. The first one is represented by the similarity score that is computed by using a similarity measure. It returns, usually, results that goes from zero to one. And here we have a few examples. So in the case of the same social security number, these two entities have a similarity score of one as we are checking the exact matching of the social securities. So the score is one. While for the similar name and same address, we are considering a similarity measure, the similarity of the names of this person. So in this case, this similarity score is obtained by computing the similarity of the two records names. With this approach, it is also possible to define optionally a threshold so that we define a similar only pair of nodes that have a similarity that is higher than the fixed threshold so that we have the possibility to filter out some pairs. The second level of scoring is represented by the importance of the rule. This is a property of the rule itself and allows us to score the rules according to their importance. What we will consider in the end is the final value that we got by multiplying the similarity score and the importance score of the rule so this value that we you can see here is the final value that we will consider later on during the resolution phase and it's what we will call similarity similarity value or similarity score now i'd like to show you a few examples of the queries that we can use during the candidate selection and the ranking phase. And as you can see, this step can be performed by just using the cipher queries. And this is the query that we are using and that we can use for same social security number rule. This process, the candidate selection, is a process that we execute in a streaming mode in a sense that for each input node, we identify all the entities that match the rule that we are considering. In this case, all the person nodes that have essentially the same social security number. Then we compute the score, as I said before, by multiplying the rule score, in this case, the same social security number as a rule score of 0.8 and the similarity score. And then we return for each input node that is represented by this node ID. So in this case, start node identity, we will have the list the collection essentially because we are collecting them the the list of entities that matched for each of them we have the ID and the similarity score so that later on we can create the similarity relationships here there is another example in this case we are considering a similar name and same address rule. Again, we executed the process in a streaming mode, but in this situation, we have two different pieces of the query that are dealing with the selection part and the ranking part. So for the selection, we are using a text index on the name of the person and these are in order to streamline the process. Then we check the condition that is, we filter essentially according to the same address condition. Then there is the ranking phase. And in this case, we are using the Sorensidist similarity string similarity function computed by considering the names of the two person. The reason why we are using this function is because it returns normalized values between zero and one. So instead of normalizing the results of the full text, we just use this other function. That is another option that we have. But this is important because it allows us to show the way in which we can customize our rules by using different ranking functions, similarity functions, or splitting the candidate selection and then the candidate ranking as we prefer. In this case, we are also defining the similarity threshold so that we will consider only entities that matched that have a similarity score that is the Soren said is a string similarity value higher than the fixed threshold. And then also in this case, we compute the final score by multiplying the importance of the rule, in this case 0.5. It's just a value that we assigned according to the quality of the dataset that we had, and the similarity that is obtained by using the Sorensen Lisa. Now at the end of this process, we will have our similarity added into the graph so that among possible matches, we create this similarity relationship with the similarity value. Then we perform the resolution phase that employs community detection algorithms to identify communities of nodes likely to represent the same entity in our approach we are using we can we can use two different algorithms from the neo4j graph data science library the first one is weekly connected component or wcc that offers a broad resolution value resolution level in the sense that this algorithm works by identifying the connected components in the graph that are groups of connected nodes. So every time two entities are linked by a relationship, they will be in the same connected component. This is an extremely valuable algorithm in the sense that if our data is lacking or dirty, it avoids us to miss some important information. But in other cases, we can get very big components. But on the other hand, we can use the other algorithm that is Louvain that offers a finer level of resolution. Louvain is a hierarchical algorithm that can work in a weighted mode in the sense that it can take into account the similarity values so that these weights that we are standard to the relationship. And it is able to distinguish between communities of nots that are strongly connected among them and the ones that are not. So in this example that we are considering with these four Michael Alcon, while WCC returns a single community, because essentially these nodes are linked among them, Louvain identifies two communities. And these communities are composed by nodes that share the same social security number. This is not the case, it's because the same social security number relationship has a niger value. So this shows us the importance of defining the proper rules and assigning to them the right thresholds and importance. Now here we have the queries that we are using during this resolution phase. So this is the first one that we are using for creating the in-memory projected graph. So we consider the personal nodes, the ones that we are solving, and the similarity relationship among them. We are also considering the scores of the relationship and in the case of multiple relationship, we are summing up the scores. Then we are on the WCC without considering these weights, this score and the Lubain in a weighted mode because as you can see, we are taking into account the total score that we computed before. Also in this case, we are enabling the writing of intermediate communities because Luven is a hierarchical algorithm. So it offers the possibility to return all the intermediate communities identified by the algorithm before getting the final, the final, the optimal configuration of the communities. And so this is another a further level of customization that we can select the further level of uh yeah resolution that we can assign or we can define according to our data and the data sets that we are working on. Now, what we've described so far is what we call batch resolution. That is a process that is useful in the case of large scale and initial resolution. But the problem is that this process can take a considerable amount of time while we need a resolution that is able to to keep our resolution results up to date and it's quick enough because usually data changes over time and in small increments so we need for this reason a process that is able to address all the changes up in the data almost in real time. And this is the process that we call incremental resolution that essentially works by identifying only the portion of the graphs that are affected by some changes according to, I mean, compared to the previous status of the graph while the previous resolution was run. And it updates the resolution only on this portion of the graph, only on this subgraph, while remaining the... while leaving the remaining nodes untouched. So this process starts by identifying the items that are affected by some changes, and this is possible thanks to the Neo4j change data capture feature. In our example, we can suppose that this red node changed and that it's the social security number of the node that was updated. Now, after detecting this event, what we do is to run the candidate selection and the ranking only on the connected component of this red node, that is the affected node. Why are we considering the entire connected component? Well, if the social security number of this node changed, the previously established similarity relationship involving this node, because of the social security number are not valid anymore and we need to recompute them so with these with these updates the entire connected component might be affected so in order to keep our resolution correct and that we need to rerun the similarity process on the entire connected component. Now after doing this it can happen that the social security number of the red node is now the same of another node in another connected component. So while running the resolution we need to consider to apply the community algorithms on both the connected component of the affected nodes and the nodes that are involved in the newly established similarity relationship. This process is effective in the sense that we can address and resolve in a proper way according by addressing any different type of changes that can happen to the database. For instance, the addition of items, not so relationship or the updating of the deletion of them and all these possible cases that can be addressed through this process. And it's efficient because we are only updating a sub graph, a subset of our graph and we can run in very fast. Now let's say that we run our resolution. Now it comes to the data modeling to the representation of the resolution results. There are a few options that are possible. The first one is storing the resolution results as attributes of the nodes. In this case, we use the of the resolution, but other options are possible. So the main thing is that we are using, we are storing these results as an attribute of our records. The other option is represented by the hyperlink approach. So in this case, we are creating an empty node that is linked to all the entities resolved through it. And the third approach is very similar to the second one. But in this case, these nodes will be a fused node in the sense that it contains some consolidated attributes. So which one, which option to prefer depends of course on the use cases and the requirements. For instance, if you are working in a street access control scenario, you should prefer, you can prefer the first option, the attribute option or the hyperlink because allows us, allows you to check the information, check the resolution of your node without disclosing information about other nodes that you cannot you you do not have privileges to see while if you need to um traverse essentially these uh these resolution results in your queries because for instance you want to merge data coming from different data sources and the jump from this Michael Alcon to other coming from other sources you should prefer the approaches that indeed that essentially created this node because the queries will be simpler now I'd like to show you a demo through YUM. So this is the visualization of YUM. And here we have five nodes that we will consider during this example. While about the rules, the similarity rules that we are defining, we are considering the two rules that we explained before. So same social security number and similar name and same address. And then there is another rule that considers the similarity of the names and the match of the date of birth. Now, by observing this note, we can notice that these two notes have essentially the same social security number. And this is the fourth line that we can see in the caption. So this number and again by these two nodes share the same that of these three nodes are for sure similar among them. Also, these two nodes have the same social security number again on this property. So they will be similar according to the first rule and these two others share the same addresses will be similar because of the address pattern. Now, let me run the batch resolution, supposing that we are running the first resolution process, the initial phase. This is a workflow that essentially implements our batch resolution, and we created by using orchestra that is our orchestrator that allows you to create workflows that can implement complex business logics in a local setting now let me run it while i explain how it works in the meantime so in the first branch we are just setting a few things such as the entities that we are resolving and creating some of this that we will be doing the process. Then in the second one, we fetch the entities that we want to resolve. In this case, all the person that we have in the database. And adding to them a temporary label that is to be seen in process label. The reason of this is because we wanted to track the status of the similarity computation because the similarity computation is essentially the process that takes the most of the time. So in case something goes wrong but also for checking the status we wanted to add this label. So here this bigger branch deals with the similarity computation. So we have three rules and essentially this branch starts with the three lines, one for each rule. For each rule we are fetching the entities that we want to process according to the label that we added and then we apply the rule. This is very similar to the one that we have seen before just a few automations. And this is the case of the same social security number. Now, at the end of the branch, we check. And every time an entity has been processed according to all the rules that are defined here, we remove the temporary label. So that in the following branch, in this one, we are just counting the number of nodes that still have the temporary label. So we know that the similarity process finished when the number of these nodes is zero and we essentially can start the resolution phases. Here we perform the resolution phase. We are projecting the graph. The query is very similar to the one that we have seen before. And then we run WCC and the Louvain. Now I just want to show you the number of one that we have seen before. And then we run WCC and Louvain. Now I just want to show you the number of nodes that we are considering in the projection. That is the entire size of essentially all the person that we have in our data set and we process according to the CIRR because it is this number that we have seen here, 400. Then we store the results. And in this case, we are using the attribute mode. So let me reload this announcement. And we can notice that now we have two new attributes that are the result of the WCC and the result of Louvain. Now, if I enable the grouping strategy according to WCC, we can notice that all these entities belong to the same community. While if I enable Louvain, we have two different communities. One is composed by people that are born in 1994. These two have also the same social security number, while in this other we have people that again have the same social security number. So these results seem promising. Now if we want to check the resolution, the similarity rules and the similarity relationship created, we need to change the visualization. Because something that I didn't mention before is that we usually suggest the usage of an helper database in which we can store the similarity relationship and also run the graph data science algorithms. This is because in this way, we can avoid polluting the main graph with all these details that are only relevant for the purpose of the resolution. Otherwise, if an analyst wants to see them, can just change the visualization. Okay, now let me copy these component ID. And if I change the visualization and I open one visualization linked to the helper database, we should now see our entities. open one visualization linked to the helper database, we should now see our entities. So because we run the batch workflow, so we have now the entities that are visible in this visualization. So if I expand the person nodes that have the component ID equals, sorry, the one that I copied before, we should see our entities. Okay, let me disable the grouping strategy and expand all the similarity relationship so we can understand the way in which they are connected. In particular, we can notice that the connection that goes from a community to the other, according to Louvain, is because of the same address. Now I just wanted to show you the incremental part. So let's suppose that an analyst noticed that this relationship is wrong. This person has not this address. I can drop it by using our graph editing tool. And let me submit the changes so that it will be persist to the database. Now, let me run the incremental workflow. This is the workflow that is meant to run continuously, but in this case, just for the demo, I'm starting it manually. So it is very similar to the batch workflow, but it starts by using a Neo4j change data capture components. You can see here and we are essentially monitoring the changes that involve any item used in our similarity rules. So this selector is defined according to our rules. So for instance, we have a sort of security number, the names, the address relationship. Now, when an event happens, what we are doing is to identify the connected component of the affected item, and we are processing all the entities belonging to the connected component. So here we are running the similarity only for five nodes. Then we also run the resolution when these, yeah, here in this bench, and I can show you again the sides of the projection that we are considering for which we are running the resolution. Only five nodes, the five affected nodes. The process finished, so I can go back to the main visualization and reload all these nodes. And we can notice that now, enabling again WCC, we have two different communities according to this algorithm, because essentially we broke the relationship that was connecting the two of them. Now, this concludes our demo. We have seen both batch resolution and the incremental one. In our case, I think that it took less than a minute, but in general, the incremental is a very fast process and also effective. I just want to conclude just mentioning that we have seen that gaps offer a natural way to address safety resolution. We can for instance mention the simplicity in defining the similarity relationship even in the case in which we are considering contextual information of our nodes in direct associations, all the links. It is very simple to write queries because of the graph structure. Also, our community detection algorithms are very fast because we are working, they are optimized to work on relationships, and this is the way in which we stored our similarities. We have discussed some of the useful features that Neo4j provides for managing the real scenarios. For instance, the change data capture that we use in the incremental or real-time process, but also the community detection algorithms that we use them and the variety of indexes that exist. And we have described the fully customizable and explainable entity resolution process that DUM offers in which the only responsibility that is left to the analyst is the definition of the rules according to the data. Otherwise, if you would prefer an out of the box solution, DUM also integrates with the sensing that is a living entity resolution library. But that's maybe for another talk. So I'd like to thank you all for attending this session. I hope that you found it interesting and useful. And let me check if there are any questions. | NODES 2024 - Beyond the Silos Achieving Data Harmony With Graph Based ER | 1,813 | Neo4j | 20241119 | Entity Resolution (ER) serves to interlink fragmented and dispersed data, facilitating the identification of records representing identical real-world entities. This function is pivotal for intelligence analysis, enriching investigations by ensuring comprehensive and uniform data merging.
This session will showcase a robust end-to-end approach for precise and effective data consolidation, adaptable to batch or incremental processing and predominantly reliant on graphs. Key themes will encompass customizable similarity rules, harnessing node attributes and relationship patterns, and strategic utilization of Neo4j indexes and GDS. Additionally, we’ll delve into diverse data modeling strategies, evaluating their advantages and drawbacks and how to accommodate dynamic data changes.
With Federica Ventruto
Get certified with GraphAcademy: https://dev.neo4j.com/learngraph
Neo4j AuraDB https://dev.neo4j.com/auradb
Knowledge Graph Builder https://dev.neo4j.com/KGBuilder
Neo4j GenAI https://dev.neo4j.com/graphrag | 2024-11-19T11:17:27.170361 |
https://www.youtube.com/watch?v=FisiD2AyIQQ | So I'm Alex and I'm going to be presenting Runway today. So just a little introduction. I've been with Neo4j as a consultant for about a year and a half. In a previous life i was a neurophysiologist working in healthcare i'm based in chicago and this is my puppy named pochita so what we're doing here with this package is we're using llms to automate the graph data modeling design and creation processes with the goal of reducing the time that it takes to go from source relational data to working with that data in a graph format. And we're also trying to enable new developers to Neo4j while also providing tools that can be useful to those that already have a lot of experience working with Neo4j and graphs. So the package is currently in version 0.14 so it's in beta we're still developing but it is usable. All you need is Python 3.10 or greater. You'll need an open AI API key for most of the modules in this package and if you'd like to visualize any of the data models, you'll need GraphViz installed as well. So there's five primary modules in this package. There's data discovery, graph data modeling, there's ingestion code generation, the actual data ingest module. And then thanks to Dan Bukowski, we recently just added the Graph Exploratory Data Analysis module as well. And we'll cover all of these in the example notebooks. So as we go through these examples and you think about how you could use this package, just keep in mind the goal here is to get roughly an 80% functional graph. And so what we mean by that is that the output that you get from Runway is probably not going to 80% functional graph. And so what we mean by that is that the output that you get from Runway is probably not going to be a production graph. And so you'll need to take that and massage it a bit, maybe add some properties, add relationships and nodes where maybe it didn't exist. But out of the gate, you will be able to explore your data as a graph, and you'll be able to get insights immediately. hopefully it'll get the gears turning get you thinking about how you could make this even better um runway is not fully autonomous so there's multiple times where a human can step in and kind of change the behavior of what's going on um and what's really important here is that we still need domain expertise um runway does not know your data as well as you do. And so upfront will provide a data dictionary that takes away a lot of the assumptions that would otherwise have to be made in the modeling process. So here we'll go to our first example. And we'll zoom in a little bit. So in this notebook, we'll go with an end-to-end solution using Runway, and we're going to have a couple CSVs with just some data on countries. And so up here, we have just our imports, and then in here, we're loading our data. And so when we load data in Runway, what I would prefer to do is use this load local files function. And so when we use this function, we pass a data directory, we can pass some use cases that we want Runway to keep in mind as it's creating a data model, and then we can pass a data dictionary as well. And so we'll take a look at what this data dictionary looks like. If you notice, we're loading this from a YAML file. And so this is how I would recommend storing your data dictionary to be used with Runway. And so we see we have this primary files list, right? And so we have a file name here, it's countries.csv. And then each file has a list of columns. And so a column consists of a name, a description, and an alias. This alias is optional. And so what this specifies is that when we have multiple files, sometimes a column can be known as something else in another file. And so we can tell Runway that this ID column in countries is also known as a country ID down here. If we go to the states.csv file, we see that it has a country ID. We can also see that we can specify to ignore a column as well. And so what this does is that it lets Runway know that this column exists, but it won't be used in the modeling process. And so here we're ignoring this country ID column because this is just a foreign key, and so we don't actually want this to be used in our data modeling. And so once we have all of these declared, we can just run this, and this will load all of our data into Runway's internal table collection class. And so this just handles all the data and make sure that the data dictionaries are associated with the proper tables and so on. So just so you have an idea of what we're working with here, we can see here is just the head of the countries table and then we have states and cities as well and so this is we're going to combine all of this into a single graph data model. So once we have our data loaded we can use the discovery module and so what we do here is we declare a LLM that we want to use so So Runway has two types of LLMs. It has a discovery LLM, which we're using here, and a data modeling LLM. These both serve different functions. So that's why we have two different ones. So we're going to use GPT-40-mini, and we're going to run this in async because we have multiple files. So we can just run all of this in parallel. And so all we have to do is just pass the LLM that we want to use and give it the data. And then when we run this run async method, this is going to run some analytics using pandas. Then it's going to format a prompt, pass it to, in this case, OpenAI and generate some insights about how we can use this data in a graph and here we're specifying to do this in three calls so in three calls we'll analyze all the CSVs and then we'll collect that all into an aggregated discovery string as well that contains all the important information and that's what we're seeing here where we have unique IDs that it's found, some significant properties, we have possible labels and possible relationships, and then we also get how it's going to address the use cases that we specified. And so all of this is going to allow a very rich context for when we actually create our data model. We can also optionally save this to a markdown or text file, and we can actually load this back in as well, if you want to bring this in later for another session. And so once we have our discovery, we can import our graph data modeler. Here, we're initializing a different LLM. In this case, we're using GPT-4O, and we can also specify some model parameters as well. In this case, we're using GPT-4O. And we can also specify some model parameters as well. In this case, we're just specifying a temperature. And this is possible with both of the LLM classes. And so here, all we do is we pass in the LLM that we want to use, and then we give it that entire discovery object that we were just using in the cells above. And by doing this, Runway is able to pull in all of the important information it needs to generate data modeling prompts and interact with OpenAI. So we can run this. So we wanna create our very first model. So we say create initial model. And there's some parameters you can specify here, such as enforcing uniqueness and such, but we're just gonna leave everything at default. And so we can see that we get some nodes first, and these are all valid. So behind the scenes, we're using Pydantic and Instructor, which are Python packages that allow for a lot of advanced validation of your LLM results and your Python objects. And so we see these are all valid and then we generate some relationships as well and we see that our entire data model is valid. And we can visualize the data model that we get back since we have graph viz installed. So we can access the current model property on our graph data modeler and then we can use the visualize method on that data model. And that's what we get below. And we can see that we have a city that's located in its state, and this belongs to a country, and this country is part of a sub-region. And this is fine as a first pass, but I would like region to be its own node. And although we can do the analysis on currency that we asked about in our use cases, I would like currency to be its own node as well, because that will allow us to easily visualize the shared currencies between different countries. And so if you want to make any changes, it's just as easy as specifying a string that says what you want to change. So you just say, hey, create some currency and region nodes. And then once we already have a data model, we can just run this iterate model method and pass our corrections. And so this will attempt to apply those corrections to the most recent data model that you have in your graph data modeler class. And so we see, we receive a valid model, and this looks right. We can also visualize this again. Notice how the current model is also incremented so we're pointing at the model that we just created. And this looks the same and then we also have currency coming out of country and we have region as its own node as well. And so I like this data model much better. I feel like we can get a lot better insights out of this. So once we're happy with our data model, we can generate some ingestion code. And so to do that, we use this PyIngest config generator. And so all we need to do is pass the data model that we'd like to generate code for. And then we pass some connection credentials so that we can actually connect to our Neo4j instance when we run the ingestion code. And so all we need to do is just say, hey, generate a configuration string. And you don't need to know any of the specifics here. All you need to know is that we pass this configuration string to this Pyingest function in Neo4j Runway. And all we do is we pass that to Pyingest, hit Run, and this will ingest our data. And so we can take a look here after we ingest, and this is what that looks like. And so we can see that we have regions here in green, we have our subregions, our countries, and we can see all these currencies in yellow. And so we can easily visualize immediately that we have some currencies such as USD that are shared between countries and actually span across different sub-regions and regions. So it's very easy to immediately get insights out of this graph with the relationships that we've established. So we can also use the graph EDA module as well after we've ingested or even on a graph that you've generated without runway. And so this is going to be ran on that same country's data, although I have modified it a bit just to kind of see some of the other aspects of this graph EDA module. And so all we need is we need this Neo4j Graph class here. And so we just pass it some connection information. And then we pass this Graph instance to the Graph EDA class. And once we have that, we can see the available methods that we can run with the available methods property. And just for an example, you know, we can call database constraints and we get all the constraint information of our database. If we want to run all of these methods, we can just use the dot run method and pass it some optional method parameters, you know, for the specific methods if we want to tweak this a little bit. And if we want to refresh all the data, we can say true for refresh. All of the methods have this argument. Otherwise, it just uses the results that are in the internal cache for the Graph EDA module. And so once we have some information in the cache, we can run create EDA report and we can specify what we'd like to include there with a couple different arguments and we can view that report and here we can see that we have information on node counts relationship counts indexes etc and then we also have information like unlabeled nodes or we have disconnected nodes with the node ID. So we can go back into our database and search for these nodes explicitly and see where the issue is. Like maybe there's something wrong with our ingest pipeline, etc. We can also see node degrees and we can see nodes that might be a problem if we're trying to tune our cipher because high degree nodes could slow down our cipher queries. And if you'd like to persist this report, you can just save the report, and this will output a markdown file with all the information contained inside. So to summarize, we're using LLMs here to streamline the end-to-end data discovery to ingestion process. We're enabling new developers, hopefully, with best practices using graph databases. And we're also accelerating time to value. So hopefully this allows you to easily get your data into a graph and start playing with it on a quick timeline. So are there any questions? | NODES 2024 - LLM-Assisted Data Exploration With the Neo4j Runway Python Library | 853 | Neo4j | 20241121 | In this session, Alex Gilmore will guide you through how to use the Neo4j Runway Python library to explore your relational data as a graph. Runway abstracts communicate with LLMs to assist in data discovery and graph data model generation. It also provides methods to import data models from other tools, such as arrows.app, and automatically generate ingestion code. It can also easily load your data into Neo4j.
Get certified with GraphAcademy: https://dev.neo4j.com/learngraph
Neo4j AuraDB https://dev.neo4j.com/auradb
Knowledge Graph Builder https://dev.neo4j.com/KGBuilder
Neo4j GenAI https://dev.neo4j.com/graphrag | 2024-11-21T19:21:52.879173 |
https://www.youtube.com/watch?v=r2jscke0P3U | Hello, everybody. Good morning, good evening, good afternoon, and thank you for joining this Neo4j live. Two weeks after Nodes, so I hope you all recovered from 24 hours of Neo4j presentation on the beginning of November. But now we're back with our sort of regular schedule with live streams on the channel. So happy to have you all back here today we had a funny conversation already in chat earlier while the video was running about graph aid which i need to check out somebody miko posted that in chat so i need to check that out so people having a good time here saying hello from from pune from paris from new york from Manchester, from Lima, from India, on LinkedIn. So great to see you all join today. And yeah, welcome, everybody. I'm happy to have Johan Salami join me today on hybrid AGI, graph-powered self-programmable AI. So that's going to be our topic for the session today. Hi, Johan. How is it going? How are you doing today? Hello. So hello, everybody. I'm doing great. So to present myself, I'm a PhD in AI and robotics. I've been working before in humanoid robotics and social robotics, the LLM crazy started, I started to implement a system based on my knowledge on making a robotic system. So I will share my screen just a bit. Cool. OK. Let me see, when I get your screen, then I can switch over here. Here it comes. Right. That works. That works? Yep. OK. right that works that works yep okay so um the company that we started is called uh synalinks and maybe you maybe you i'm sorry for interrupting maybe you wanna get rid of this little box that says you're sharing your screen maybe uh yeah or you can click on a little uh that yes exactly okay and then it goes away thank you cool so uh when uh all the lm crazy craziness started um we we started a company that is called synonyx and the main idea behind it was uh to um to promote a neural symbolic system. So here is our team. And when we started the company, our idea was really to push neural symbolic systems, to push neural symbolic systems, because in robotics, we use these kinds of systems a lot. So here is some of my previous work that I've done with human robots as i said previously and it was about creating a graph structure for explaining the scene so you have a different kind of objects you have maybe people in the scene and he wants to describe the scene as a graph and that graph goes into an ontology system. And based on that, the robot is able to interact with the people and ground information and entity in the real world. So actually, my work was about many different things, but mostly about combining the robot perceptions and simulation engine to make the robot perceptions and simulation engine to make the robot able to infer the position of the objects, even when they are occluded by the camera. And also about theory of mind. So theory of mind in robotics, we use a lot the graphs but we don't only infer a graph for the robot but we also infer a graph for the humans and the way we construct it is that we take we have the graph from the robot and we filter some part of the graph and how we do that we do that using what we call visual perspective taking so we take a picture from the point of view of the the user and from that we are able to compute the visibility of the objects and based on that we can filter the main knowledge graph and create other knowledge graph for the the people that the robot is interacting with um so when we started uh hybrid agi the the goal was really to um so to give you a bit of history the system is now one year one year old and we started at the very beginning of the agents, the LLM-based agents, and we had kind of a frustration about agents. They were going through infinite loops. Sometimes the, well, the TLDR is that they were not reliable. And so the idea behind Hybrid HGA is to use concepts from computer science but also from a cognitive science to create a system that is able to work reliably and so we do that by controlling the the agent system from end to end using a graph so there is no react agents in our system I mean you can implement a react algorithms I would say but that's not the point of the system the system is really made to be able to program more complex agents than the React one, that didn't give us really reliable results. So we do that by managing the knowledge using graph databases, and we picked the Cypher language. We picked the Cypher language because it's really easy to learn and easy to read for humans so our hypothesis was that if we have a language that is easy to to learn for for humans it could be easier to to learn for LLMs because as LLMs are really a proxy of our of natural language. And if we have a language that incorporates a lot of natural language, we could create an intermediate language between the goal of the user and the execution of the system. So basically, it's really a memory-centric system. We have different kinds of memory, and we can use them all at once, if we want in the same agent. We are not limited by the number of tools that the agents can use, simply because the system doesn't really choose the tool that it have to apply because they are specified in the program. So the way that it works is that each time it needs a program, the prompt is constructed in a way that it gives only the information that it needs for this particular tool. And so we are able to accommodate an infinite number of tools as long as you describe the whole logic into a graph. So there is different memory that we implemented. So the program memory is actually its dependency graph. We see this memory here. And it's a dependency graph why a dependency graph because in our system we decided to really create a kind of a software written in cipher that was the the innovation behind our system so you can describe the whole software and as a software you start with a main program you start with the main program. So you have the main program, which is the entry point of the agent, and it will interpret the program. So the interpreter is very like a Python interpreter. So in Python, for example, an interpreter is going to interpret line by line and going to execute the the given execution in our system the interpreter will navigate inside the the graph and will execute node by node the graph programs and that's really key to to understand i would say like it's really if you come from computer science, you already know what is an interpreter. You already know the differences between a compiled language and an interpreter. For example, our system is very different from Langraph, for example, because Langraph uses finite state machines. The thing with finite state machines is that you define you define them and then they are fixed they are static okay and there is no really a way to modify this uh this uh this state machine on the fly the only way to do that is to create an intermediate language that the the llm can uh read and modify so here it's what we've done with the Cypher language. We have also the memory, the document memory, which is now a classic in React system. I won't talk too much about that. It's really a classic now. The fact memory, so if you're here in this Neo4j stream, I think you already know what is a GraphRag. And the fact memory is the system that store the triplet. So the triplet is the subject, verb, object. And it's basically your determinist fact that you work with. And we have, finally, the trace memory. So the trace memory is a bit special. It's kind of the episodic memory of the system. But we didn't want to bring too much concepts from cognitive science. So the idea was to really use the concept from cognitive science, but ground every concept into computer science because it's easier for people with programming knowledge to understand it and it also it's avoid many pitfalls that people have when they try to mimic too much the human cognitive processes. So, the trace memory, what it does is, we basically record each action that the program was performing. So, actually, what you have is that in this memory, you have the trace of the program that's already executed. As I said, each memory can be used multiple times and all are at once in the same agents because they have special tools that we can use within the agents to access them to read-write. So the agents have read-write access to most of the memory system. About the repository, so we currently have started to get quite famous, and we are really happy about that. So... Well, it's good. Great. Yeah, today we had a really... I would say, like some stars are shining more than others. And today we had one star from François Cholet, which is the creator of Keras and the RKGI benchmark. And actually, we used some of his work with Keras to come up with an abstraction that was easy for people to understand and to leverage. So actually that was really quite something for us. And so yeah, so here is the link, see if you want to go and check and have a start to the repository uh next i put the link to uh to the github reports and chat okay thank you thanks uh so uh now uh we are going to drive into um into some notebooks so i will show you uh really uh i would say basic notebooks so it will be a graph rag, so you can get a sense of how to use the library. And then we are going to switch to a more experimental, I would say, feature, which is a copilot for our system. Cool. Okay. our system. Cool. OK. Oh, and if anybody's watching and have any questions, any comments, then please feel free to use chat and let us know. There was already a question on your slides, if they would be available afterwards. So maybe we can share them with the audience later. But the recording, definitely, this will all be recorded. And you can watch it afterwards, anytime, in our YouTube channel. Yeah. So Knowledge Graph Rags are nowadays, I would say, common. But when we started IBuild AGI for, I don't know the reason, last year, nobody was giving attention to graphs because everybody was focused on vector-only rags and react agents. And coming from robotics, that was really something weird for us because graphs are really powerful and they can model almost anything so actually they are really useful in many cases and many applications so here in this notebook we are going to build a graph that is about so a movie basically a movie accommodation system it's kind of a toy example but it's show well the how the system works so we have here our input data and we are going to build the facts that we are going to need in our system. So here it is. We have the facts that we are going to use. So basically, it's just the fact is a triplet that we have. So here we are. So here we just like create the facts for the fact memory. So the fact, as I said, are actually triplets. And then create a graph from them. As you can see, the nodes are not connected with each other, which is a classic problem in a graph database. It means that there is duplicate nodes. And actually, we can create a pipeline. So in iBuild AGI, there is a really handy way to create a pipeline for data processing. And it really looks like Keras or Hugging Face library. In fact, we took inspiration from them. So it's actually quite normal that the system looks like Keras or Aguinface. So we can stack modules with each other, and we can create a data processing pipeline. So here in this pipeline, we did duplicate the entities, and we embed the entities and the facts. And by doing that, we can create a knowledge graph. And based on this knowledge graph, we can infer some things. And in this case, we can recommend movies for the users. So here is the part that in iBrill AGI, so the thing is we create a graph program. And this graph program is actually done in Cypher, but we created a way to create them using Pythons because when we started our system one year ago, the only thing that we did is that we had the software written in Cypher. And this many people complained about because they didn't know Cypher or it was too complicated for them. So we took inspiration from Keras, and we created an intermediary language on top of the DSL to be able to create the programs in Python. So the program can have four different nodes. It can have control nodes, which are used for the start and the end of the program. It can have action nodes, which describe the use of the tool for the agents. We can have decision nodes. So the decision nodes are used for the system to branch over the graph based on a question that the developer asks for the system. And actually, the question, the good way to see that is not about really asking questions to an LLM, because the LLMs are quite limited and they really cannot say that they understand the prompt or whatever. But actually, they are quite good to do some one-shot classification in a one-shot setting. So you have to see the decision of the system as a classification task. So the goal of the decision is to classify the context and based on that classification, the system will branch over the graph to execute the action of the branch. So, and also the last node that is possible to have in our program are the program nodes. So the program nodes are actually like a sub-program in a programming language. So the interpreter have a stack, like in any, I would say, proper interpreter, and you can call other subprograms. And by doing so, you can call a subprogram, a subroutine, and you can go back to when the routine ends, you can go back to the program that called that routine. So it really looks like a programming language, in fact, for LLMs. you can go back to the program that's called the truth scene. So it really looks like a programming language, in fact, for LLMs. And so you create your program. And at the end, what you got is a program that is expressed in Cypher. And this program, due to the fact that this program is really, use a lot of natural language, the LLM is able to understand it quite easily. And it says here tool FactSearch, because Paul also has the question in chat, is that a tool you built yourself, or is that some, what is fact search? So the fact search is when you search for facts in the fact memory. Ah, I see, okay, the one you showed earlier in the slides. Yeah. Yeah, exactly, here. So actually it's going to look for triplets inside the knowledge graph. Okay, I see. We have a way to... What is really different from our system from a land graph is that because we have... I can also show you this notebook. Because the... I'm sorry, here... Okay. So this is another notebook, and it shows... Because actually the programs are embeddable. And what it means is, it means that you can have many subroutines. Like imagine you have millions of graphs and each graph is actually a skill for the agents. And so it's impossible to have in the main program, to create a program to have the good condition to call the right program, etc. Because at some point, if you have a growing library of programs or you have an agent that is really general and that have a lot of different skills, you need a way to search for the right skill to apply. And this can be done because we use the description. So here is the description of the program. We use the description of all the sub-routines to create an embedding for the program. And by doing so, the system is able to search inside its own program library and is able to search inside its own program library and is able to dynamically call a program when needed. So here we create a different program. So for example, this is the one that looks like a rag, basically, and we have another one that is about telling a joke, for example. So we create a pipeline to embed each programs. So we embed the graph programs that we created earlier. And based on that, we create a dependency graph. OK? So here we have the dependency graph. In this example, it's quite simple, but there is no limit on the complexity of the dependency graph, obviously. And what we noticed when we were experimenting with self-programming agents, because remember, the goal of IBDGI is to have a system that is able to program itself and to evolve. And I would say, like, learn new skills on the go. That's really the main goal of the system. And this, when we started to experiment with that, we noticed that sometimes the LLM wanted to modify its own main program, which is, if you think about it, it's quite not good. And we came up with a system to be able to protect some program. So in programming language, we have some scope. For example, in Golang, for example, you have... Oh, no, no, no, no, no. In C++, you have the scope of the... Some program have a scope, so you can be protected, etc. And here we have made a kind of scope for the graph programs by only allowing the system to call programs that were disjoint from the main program. Because when you create, for example, when you create, I would say, a learning program, so you will create a main program that says, okay, look for a skill to perform the user query. If that skill is not present in the database, so we are going to learn a new skill. So we are going to plan, create a plan, convert this plan into a graph program, okay? And we are going to save this into memory. And this is basically a learning algorithm that you can add. And your learning algorithm is in your main program. But you don't want the system, the AI system, to be able to modify this learning algorithm because it will mess up with all your architecture. So we have to come up with a way to protect some of these programs. And also, it's really interesting in the sense that we can implement some guardAR writes in the system by, for example, if you usually, if you want to, I don't advise to have only this GUAR write for a system based on LLM, because there is a lot of problems that can arise, like prompt injection, etc. But it's useful to have, like, I would say, a first decision when you receive a query from the user that will be like checking that the user query is about this topic, the topic of the system, OK? Or check that what the user asks is not against some ethical rules or some safety guards. And what you do is instead of calling the subroutine that's going to execute the user query, you shunt your system. So this is really useful to have in a system, and that's why we implemented this safety mechanism to the program. So for example, here... We have the input query of the user. It's called the Fulfill Objective subroutines. And this subroutine is going to find into the memory system, the program memory, other programs to call. Okay. It's going to search them. And then we are going to ask, are they relevant or not, etc. And if they are relevant, we are going to use a tool that is called CallGraphProgram that, based on the name of the program, will actually call dynamically a program. So basically here it's what happens. So the system is calling the detailjoke program that was searched previously here, and it's going to execute it. So it's able to execute the program. So here is the execution of the program. And then it ends the execution of the program and ends the main program. Are you guiding this a little bit by this question from Don in chat? Are you using weighted path concepts to help the system make decisions? Or how is your, are you helping the LLM a little bit or giving a little bit of a structure for it to come up? Yeah. To help the LLM to make the right decision, we actually use a chain of thought concept. So if we look at... prompting... here. I have a notebook that explains the underlying prompt that we use exactly. So when we are making a decision, decision-making steps. So here we have a program that has a decision-making steps. And we create the program as usual. And when the system is actually performing the query, we have this reasoning. And this reasoning actually comes from the SPI, because our system is built on top of the SPI. And the SPI have a way to easily create some chain of thought resonance. And so the goal is to, for example, in this case, it's going to create reasoning steps, I would say. Reasoning, it's not real reasoning, but you know what I mean. And it's going to help the system to perform better pattern matching by letting it think before answering. And by doing that, the system is able to have a pretty good decision-making system. About the decisions, there are a few ways to enhance them. One way is to actually ask something that is really grounded in the information that the system has in the prompt. Because when you ask LLM about a question ask something that is really grounded in the information that the system has in the prompt. Because when you ask the LLM about a question that requires its internal knowledge, there is a higher chance that it's going to miss the point. When you ask the system a question about something it has in its context, you can see it as like a regex, for example. Let's say you have a question that is going to be... So I retrieve some documents based on the similarity metric, so like a reg, and you're going to ask, is that relevant? Is your context relevant? Is the list that you fetched empty? If it's empty, you will output just the empty word. If it's relevant, you output relevant. And if it's not relevant, you output, let's say, irrelevant or iranus or something like that. I noticed that when prompting LLMs to actually make decisions, the more distance the classification word that you use in your decision, the more distance they are from each other in the embedding space, the better. There is less ambiguity for the LLM to answer. And if, for example, you have relevant and irrelevant, because the tokens are made of the same, the words are made of most of the same tokens, there could be an ambiguity in the decision, in the decision, and it can make the system choose the wrong path. So actually, the wording of your decision is really important. And I think it's the same problem in many, many system. The more distance the words for your decision, the more distance they have with each other in the embedding space, the less tokens they have in common, the better will be the output. Yep. So here about the decision system. Shit, my computer is completely dead. I wanted to show you also the copilot, but unfortunately, I don't have it here. So I can, if you have other, if there is other questions, I can ask and I will, anyway, I will push the notebooks about the Copilot tonight so people will be able to check it and to run it. JOHN MUELLER- OK. Is there any question about, for example, here is a notebook about creating custom tools. So basically what we do is we take the description of the function to create a tool. So it's really kind of basic, but it allows people to add new tools to the system to satisfy their use case. So here is the graph that we use. And the system is able to, for example, to fetch the price of a cryptocurrency and using the tool that we created, so the custom tool. I see. So I don't see any other questions, but maybe we give it a few moments. But what I wanted to know is maybe we can explain or you can explain what you wanted to know is maybe we can explain, or you can explain, what you wanted to show. I mean, unfortunately, we cannot show it now on screen, but we can at least maybe talk a little bit about it, and you can say what the general concept or the idea is and what it brings to the table that helps people work with Neo4j and hybrid AGI. Yeah, yeah, sure. So the Copilot, for example, the Copilot is really about creating. So as you see, like when we create a graph program in Python, okay, but at the end what you got is a programming cipher, okay. So the copilot is actually a simple program that is used to ask the LLM to create this program, okay. And that's where the usefulness of having to choose a cipher for the system helps. Because the system is quite like a large language model I've been trained on many programming language and many languages. And they are able to put a cipher program. And basically what for the system, we ask the system, for example, let's say you start with write a poem, okay? So the first step will be to convert this instruction, write a poem, into a plan. So this plan could be like a first step. I fetch some documents in memory to see if there is a poem about the given topic in memory. And then I write a draft. And after writing this draft, I ask the user if they're happy about the draft. And then I finish the, I give the last, the last, I give the, write the last version of the poem, and then I finish the program, okay? So the first step will be to like take this query from the user, convert it into a kind of natural language plan, which is going to be like a bullet list. And from this list, then you have a prompt that asks the LLM to format this plan into a program. So you give the LLM some examples on how it has to format the plan. And because each step is actually involving a lot of natural language and is just like changing the format. It doesn't have, the system doesn't have to learn or to, because actually LLM have not been trained on our DSL. Okay. So they don't know the exact DSL. But if we give them the instruction to create it, like for example, we say, okay, for each action, you have to create a node that looks like this, that have this IDE, that have a purpose field, a tool field, a prompt field, what you have to do, put in the prompt, etc. Given that in the previous step, you have given already all information, or you have created an intermediary step that allows the system to populate these steps, the system is able to actually create this program, this cipher program. And so the whole idea, and that's why we created an interpreter and not a static language. Because the goal at the end is to have... Let's say you have an agent, but you cannot program it for all the tasks that it's going to perform, okay? And if we want to have a system that is general enough, we have to come up with a way to create new skills and new tasks on the go. And the way to do that is to actually use this Cipher intermediate language. And so the system is actually able to create on the go this representation, this Cypher program, for example, is able to create something similar and to save it into memory. So then if you ask again the system about the same task, it will be able to dynamically call the program that it just created. That's why I League AI has these features. So actually the dynamic call is needed if you want to have a program that is able to learn new tasks on the go. Okay? And so we have to... It's a way because when you start your main program, let's say you start your main program, you just have your learning algorithms that say, okay, look for the programs in memory. If there is no program for the specific task, call create program, the subroutine create program, and then this subroutine create program is going to have some programs to actually come up with the actual cipher program and save it into memory. The library, when we look for known programs to solve particular problems, okay, at the beginning, the program memory will be almost empty. The only program that will be inside the memory will be your main program, and the program that your main program is called, using the program nodes. But as you use it, the library of tasks that the system will have will grow. It will grow over time. So actually, it was really important for us to have a way to embed the tasks, the programs, and to search for them in a dynamic way in order to be able to, when the library of program is growing in real time, on the go, to have a way to fetch the correct one and execute it. So that's why we have these tools like to search for known programs inside memory and to call them dynamically. And of course, to add it into memory to save it. OK, cool. Yeah, that's really cool. It's a shame we cannot see it now in action, but maybe, I don't know, if you can record a video at some point and maybe add it to your to your github page or something where then people can can follow up and and see how it how it looks like on the other side um the only other question i see here is from andrea that would bring us to the end anyway from andrea and they ask have you evaluated the models used in your repo and not used with text to Cypher? What are your results? I don't know if you played with that. So text to Cypher is actually very different from what we do. Text to Cypher is used, like traditionally, Cypher is used to carry a knowledge graph. So if you look at text to Cypher, for example, I don't know if you already looked at this data set, but it's about carrying the knowledge graph. Here, we use what the goal of the system is to create a DSL based on Cypher, but it's not the graph program is not a traditional query, I would say, that you use in Cypher. And it's really different. So here in this system, the Cypher language is used as a proxy, is used as a way to describe a graph in natural language, but it's not only about retrieving facts in a knowledge graph, which is two different things. So actually, it's very different from text to cipher. JOHN MUELLER OK, good. And maybe just to close it down, because you talked a little bit about this earlier, from large language models to local small language models to to local small uh language models is is what what are your experiences here and uh what uh why have you chosen you know the the tools you are working with now so first uh there is a different um i would say justification for for choice and design First, I've been working on AI for, I would say, a very long time. And the trend that we see on AI system is that when they always get better and smaller, okay? It's something that we... I've been working with computer visions. And when we started computer using deep learning for computer visions, the system was quite big and very quickly they reduced in size and they they got almost real time frame rates and they get better and better and more importantly smaller so that's that's actually a global trend in ml uh that's as soon as uh there is a lot of money invested in uh in machine learning the the size of the the models is reducing reducing and the capability is augmenting so that that's the first justification. That's why from the start we thought about if you have a system that requires multiple steps, the only way to reduce the cost of running the system is to actually perform computation on the client side. So actually on the computer users. And this can only be done with local LLMs. For example, let's say you have a system for predictions. Having the LLMs on the client is not really problematic. What is important is to have the graph database on the client is not really problematic. What is important is to have the graph database on the cloud, okay? So all your systems are going to be connected to Neo4j, to whatever the graph database you are using, okay? And so actually, when you are using a neural symbolic system, the importance of the system shifts from the normal network to the data to the structured data so the the the graph database is really more important than the large language model the large language model is used just as a execution engine but the the the whole value is in the graph database that's where you store your programs your programs or plans that's where you store your data your business data etc so actually you when you're creating a a program that is for for example for you have a team of workers and they want to have a graph right the thing that is going to be important to have on the cloud is the graph, not actually the LLM that you can run locally. And that way, you can actually reduce a lot the cost of running your systems. JOHN MUELLER- Yeah, that's great. That's a good point, a very insightful view on this. And yeah, it's really really good good that you that you you made that um connection here between using using your own data is is is what what makes the difference really and your data is can be should be in your in your graph database and then from there you can you can use any uh any other i i think uh you mentioned it when when all this uh craziness about llm started not everybody was uh looking into graph database but i think that a lot of professionals were actually looking into that because that's that was the way that they make the data-intensive systems are all based on the graph database. And graph database are really key for that. For example, I have some story when we started, for example, I believe HGI, we started with this graph. And they dropped the support for their graph database. And for me, I was really surprised as someone from robotics. And I think many professionals were really also very surprised by that decision. And so they wanted to stick on a vector-only database. And for us, it was really something weird because graph database are really like the missing points, the missing link, I would say, in the architecture. They give the possibility to have insightful analysis, but also you can use them in many ways. Like for instance, in Ibuild AGI, we use a graph database in many ways that are not traditional. But it's because the graphs can describe almost everything, they can model almost everything. So you can actually use that to model many kind of algorithms, many kind of models, and not necessarily knowledge graphs, traditional knowledge graphs. Yeah. Cool. Yeah, thank you very much. That's right. I think that was a good end point to our session. Yeah, like I said, links to the repository are in the video description on YouTube. Yes, I will check my computer. I hope it's doing fine. It's definitely out of our control this time. Yeah, no worries. The more fact, We talked about that. It's so... It happens. Don't worry about it. We can pick it up again at some point. So like I said earlier, next stream is tomorrow. We're going meta. But don't forget about the Neo4j Developer Survey, so that's currently running. So, And I will publish the notebook about the copilot on my LinkedIn, so. Okay, cool. And you can share that with me maybe, then I can add it also to the video description. Obviously, I will share it with you, and if people want to follow us on LinkedIn, I will be happy to connect with them. Cool. Yeah. And I think with that, we're at the end of today's episode. Thank you very much, Johan. Thank you everybody for watching. Like Johan said, reach out to him and the team if you want to know more about Hybrid AGI. Check out the GitHub repo, give it a star, and play around with it some more. And let us know what you thought of this episode by giving it a thumbs and and you know play around with it some more and yeah if let us know what you what you thought of this episode by giving it a thumbs up that would be obviously great and yeah with that have a good rest of the day and yeah thank you thanks for new 4g for having invited me and thank you everybody for uh for participating for participating in the in the chat so you | Neo4j Live: HybridAGI – Graph-Powered, Self-Programmable AI | 3,068 | Neo4j | 20241120 | Discover HybridAGI, a groundbreaking Cypher-based neuro-symbolic AGI designed to make AI adaptable, reliable, and knowledge-driven. With the power of graph technology, HybridAGI goes beyond traditional Agent systems by using a graph interpreter that dynamically reads, writes, and executes graph-based data, enabling self-programming and on-the-fly adaptation.
Tune in to explore how HybridAGI combines graph and LLM capabilities, allowing users to create secure, memory-centric AI applications with deterministic behavior and extensive customization through Cypher.
HybridAGI: https://github.com/SynaLinks/HybridAGI
Notebook: https://github.com/SynaLinks/HybridAGI/blob/main/notebooks/graph_programs_copilot.ipynb
Guest: Yoan Sallami
Knowledge Graph Builder https://dev.neo4j.com/KGBuilder
0:01 - Welcome & Guest Introduction
5:06 - The inception of HybridAGI: Addressing limitations of early LLM-based agents.
6:37 - Why Cypher was chosen for HybridAGI and its benefits for AI programming.
10:12 - Explanation of the four memory types in HybridAGI and their roles.
17:06 - Demonstrating a toy example: Building a Knowledge Graph for a movie recommendation system.
22:46 - Discussing dynamic program calls and embedding program descriptions for enhanced agent capabilities.
26:01 - Importance of protected programs for safety and maintaining architecture integrity.
32:55 - Insights into effective decision-making with LLMs and optimizing prompts for clarity.
37:22 - Exploring the concept of self-programming agents and dynamic task creation in HybridAGI.
41:00 - Why graph databases are crucial in neuro-symbolic systems and their role in reducing LLM dependency.
49:16 - WrapUp
#neo4j #graphdatabase #llm #genai #aiagent | 2024-11-26T13:05:01.454426 |
https://www.youtube.com/watch?v=g21royNJ4fw | In a previous video, we looked at light rag, which is a simple and fast retrieval augmented generation system that combines knowledge graphs with embedding based retrieval and is a great alternative to graph rack from Microsoft. They just added support for local models using all Lama and also added some visualizations of the graph. In this video, I'm going to show you how to set this up using Olamour. So everything is going to be running locally. Now I'll highly recommend to watch my previous video, which they have kindly linked in their main GitHub repo for better understanding of how light drag works. So the first step is going to be to clone this repo. In my case, the repo is already cloned. So I'm going to just use the git pull command to ensure that I have all the latest changes. Next, you need to create a virtual environment. In my case, I have created a virtual environment, which I'm calling lightrag. So we'll activate that virtual environment. And since there are changes to the GitHub repo, so we need to install all the new packages that are added. So if you want to build it from source, you're going to use the pip install dash e dot command. This will recreate the project from the source. Now to run the local models using Olama, make sure that you have Olama downloaded and installed on your local machine. Okay. If you go to the repo under examples, there is an example file called lightrag Olama demo. So in here you can provide your model name and also the embedding model that you want to use. By default, they're using the NOMID Embedged Text, which is a text embedding model from NOMICAI. But you can replace this with any embedding model that you want. Similarly, you can replace this LLM with any LLM that can be hosted. If you go to Olamour, you can search for models in here. In this video, we're going to be using the QIN2 model because that seems to be a pretty good model. Now for the embedding models, I will recommend to search for something like embed. And then you can see that there are a whole bunch of different embeddings that are available. And you will need to run both the embedding as well as the LLM through OLAMA for this to work. Now, one more thing to consider OLAMA limits the context window to 2048 tokens in order to preserve the VRAM that is needed. So we need to expand that context window because during the chunking and vector creation process, we need a much longer context window. Assuming that you have OLAMA running, we need a much longer context window. Assuming that you have Olama running, we need to start two different servers, one for the LLM and the other one for the embedding model. So we're going to start the server for the embedding model, which is easiest of the two. In that case, we will use the default NOMIC embedding. So just go to the Onama website and copy this command. Now in here, I'm going to paste the command and just run it. So this will download the model and load it. Next, we need to set up our LLM. Now you'll need to do a couple of additional steps. So assuming that we want to use the Quinn model. So so first I'm going to go here and copy this command. This will download the Quinn 2 model for you. The same process is going to work with any LLM that is available on Olama. In my case, the model is already downloaded. That's why it just loads the model. Now, the next step that we need to do is to change the context size or context window, and you can do it in two different ways. One is you type in Olamar run, Quinn two, and then once the model loads, you can set there, and then we are going to be setting the context window. You can type number of context and provide the new context window. So for example, if I want to set it to 32,000 tokens, that's how I'm going to set it. Now, if you type this in, it says that the parameter is set to 32,000 tokens, but I want to follow the official instructions in the light rack repo. And I highly recommend to do it because this probably is the easiest solution. Now the way we can change this within the configuration file is that we're going to ask Olama to show the model file for Qint2. So if you run this command, you are going to see that it will create another file within this folder. And this is basically the main configurations of the Qintu model that are stored through Olama. So I'll just go to this parameter section. And in here, we're going to add this new parameter, which is set the context window to 32,768 tokens. So just save this. And instead of overwriting the configurations for Quintu, we will create another model file. So this command will create a new model file. So this created a new model file for us. And now we can run this new model file using this new model that we created. So now I can do olama, then run Quinn to M, which is the modified model. And if you run this, this will load the new model with the updated context window. So now we need to go back to our example file that we want to run. In here, we're going to provide this new model name, which is going to be Quinn to M m that is the model we just created and here we are running our text embedding model okay another thing that you can check out is if you go to any terminal and just type olama ps that's basically show all the processes that are currently running so you can see that is currently running the quint 2, which is the modified model file that we created. And right now it's using the 100% GPU. So now we are all set. Again, we will need some sort of example data set to run this on. So based on the example, the repo has provided, I downloaded the same file, which is this textbook Christmas Carol by Charles Deakins. And we stored this as a book.txt. Now in my case, I deleted quite a bit of the text from this file, just to make sure that we're able to run the demo pretty quickly. But in your case, you will need to parse your PDF for HTML files and put them in plain text. And you can run the indexing process on the plain text. So just a quick recap of how this works. I already created this. I'm going to actually delete the index that was previously created. So first you have to provide the working directory. I am calling it deacons underscore ulama. Then if that directory does not exist it will create that directory for us now in terms of the configuration that we are using we are saying that look at the working directory so if there is no indexed or knowledge graph within that working directory it will start creating that for us then the we want to use the olama model so that's why olama model complete that's the. Then the we want to use the old armor models. So that's why all our model complete, that's the function, provide the name of the LLM that you want to use, which is going to M that we just created. And then the embedding model that you want to use. Now, I think you can mix and match. So for example, we can use a local LLM, but with the API based embedding model, that probably is going to work out of the box, but the configurations are going to be a little different. I'll probably show that in another video. Now, once we create this basic light rag object, then we load the text and pass that text to the rag system to index it. So it will create the knowledge graph. Plus the embeddings are going to be computed. I highly recommend to watch my previous video. If you haven't also, if you want to understand how rag system works, I have a course on advanced rack systems and rack concepts. We'll highly recommend to check that out. I'd probably create a more detailed video on light drag and include that in the course at some point. So do check it out. But then once we create the index, we can just run queries on the index. And the way you do it is you call this query function on your index. You provide the query. In this case, I'm asking is who is Mr. Scrooge and what is his relationship with other characters? Now it supports four different types of query mechanisms or query modes. I have discussed these in my previous videos, but you can run naive rag, then using the local search on the knowledge graph or using the global mode, or you can combine both the local and global mode. And depending on the type of the query, in some cases, this hybrid approach can be extremely beneficial anyways. So we have set this up now in order to run this, we need to run the light drag or limit demo file, which is under the example folder. So we're going to type Python examples and then light rag and Olama. Okay. So after running this, you're going to see a few things happens. So the first one is it will show you all the different parameters. The embedding model that we are using has an output, a vector of 768 different dimensions. The max number of tokens are here so seems like everything is correctly loaded we are using the correct model name plus the max number of tokens are set correctly as well so this is good to know now there was a single document that has 20 different chunks by default it's using 1200 tokens chunk. So it created all 20 vectors for those chunks. And then right now it's running the entity extraction process. So this can take a while depending on the length of your documents and you can provide multiple documents if you want. So first it ran the embeddings for each of the chunks. And now it's using the LLM to create those relationships. One more thing that you can do is you can go back to your terminal and then again, run the olama ps command. So it right now shows that two different processes are running. One is the LLM and the second one is the embedded model. The knowledge graph creation process is complete along with the indexing and it created 99 nodes with 31 edges for those 20 chunks. It took close to 10 minutes and I have seen the indexing process can take a variable amount of time for the same documents. So keep that in mind. And then we ran the same query through these four different modes. Now for the naive rag approach, it is I think taking the 20 chunks, but they are only using three chunks in this approach. And the result is pretty good. Not bad at all. So it talks about Mr. Scrooge and it says that he's a central character in the novel. Then he is a portrayed as a greedy and miserly businessman. Then the different relationships with different characters, they are highlighted in here. If you look at the local approach, which is basically using the knowledge graph we see the data is presented in a little different way so in here the query uses 60 entities 24 relationships within those three text units and i think these text units are very different than the text units. And I think these texts units are very different than the text chunks. So again, it talks about who Mr. Scrooge is, and then the relationships that this specific character has with some of the other characters. Now you see similar patterns for the other modes as well, but the number of entities differ based on whether you are using a global, if you were to add more data to it, it's a non destructive operation. So if you look at something like a graph rag, in that case, it will recreate the whole index. But in this case, you just need to call the insert function provide the text, and it will update the knowledge graph plus the index rather than evaluating everything. That is a good thing for a graph rag. Now let's change one of these questions. So I'm going to just say who is Bob. And if you run the same query or the same file again, it should take a lot less time because it's going to be just reusing the index that it already created. So it persists on the on the disk, and it's going to reuse that index for us. So I'm going to run this and then I'm going to show you how the index itself looks like. And then we were going to look at some visualization. So this is a real time process of how I'm running it. So it loaded that. And it says that the local query uses 60 entities, 26 relationships, three text units, depending on your hardware, it could take some time for running the actual query. Okay. So I would say it took close to 10 or 15 seconds, but he has the response. Bob is a character and he is portrayed as Mr. Scrooge clerk, highlighting him as someone who struggles financially and works diligently for Scrooge in his office. Right? So this is a quick response. We can see that the number of entities that it used was close to 60 with 26 different relationships. Now, if you go to the actual folder that it created, there is this graph file, which preserve the actual relationships between different entities. And we're going to look at a visualization of this. There are other files. For example, this is the LLM response cache. Then I think there is even like different chunks that it created, plus the entities that it found in the document, and then order the different relationships. So within the example folder, there are two other files that are, I think, interesting one is the graph visualization with HTML. So you can run this and this will visualize the graph. Now, in this case, I had to change the folder. So by default, the folder name is De in this case, I had to change the folder. So by default, the folder name is Deakins, but I change it to Deakins Olama because we created the index in this updated folder. So let me show you how to run this part for visualization. So we're going to type Python examples and then graph visualization. So we want to use the simple HTML one. There is also a new 4J, but in that case, you will need to create an account on the new 4J platform. So I just want to see a representation of the graph on a HTML file. So if you run this, you're going to see that it created a new knowledge underscore graph dot HTML file. So if you run this, you're going to see that it created a new knowledge underscore graph dot HTML file. We can open this file. Actually, I'll have to go to the main folder and then run that HTML file. So here's the folder. If we run it, now here is how the visualization looks like. We can zoom in. So in the center here, we have Scrooge and then the different relationships with other entities. So for example, Christmas Eve, Liberty, ghost, clerk, right. Then there is another one, another character. I am assuming there are some of these other entities, which don't really have any type of relationships. I was using a subset of the data. So it's not the complete data. And that's why probably not all of the relationships are preserved in here, where it's a pretty neat visualization, pretty decent overview of different entities and what their relationships are. This was how to use Olama. Now there are other demos. So for example, in here, if you look look at this file which gives you a demo of using the models available on bedrock you will just need to provide your model name that is served through bad amazon bedrock and then the embedding model that you want to use similarly you can run models available on hugging face directly without using ol Lama. Now I ran this example. The main issue with this specific part was that it was loading the model for each request. So for example, if it was trying to figure out entities in a chunk for each chunk, it would reload the model that takes forever. I hope that the team is going to look into this specific problem. There is another example for using other models through the API, which have, which are using the open AI compatible API endpoint. So potentially you can use something like Grok or Mistral in here. I tried to use Grok for this specific example. i was running into rate limits because it is making a lot of api calls there is also a demo on vram management because some of them these models can actually take quite a lot of your vram depending on how much context it's using from the llm so this is a project which is under active development but i have seen some really good progress in the last few days. Now, this was a quick demo of how you can use Olama with light rag. I haven't done extensive testing on how good this approach is compared to some of the other approaches, but based on the performance tables that they have shown, plus the results in their technical report, it does seems to outperform most of the other RAG approaches. So I think it's definitely worth checking out, especially it has an MIT license. So it's not too restrictive either. Let me know if you want me to make more videos on light RAG or any other topics that you are interested in. If you want to support my channel, there are quite a few options. Details are in a few options. Details are in the video description. Also make sure to subscribe to the channel. If you are interested in technical content related to rag agents and anything in between, if you want to get a better understanding of how light drag works, I created a previous video link is going to be at the top. Or if you are interested in the graph rag project, link to that video is going to be here. Thanks for watching. And as always, see you in the next one. | Local LightRAG: A GraphRAG Alternative but Fully Local with Ollama | 1,135 | Prompt Engineering | 20241021 | In this video, we explore how to set up and run LightRAG—a retrieval augmented generation (RAG) system that combines knowledge graphs with embedding-based retrieval—locally using OLLAMA. This video provides a step-by-step guide on cloning the repo, configuring local models like the Qwen2 LLM, adjusting context windows, and visualizing knowledge graphs generated from example data such as "A Christmas Carol" by Charles Dickens.
LINKS:
https://github.com/HKUDS/LightRAG
https://lightrag.github.io/
https://arxiv.org/pdf/2410.05779
https://microsoft.github.io/graphrag/
https://youtu.be/vX3A96_F3FU
💻 RAG Beyond Basics Course:
https://prompt-s-site.thinkific.com/courses/rag
Let's Connect:
🦾 Discord: https://discord.com/invite/t4eYQRUcXB
☕ Buy me a Coffee: https://ko-fi.com/promptengineering
|🔴 Patreon: https://www.patreon.com/PromptEngineering
💼Consulting: https://calendly.com/engineerprompt/consulting-call
📧 Business Contact: [email protected]
Become Member: http://tinyurl.com/y5h28s6h
💻 Pre-configured localGPT VM: https://bit.ly/localGPT (use Code: PromptEngineering for 50% off).
Signup for Newsletter, localgpt:
https://tally.so/r/3y9bb0
TIMESTAMP
00:00 LightRAG with local models
01:38 Setup with Ollama
02:53 Serving Embeddings with Ollama
03:40 Changing the context window of the LLM
07:00 Configuring the Ingestion process
08:12 Advanced RAG Course
09:14 Indexing and Knowledge Graph Creation
10:45 Testing it out with local models
All Interesting Videos:
Everything LangChain: https://www.youtube.com/playlist?list=PLVEEucA9MYhOu89CX8H3MBZqayTbcCTMr
Everything LLM: https://youtube.com/playlist?list=PLVEEucA9MYhNF5-zeb4Iw2Nl1OKTH-Txw
Everything Midjourney: https://youtube.com/playlist?list=PLVEEucA9MYhMdrdHZtFeEebl20LPkaSmw
AI Image Generation: https://youtube.com/playlist?list=PLVEEucA9MYhPVgYazU5hx6emMXtargd4z | 2024-11-27T21:50:39.747758 |
https://www.youtube.com/watch?v=99eGIphjrQ0 | or omniscience it depends on um i like the double entendre so yeah so the context for this and um okay so when i was doing a business impact assessment at uh kite pharma uh last year one of the things that was you know definitely clear is that um you know the impact of an issue and outage was often the patient's death and i mean they were manufacturing therapies that were therapies of a last resort where, you know, the client had already been through a number of treatments and this was sort of a therapy that was, could make or break them. They were having very good results in terms of their therapies, but, you know, that was very much their context. And, you know, so a couple of slides I have at the beginning or you know who are we talking about and i mean we're talking about patient level therapies we're not talking about mass production we're not talking about big assembly lines we're talking about sort of things manufactured in in like labs so yeah and in terms of the overall supply chain, and this was another aspect that I looked up at, because, you know, supply chain traceability all the way from, you know, they called this a vein-to-vein manufacturing process from, you know, extracting the blood from a patient that was terminally ill, shipping it to the manufacturer, infusing what's called a car gene, and then, you know, which produces car tea, and then they incubate it, they wait for it to grow, wait for it ready to harvest, and then expand it, and then, you know, package it to ship it off um back to the patient so you know the products that these um companies are manufacturing uh are just amazing and just so critical um you know to you know the patients their families etc so um yeah so just so much that that can go wrong and so the what sort of interested me in this was sort of the aspect of well how can we introduce redundancy and use llms or vlms in this case to reliably sort of improve the end result. So this is the manufacturing sort of facility. And so during different phases, I mean, it's literally in a room where you may have a biosafety cabinet, you have an incubator, and then you have the bench where, you know, other sort of devices sit. So it's very small. One of the drivers that they're trying to do in terms of manufacturing as well is drive things to, you know, being able to do inline monitoring. So instead of doing tests where they have to sort of extract a sample, you know, have a camera or other sort of devices where you can sort of inspect what's going on without having them disrupt manufacturing. So, you know, very, a lot of complexities, a lot of potential points for failure, but with a high sort of impact to failure, like at the individual level so and that's sort of the hardcore um intro before i was gonna ask if there's any questions but i included that in case there were to give people a little bit of context of why it was important and sort of some of the complexity and process flow so this gets a little bit more into sort of the vision specific and on the left-hand side, if you see, these are some of the operation steps and also some of the things that they need to look for, to see like in terms of a product, is it ready for release, so you know some of the simple things to imagine is you know doing something like cell counts and doing a cell count for specific cells expressing certain characteristics but also looking for things like cell death, etc. So, okay. So when I was doing a deeper dive, one of the very cool products I ran into was called Landing AI. And okay, I see my time. And Landing AI, it's from the team of Andrew Ng, you've probably heard that name. And the very cool thing that they did with their product is that they have a core process that does, in essence, like an architectural trade-off analysis. Like you give it the requirements, and based on those requirements, it sort of expands them to specific steps, looks for candidate products. And then based on those products, it will suggest and recommend it. And then the other key thing that they did is that they abstracted sort of the calls to the back end where these various LLMs can sit. And so in terms of object detection, which was the key sort of framework that I was looking for here. So like, you know, detecting specific types of cells, etc. You know, that was sort of where, you know, there was three core vision models, OWL grounding Dino in Florence 2. And so the, what I ended up focusing in on was looking for opportunities to be able to fine tune Florence 2. So the other thing that I did in terms of the solution is that, which even when calling API is not just LLMs, is that we've got a product called Weave from Weights and Biases that does an awesome job at being able to instrument these calls and help you reverse engineer the product very quickly. Okay, so in terms of when I was doing discovery and looking at alternatives in terms of vision language models so small but in terms of complexity detect brain tumor and brain tumor in terms of complexity it was almost like it you know the more words you threw at it the more difficult time it had um but you know tumor i mean it in terms of it nailed it it didn't sort of get confused by eye sockets. It knew where the tumor was. Dino, grounding Dino, which is an object detection model that's been around for at least a year and it's been strong, didn't detect it. And Florence II and PeliGemma, there was a couple of issues here. I dropped looking at PeliGemma because when I was querying, And Florence 2 and Pallegema, there was a couple of issues here. I dropped looking at Pallegema because when I was querying, I couldn't get a good baseline. A lot of the sort of detection I tried to do in these types of models, it wouldn't give me a bad inference. It would just say, as a generic model, I can't answer that. And so since I didn't have a good starting point, I didn't do it. But Florence 2, it's not here because Florence 2, object detection, you don't pass it a prompt, you pass it a task, you tell it you're an object of the request level, when it was doing object detection, to begin with non fine tuned, there was two different scenarios I looked at, I fine tuned both based on cells and cell types, and I also did it based on liver samples, so liver biopsy. So, and it's small, but blood samples, nine times out of 10, it was detecting as jellyfish. And these liver samples, based on a staining, wanted to say it was a flower. So, yeah, so that was kind of the starting point. And I tuned sort of more rounds with, or more epochs with the blood samples, but there was less sort of, there was fewer sort of samples overall. And I think the quality wasn't quite as good. And the complexity based on the number of sort of items to detect was much higher compared to the liver biopsy. The liver biopsy was primarily that, it wasn't 20 to 40 cells, it was, you know, one sort of liver disease or condition detecting. So, but very positive and very quick results in terms of being able to fine tune. Okay, so the proving ground that I created was just to be able to visualize it. It was. Okay. How much time do I have left? Is it over? It's telling me 8.54 remaining. Why is that? Yeah, Don, you are at time now. If you want to just wrap up with a final sentence. Okay. That's fine. I will. So sorry about that. Yeah, so in terms of issues, fine tuning was imperfect focusing on the blood samples. So that is an opportunity to both, I think primarily it's look at potentially switch away from the blood samples until I can get better sort of, you know, source data, improving the overall sort of evaluation and fine tuning data flow. I appreciate Chris's coaching and help there. Yeah. So the other thing is getting back to this solution from landing AI is get it to the point of, you know, dynamically sort of divide the work by a specialized team of experts that is vision language models based on, the specific sort of symptom that you're looking for. Thank you. Sorry for messing up time. No, you're fine. | OmniScience | 691 | AI Makerspace | 20240729 | Using a micro-services approach to build teams of fine tuned vision models for diagnosis and detection.
GitHub: https://github.com/donbcolab/omniscience-poc
Slides: https://1drv.ms/p/s!Am4UQ5DtcZMUgaBXcdhsmeO4hrgoNA?e=xwC5LN
Connect with Don!
https://www.linkedin.com/in/donbranson/
Apply for an upcoming cohort of The AI Engineering Bootcamp!
https://maven.com/aimakerspace/ai-eng-bootcamp | 2024-11-30T15:14:50.731577 |
https://www.youtube.com/watch?v=UmP0pFFsMsE | Hello everybody! Good morning, good evening, good afternoon and yeah, hello, welcome to the final episode of the year of Going Meta for 2024. It's season two episode number four. Can you believe it's December already, Jesus? It's crazy. So many angles, it's just time just flies, isn't it? Yeah, it does. It does. It feels like we just at the same time, it feels like we are just into April, but also in, you know, somewhat some completely different dimension. I have a question for you, Alex. Are we are we starting a new season in January? Or do we still always like an academic year till September? It's an academic year. Yeah, that's a good... No, I think we can continue with season two for a little bit more. I don't think we have to get up to 20 episodes for season two. We can make a break earlier. But yeah, as long as it feels like one story, I think we continue here. And then we maybe go about a season three eventually, maybe. Absolutely, yeah, no, we'll probably continue all the way to the summer, right? Might be a reasonable time to take a little break. Good, yeah, it's one of those questions that suddenly comes to you at the most unexpected time. Yeah, no, it's a good point. But I think we have a little bit more to tell about the topic we are covering since the beginning of season two. So I would say let's continue a little bit more in that direction, and then we can. We have plenty to tell. I think we're not going to run out of topics for the next four to five years. But yeah, probably not not no that that's true yeah so but yeah we we have uh uh an interesting topic today it's kind of like um you know we we progress here a little bit more but if you if you tune in for the first time if you if you watched a couple of episodes but haven't watched it as regular don't don't worry um we'll we'll keep keep you up to speed or or don't hopefully don't lose you uh this episode but you know you can always watch watch previous episodes um afterwards uh but today's topic is um and we try to make it i mean we always try to make it short and and interesting but this one yeah just again it was a bit difficult but i mean yeah we never tried to to pass it to to chat gpt maybe he could have given us a reduced version of it anyway but yeah that's that's uh that's our our challenge every every month every episode to try to come up with a concise title. But you know, it was hard. I mean, if anyone has any ideas of how could we have shortened these, you know, more than welcome. But yeah, we've spent the last three episodes focusing very, very much on the knowledge graph construction, right? The knowledge graph engineering part. So how do we build it out of specifically, mostly unstructured data or mixed data. And we looked at how to do it with the knowledge graph builder, you know, in a kind of no code fashion, where you just have to point and click and things happen for you that we look at the code base approach. And then we look at some kind of blueprints, as we called it, on how to approach more complex cases. And while I wanted to touch yet another angle, which is basically the use of the Graphrag Python package, I also wanted to look at it for the first time in a more kind of holistic way. So that's where the end-to-end comes, because we ultimately were building a knowledge graph, but we're doing it to then be able to do something with it. And to do RAG is where we're basically going. And I thought that that was a great opportunity for us to have a little tour of the GraphRag package. And I know you've been talking about it in other sessions with other people. Actually, that's part of the context. I'm going to bring up my slides, shouldn't I? Because I'm not sharing. And let me find my... I'm going to share my entire screen so let me share this one and bring this out of here and i'm probably sharing the wrong one right so as usual many okay not not what i was looking for but we'll find it here we go that's that's uh that's what it is so um yeah i was let's go on slideshow at least for this introductory part so yeah here's the the long title but i was saying that you know the the reason actually the context and there's always a reason why we bring something up is you know because i wanted to to take the time to work a little bit with the with the graph rag uh package and like i was saying uh i mentioned this was just a few weeks ago right the session that we had with alex and will on the yes that was uh exactly wrote to nodes workshop um ahead of ahead of nodes which happened um early early november so uh the latest one was end of October. So yeah, about one and a half or so months ago. Let me share the link in the channel as well to the workshop. Yeah, it's a very good one that also connects with the content in the Graph Academy, which also uses it. So I think it was a totally relevant topic. There's one of Zach's most recent posts. And then the, I actually forgot to put the URLs. I'll put them later. But that's the documentation, the manual pages for the GraphRack package, which has had a recent new release, and that's kind of a continuous thing. But yeah, so that's sort of the context and why we wanted to connect that with what we have been doing over the last few sessions, few episodes. So that's the plan for today. So this is the idea and that's kind of an extension of what we have seen. And we always, in this series, we always want to go meta, right? So we just not want to only use the package, but we also want to, in this case, to have its behavior driven by an ontology, which is something that we've seen in the knowledge graph engineering, knowledge graph construction part, but not so much. Yes, we have, because if you look at previous sort of season, we did it a few times, right, in the kind of advanced RAG sessions, episodes where we looked at how ontologies could do that. But I think that this was a good opportunity of connecting that with the Python package, because we're going to see how the ontology is, again, relevant in how we define what the structure of the graph is going to be, what kind of entities, what kind of relationships, and we're going to be able to inject that in the entity extraction step. We're going to see how it also, and that's something that we've only mentioned but not really got into, the entity resolution part. One of the problems that we saw is that, for example, Alex, how we imported data from unstructured sources, and we ended up finding nodes that were actually the same thing, but were not being merged properly. So we'll tackle that in at least a first iteration, a first pass of that, and we're going to see that the Python GraphRack package is going to help us, but it's going to be driven by whatever we define in the ontology. That's the objective, as always, is I don't want to write the code. I want the code to be generated for me or to run for me. And all I have to do is describe in the ontology what I expect to be the behavior of my whole system. And then it also, again, it's used in the RAG step, because there's many RAG patterns, RAG approaches, RAG strategies that the library offers. And we're going to see how the information in the ontology can also feed that. And I've kind of marked in yellow there all the elements, all the steps in the blocks that are actually implemented with the with the graph rack package so so these are the things if we have time that i'm planning to cover and um yeah of course at some points it interacts with llms as we've seen for the entity extraction for the rack part and uh ultimately there's also neo4j acting as the other knowledge acknowledge graph platform where we're going to be storing the graph that we built. So that's the idea. Make sense? Yeah, that's cool. It's good to see that this is pretty... It's called the GraphRack Python package, but you think maybe it's only usable for one specific use uh if you if you don't think about it or if you don't read about it what it can do but now this this this graphic here shows that it actually is much much much more versatile and can be used in uh into different um you know parts yeah yeah the entity recognition part and integration part i think is both are you know topics that came up ever since we we touched this for the first for the first time i think every every now and then we get the question hey but what about this uh how about a duplication how about uh you know recognizing my my entities basically what what do i how do i map them correctly and uh it's good that we you know it's it's it's it's difficult but we try to you know and and this this uh this package really really simplifies it you're going to see how the the code is there's going to be a fair amount of code but it's all pretty pretty straightforward to follow and and yes to your point i mean, I mean, this is the documentation. You can probably pick the link there and share, but you see how there's the three main elements. So there's the Knowledge Graph Builder part, which we're going to look at first. Then there's the Rack part. And then there's the notion of pipeline, the definition of a pipeline, which is a whole construction. Even remember from a couple of episodes how we wanted to build a graph that had the two parts, the domain graph and the lexical graph that chunk the unstructured data. So, so these will take care of all that. And it's very well described. So I encourage everyone who wants to play with it to go and look at the documentation because it will show all the steps and all the capabilities in the package, which is pretty comprehensive, as you said. And similarly, on the RAC side, the different approaches, the different types of strategy, the different retrievers, vector-based, vector and cipher, and custom retrievers. Lots of really, really high quality stuff in there. So that's what we're going to be doing today. So that's it. Yeah. And to quickly maybe cover, because there was a question in chat from Zaki Coder, and they ask, is ontology necessary or recommended so if you i guess if you if you work with the graph rack python package you don't need an ontology um you can just use it without without the ontology but because of these previous episodes where we had the ontology as a core part of um of us trying to to model data trying to get data in and trying to help with the ontology to make it a little bit more sensible. We are using ontology again in today's episode, but if you don't want to use an episode or you don't have an ontology available for you, then you can just not. Yeah, exactly. And if you go to the examples, I mean, in the documentation, you will see that in order to do entity, I mean, you can do blind entity extraction, as we've seen, but you will want to give some guidance to the entity extraction element on what you want the graph to look like. And you don't need to use a formal ontology like we're going to be using, you have to provide some kind of description, but then you're going to use probably the same description to feed to the rag step when you want Cypher to be generated on the fly out of some text. So you might as well have a standard, formal and controlled way of doing it, which is exactly what an ontology does so we find it a convenient way of doing it but strictly speaking I mean you may call it an ontology or not but you have to have this kind of definition and it might be more or less formal it will be managed in one way or another and and it's entirely up to you how to you how you want to do it yeah yeah that's a good point maybe too ontology always sounds like such a big word you you have to you want to do it yeah yeah that's a good point maybe too ontology always sounds like such a big word you you have to you have to set so many things up but all you do if you if you're working with a set set you know types or entities or labels or whatever you call it properties you always have to set up usually we recommend this set up some kind of semantic definition of what's what, basically. And this is an ontology, so you would create this anyway, but maybe it's just a very small one for your specific use case, for your specific application. Then it sounds like ontology has to have thousands of entries and listings of things. It can be very small, very tight, but it is, in essence, an ontology has to have thousands of entries and listings of things. It's just maybe it can be very small, very tight, but it is, in essence, an ontology still. Absolutely. And like we all say, never reinvent the wheel. Reuse what's out there. And we see how our ontologists pick things from schema.org, from maybe your organization has an enterprise ontology that already exists. And again, don't boil the ocean don't you know start small and uh and then grow from there i mean the powerful thing of this approach is that you can grow the ontology incrementally and and then repass this construction pipeline and keep adding elements to your graph so it doesn't have to be a daunting exercise it is it's daunting exercise it's kind of a natural thing that you're already doing and that there are tools that will help you kind of putting a bit more structure around and that's exactly what we're going to be showing today cool should we get started well just I'm going to again apply again, apply what I'm preaching, which is reuse what our datasets that we've used. So we had that from a couple of episodes where we looked at the Tate Modern dataset. So I'll reuse that. I've extended it a little bit, but basically it's going to be information about artists, artworks, and that's what we're going to be building the graph about so um let's uh let's start let's get things going and and i'm going to jump straight in the in the code and um let me uh well first of all let me know if that's readable i suppose that's more or less okay i think it should be fine. Yeah, from what I can. Perfect. No, this would be good. Yeah, if you think it's too small, please let us know in chat. But from my viewpoint, this looks good. Perfect. So I have a Neo4j database, an empty one here running locally, and I'm going to start, I've kind of broken it down in several parts, so I'll go through them in detail. The first thing that I'm going to do is just create the vector index. As you remember, we're going to load unstructured data, this description of artworks from the Tate Modern, and we're going to create a graph out of it according to our ontology. So this produces two graphs. Oh someone's saying that it's blurry. I don't know if that might be down to my broadband, my internet connection. You know it should be fine but you know maybe check that you actually receive uh in in uh 720p at least or i think linkedin only supports 720p um if you have the the highest receiving resolution if you still don't get a better picture you can always try to switch over to YouTube, which should support up to 1080p. But from my view, I don't think, no, I don't see your blurry, meaning you could try to zoom in once, but just to make it a little bit bigger, but then it's very big. So yeah. Yeah, I mean, the problem now is going to require a bit of a. So I was saying that we're going to produce this kind of double graph or two-layer graph where we have the lexical graph that's the one formed by the chunks that are created out of the unstructured documents and then the domain graph where what will have the entities that we are going to be extracting. This is purely initializing. As you can see, I'm pointing where's my local Neo4j instance is running locally, my credentials as you can see, high security, that's my user and password, and I'm just giving my index a name and defining this is important, the dimension of the vectors that's going to be storing. Other than that, it's just literally creating the vector index and giving it a name and specifying that it will be applied to entities, to nodes labeled as chunks. And in particular, we want to embed the property called embedding so that's that's all this does so if if we run it there will be no output only an index has been created on the neo4j side and we notice it because there is a um kind of a node label there and if I do schema, we will see that there is now a vector index here that's online and that's called chunk index just like I called it. So nothing more than that. That's just pure initialization. And with that, we're going to go straight to this simple, I've called it Python code. This is literally a copy paste with very, very minor changes of the example in the documentation. So I'm talking about this is the graph rack package. If you go to the Knowledge Graph Builder and you go to the, I think I took it from here, the pipeline structure, components, if it's described here, here we go. So I think there's a default pipeline, maybe it's in the actual GitHub. And yes, we will be sharing the code with you. It's in the episodes GitHub repository. There's a full repository. The link should be in the video description already. And each chapter or each episode has its own folder with all the code that is shown in the episode. So yeah, I was saying this is actually in the GitHub repository. So if you go to GitHub and we'll share the URI as well, you have a simple example here of an implementation of a simple pipeline, which is called simple knowledge graph pipeline, as the name shows. So I've literally copy pasted that and I've made small changes to it, but that's what we're going to be following. Other than the imports at the top, and I haven't been very careful or very clean on my... I still have the definition again, but this could be factored out, obviously, what's the pointer to my local instance and my credentials, but basically, I'm creating a driver, I'm going to be interacting with my Neo4j graph database. I create a driver instance, and here's where things start to diverge a little bit. We said we're going to have an ontology that's driving the behavior of our pipeline. Here's where I'm reading the ontology. Some of you must have seen us using the RDF lib library, which is probably the most popular one in Python to deal with RDF data, which we're going to use to read the ontology. The ontology is here. I'm not going to show this time. We created it in previous episodes. I think we use normally protg or web protg, some of the open source ontology editors there. But it's like you were saying, Alex, it's very, very, very simple. So this is what we call the art ontology. And there we define, I think it's in different order. So we define two classes. We define the notion of an artist and the notion of an artwork. Right. And all we did is say is that an artist is a class, a category, which happens to be a subclass of person. There's an artwork, which is a subclass of creative work. As I was saying before, we connect it to the schema of the orgontology. That's optional if you want to do it. But we're defining a category and we add a small comment indicating what that is. So it's a creative of artworks and an artistic creation. That's the definition of the two main categories that I'm caring about. Then we have relationships and attributes. If we go to the top, we see that there is a property called artwork known as, which is a pretty long name. I could have called it title, but why not? So that's just to show that. Actually, it's a subproperty of name, but it's a property of type string that applies to artworks. It's the name, the title, or a way of identifying an artwork. The same with professional name. Professional name is the name of the artist, but because artists sometimes don't use the real name, they might use a pseudonym or something, I decided to give it a... But, you know, again, to your point, Alex, this is a very simple enumeration of these are the categories, these are the properties. It's using the RDF standard. No one's meant to write this, so we create that with an application which is visual, we save it and it's stored in this RDF format. Same with the relationships. We say that an artwork represents a person, that an artwork was created by an artist, these sort of things. It's a very simple definition. The good thing is this is formal, we can parse it and we can make use of it. We're using it here to create the schema out of the ontology. Let me explain what I mean by that. Actually, let's put this on hold. So we read the ontology that's telling us what the graph is going to look like. Now, the thing is it has to look in a particular way, which is the schema that the graph rag package expects. And this method is actually applying the transformation. I'm going to show you in a minute what that looks like. But that's what we do. So we provide the schema. Then we need a splitter, which is basically a configuration on how we want to chunk our input data. So I'm going with some standards. Well, again, there's no standard, really. Whatever works for your case. I'm saying that the chunk size I've been playing around and these behave reasonably well is 2500. There's a minimal overlap so that there's no abrupt cut between the chunks. Then for each of the chunks, I need to create embeddings and I'm using a service, in this case, the OpenAI embedding service, and I'm using this service, in this case, the OpenAI embedding service, and I'm using this small model. I didn't want to go for a larger one. I think that's sufficient for this example, but not real code. I mean, this is just like I was saying, copy paste from the other case. And then there's going to be an LLM instantiation for the entity extraction. And again, standard, all that specifies which model do I want to use. I'm going to go with GPT-4.0. I may want to limit the tokens that I'm exchanging to keep the costs and things like that. So this is exactly the configuration that I've copy pasted from the previous example. And here's where I instantiate the pipeline itself. So I pass all the elements that I've created, the driver to interact with the graph, the LLM to do the entity extraction, the splitter that I've configured there, the embedder, the service for creating the embeddings, and ultimately I pass here the schema. This schema is in the form of a list of entities, a list of relations, and a list of potential schema as it's called, but basically it's triples of the form source, target, and relationship. Basically, I'm indicating the Knowledge Graph Builder how to construct the graph. I'm telling it to ignore if there are errors. I don't care too much for this experiment. And in this case, I'm not parsing PDF. I'm parsing text data directly from these files that I've downloaded. Makes sense? So this is what I'm going to run. And what I'm going to do basically is iterate over the files that I have. And this is just a local download. I wanted to avoid the usual problems reaching out to the internet and having issues. So this is just the text. It's the text that comes in the description of the artworks that you get in the modern catalog. So it's just a text out of there. All natural language text, as you can see, there's nothing structured there. And this is just a loop I'm saying, well, iterate over the files in this directory, which is the data directory, and pick the ones that end with txt. And for each one of them, all you have to do is read the context of the content, sorry, of the file. And then just, I indicate that I'm going to be processing that and just run the knowledge graph builder I run the pipeline and it will do the whole thing it will process the text chunk it embed it entity extraction and then populate the graph I don't want to run it on all of them so I'm going to to add an additional and file name starts with maybe just a bunch of the Hockneys. Hockney and image. So, it's just a way of telling it to load just one, two, three, the first four or five, right? Not all of them because i'm committing out this line where i do the the the resolution the entity resolution afterwards but i want to do it in in two steps just to see how things work so i'm going to save this and i'm going to run this so if if i if i run this step and hopefully there's no errors this is going to start processing each of the files one by one. Image of Kent, so probably it's going to run these three or four. And as it does, it's going to apply the pipeline that I was describing. So chunk the files, create the lexical graph representation in Neo4j, and we can go to Neo4j and start can go to Neo4j and start see the results. So the first thing that we're gonna notice is that we see a series of chunks, right? So this is just fragments of texts that have been embedded. So you see that's where the embedding step has taken place. So we have called the service, the OpenAI service that takes these texts and creates an embedding. And it all happens I would say transparently but quite easily quite compact way because I didn't I only have to configure the the elements in the pipeline and it does the whole the whole work for me so um right so now these uh these chunks contain the the kind of unstructured data vectorized but at the same time we have been extracting entities like artists, artwork, basically the elements that we described in the ontology, and persons, and then they are connected through the was created by, represent, works with, influences. So these are what we call the domain graph. Actually, we can maybe look at it with a query. Maybe let's check if it's finished. Yeah, it's done. So it's loaded five of the files. So we have the chunks that are going to make it possible for us to do the vector search, the vector rag. But more interesting is the domain graph. And for that, I can do something like, let's look at what's created by. So we see small fragments here. So we see the problem that we saw last time, right? So we, let's put it like that. There you go. So we see a number of artworks, right? And for each artwork, we're seeing the properties that we describe, which is artwork known as, I know it's a bit weird as a name, but it worked well, basically picked up my recommendation, which is, it describes what this is going to look like, and it's calling it artwork. And in the same way, David Hockney has a professional name instead of just a name. That's the way we expected things to be. Now, the problem is that I see one David Hockney has a professional name instead of just a name. That's the way we expected things to be. Now, the problem is that I see one David Hockney here, another one here, another one here, and these are meant to even Hockney without the first name. So I would expect these to be all connected, something like that. So this kind of entity resolution don't know if in the in the artworks there's some case but there might be also an image of cilia there you go so this is this there's duplication here as well so you see these two other things so now i'm i'm not being totally fair because by default this pipeline that i've just run in the GraphRack package will do a first pass of entity resolution. But without any configuration, all it will do is it will take things that have the same label and the same name. Because I'm a bit particular and I've not used the name property, I've used my own professional name and things like that, it didn't pick it up. But if you use the default name, you would have seen some of the duplication. So don't think that that's magic. It's the default behavior. Now, because I don't, because I'm not using that, I need to configure it myself and say, well, these are the properties that are meant to be interpreted as kind of primary keys. So if you find two David Hockneys of type person, type sorry artists well then these are the same pair of nodes so this is exactly what I'm going to do in the second pass so that's the better it's good to remember because if you do this you will have probably something that matches with predefined sessions like like like persons or title or, but you will have something completely on your own, which is obviously not predefined and you will have to then help the system because otherwise it'll make these errors. Absolutely. There you go. I can extend it and get all the other properties and we see that it's produced a pretty decent domain graph already. The yellow ones are persons. For persons, why do we not have the name? It's interesting. There might be some, sometimes the chunking makes, it knows that there's a person, but it doesn't get the name. But sometimes we get it right. So not a problem here. So this is already an interesting graph. So we see Hockney at the center because he's the author that we're looking at with some of the artworks. Sometimes the artworks represent individuals, right? So these three paintings, image of Celia, red Celia, and Mr. and Mrs. Clarke actually represent the same. Now the problem is, as we were seeing, that there's duplication. So we want to solve that problem and also the additional data. So let's go into the second step, which is the entity resolution or the duplication, if you want to call it like that. And again, that is going to be driven by the ontology. So that's the bit that I had commented down here. So there's the notion of a resolver in the GraphRap package. And quite conveniently, there's one that's called a single property exact match resolver, which has a long name, but it's exactly what I need. So basically, I want to say whenever you find an artist, there's a property that I want you to use as the primary key, which is the artist, what did we call it? The artist name. And again, my idea is I don't want to write additional code. I just want to describe it in the ontology. And the way you do it, this might be getting a little bit detail, is by using this keyword called inverse functional property. So that's, in simple terms, primary key. That means that when two triples have the same value for that property, it means that the subject is meant to be the same. Or in other words, if two artists have the same value for that property, it means that the subject is meant to be the same. Or in other words, if two artists have the same artwork known as, then they are meant to be the same. And the same applies to the professional name. Sorry, no, that's the artwork, not the artist. The professional name is the one that applies to the artist, and it's also inverse functional. And the artwork known as is like the title of the artwork. I mean, that's not the case for all the types of relationships, but it is for these two. So what I've done is, again, I've created a function that will go to my ontology and read, is there any definition of these sort of primary keys? Because if there is one, and this is all in the utils function, so it's a very simple parsing that gets the primary keys. It basically reads the graph and gets all the instances of inverse functional properties, and I return those. And that's exactly what I'm doing here. I'm going to go and read these inverse functional properties, and I'm going to create a resolver for each of them. I'm going to say, go and apply this entity resolution on the list of primary keys that detected from the ontology. Just to continue with the test, what I'm going to do now is I'm going to comment out this fragment. Just a bit dirty hack to rerun exactly the same code. I'm going to run this, but now it's not going to run the pipeline, it's going to go straight to the resolution. If I do that, that's very quick, but it's done it now. If we go to the graph and let's keep the old version here, and I rerun the same query, we should get a very different graph now. And we see things linking a little bit better. Now we still have the problem of Pablo Picasso and Picasso and Hockney and David Hockney, but all the David Hockneys have now been merged. So you see how we had one, two, three, and four David Hockneys. They have all been merged into one, and the same has happened to the artworks. So basically, it has... We also had like four little graphs, and this is now one. Correct. So that's this idea of having the entity resolution built into the pipeline and driven again by the ontology. Of course, this is still not perfect, and this is one basic way of doing it. So I'm saying, in this case, if the artists are called the same, then they should be the same node. We still have the problem of Hockney and David Hockney. Here, we're going to be able to do several things. If you think of it, string similarity might not work because it's a big distance. If you compare them using, I don't know, Jero W or um um which one is the the other one that we typically use them well there's there's a bunch of string similarities string distance uh functions in in in apoc standard ones uh but these typically count uh with different algorithms kind of the the number of of swapping, swaps in the characters, right? Or the number of changes required to go from one to the other. What's the name of the other one? I never remember it. It's Levenstein. Levenstein is the other one. There's three or four implementations, but I don't think that is a very good approach. So, I mean, one thing that we could do is, in these cases, is maybe have some reference data that could help us. So in this case, we have from yet another episode, the Tate Modern catalog is one that we love and we use several times. We have a structural representation of all the catalog, and we have all the artists there. So we could have that as a master and we could... Thank you. Yeah, people are reminding me of Levenstein. So here you have to be a bit more creative and more custom in this case. So you might, I mean, you can build that into the pipeline, but you might want to create your own custom resolution in this case. This is a topic that we will work a lot more because it's richer and it requires a lot more attention. But I guess I'll pause here on that idea. We do a first step, very basic one of entity resolution here, and we'll look at it more in future episodes. But that's the second step if we go back to our diagram here, right? So we, if you remember, we start from unstructured data, we do the entity extraction and also the chunk extraction, but that's kind of the easy part. And now we've done the entity resolution on the duplication, and both steps are driven by the ontology. Now, the data is in the knowledge graph, and the third sort of step is that we are going to interact with the graph, to ask questions from it, and we can do a number of different things here. So let's go back to the code. And well, actually, why don't we maybe clean the graph and now run it with the whole list of files, which will create a richer graph. And we chain basically the construction with the application. And now I can remove the additional condition. I don't want to run just the Hockneys. I want to run them all. But for that, I'm going to maybe, I can empty the graph. And I could even, well, if we want to completely wipe things out, I mean, this is not really neat, but I could drop the index that had been created. Oops, I didn't paste it. Oh, I thought I had copied it. There you go. So that drops this index. And we are also going to drop the vector index. OK. Just to show how we can get rid of everything. We can drop this index here. And like this, oh, there's a dash in it, which it doesn't like. So I'm going to escape it. OK, so now the database is back to its original state. So if I run schema, we have just a lookup on nodes and relationships, so entirely empty. And we can run it again. So we start by recreating the vector index, it's just the one-off thing, and we can go back to the simple data pipeline, knowledge graph creation pipeline, that will run the whole thing on all the files, and then run the entity resolution on the primary keys that are defined in the ontology. So if we run that, let's give it a few seconds, and it will process. They're not massive files, so that should go relatively quickly. And we'll see the graph show up on the other side. Should we take a minute to see if we have any questions? Yeah, I think that's a good point. There was one from Submendendo and uh he asks about the the entity resolution if fuzzy matching could be could be something um or pm25 which i don't actually know what bm25 is absolutely honest with you also um yeah we we could we could and we'll explore all those all those options. I mean, there's not only a single one that does the job for us, as we're going to see. We could even, yeah, we could do semantic search. We could embed these elements and find this sort of semantic similarity, right? So we could string similarity, semantic similarity, right? So we could string similarity, semantic similarity, structural similarity. I mean, we will definitely look at things like leverage the graph data science library. So there's this idea of node similarity based on the neighborhood, right? And probably once this is finished and the thing that duplicates, we'll find that the two, Hockney and David Hockney, will have a lot of neighbors in common because they probably are linked to the same artworks. That's giving us a clue as whether these two might be, even if the name is only partially similar, they have a lot of neighbors. These metrics that we can get from the Crap Data Science Library is something that we're going to build also into the duplication step. So yes, I mean, all techniques will be relevant and we'll explore those. K&N, absolutely. Totally. Good point. And then John paid attention last time and he says, if I recall correctly, you were previously considering using Pidentic models to define the ontology. Did you find an issue with this? Hence, you're using RDF. No, no, no. Here I'm using RDF, but in a totally different way. Because, you know, actually, in a way, I mean, I'm totally abstracting what's happening behind the scenes in terms of writing to the graph. I mean, that's a very good point, actually. So I don't know how the graph frag package is actually interacting with the LLM to do the entity extraction. So that's a guess, but I suppose if it understands that the LLM can use functions, maybe it can pass a pydantic definition of the target schema. That's a very good one. It's not my choice here. I'm using RDF just because the ontology is defined in RDF, but that doesn't mean that I'm injecting it in the process. I mean, actually, when I pass the configuration, the target model structure, I'm doing it in a kind of natural language or even JSON format. So that's a very interesting one. I'll actually ask the team and try to understand how these kind of, whether that's generating cipher, if it's generating any kind of intermediate format, or how this actual approach works. It looks like it's thrown an error. What's happening here? Some, oh, it's retrying some, oh. Hognit, trade, self-clients, and what did I do? What did happen? I'm not sure what's going on here. To make sense of this following exception event loop is closed. This file opening element. Event loop is closed. I don't know exactly what's happened. I don't know if it's died halfway, but... Wow, it's... How far did it get? Let's see. What do we have in our graph? That's a good point. Let's see if we have the previous query so it didn't get to the it probably died before the um did the duplication looks like another problem we can run it manually so i don't know if it's run uh don't need to have it complete doesn doesn't matter, but anyway, so we can go to this step and just run the validation. That was quick. And hopefully that will give us some more, a better structured graph. OK. There's probably too much in it, but command scroll to zoom in. So OK. So we have even influencers here. We have works by Picasso and others. Well, that looks decent to me, so it might not be perfect, but that's enough of what we need to go to the next step, which is where we wanted to interact with the graph, to do RAC actually, to ask questions and to have the retrievers do the job of extracting the answer out of the graph. And we can do different things. So the first one is... Let's show the code here. And again, that's a pretty simple approach. I mean, I'm repeating the whole configuration of the graph connection and credentials, but all I'm doing is I'm creating a connection to the database. I determine which embedding service I'm doing is I'm creating a connection to the database. I determine which embedding service I'm going to use. Same as before, it has to be the same embedding service that we use for the Knowledge Graph construction. Remember what's going to happen in the retrieval step, in the rag step. So we're going to have a question in natural language. We're going to embed that question. We're going to have a question in natural language we're going to embed that question we're going to do a vector search at least for this first version of the retriever we're going to do a vector search find the chunks this is going to use the um lexical graph find the chunks that might contain potentially relevant information and we're going to decide which ones to keep. I can decide top one, top two, top five, however many. And then we're going to generate an answer out of it. And I don't know if this is still valid. I don't know if we have this element here. We can check it because that was from my previous tests, but we can take one example for example here red cilia or something from picasso for example la femme qui pleure so who painted that we're going to ask that to the who painted. And if I run that, that will, like I say, vectorize. Let's remember what's happened here. So it's going to take that question. It's going to create a vector out of it. It's going to do a vector search. That's what the vector retriever does. It uses the chunks that we have populated in our previous step. And then it's going to return, in this case, the best match, the one that we think contains based on semantic similarity, a vector similarity, the one that contains the answer, and then we pass that to the LLM. In this case, to, I think it's GPT-4, right? That's the model name that will ultimately generate the answer. And what's produced as answer is that Pablo Picasso painted La Femme qui pleure, which is correct, right? So that's the answer that we were expecting. Good. But that's not very exciting, right? Because we know that vector you know, vector search works, but it can hit limits or hit problems when the answer to the question is spread across multiple vectors and we have to combine, we have to rank, we have to prioritize. And again, I'm not going to go into that because, you know, we don't have time in today's session, but we explain how the combination of vector plus graph exploration solves that problem. And I would invite everyone to go to, I can't remember, episodes, I think it was in the 26, 27, when we talked about graph rag in the previous season, and we explained in a lot more detail how that can be improved. So, but what i'm going to to test before we conclude today is the text to cipher because that's the you know the the other case where i wanted the ontology to drive the the the creation of the cipher so i'm going to do something similar so now i'm going to have a different retriever, and that's what this piece of code does. Usual connectivity to the graph database, LLM to generate the answer. In this case, we're not going to use vector search. We're going to try to generate the Cypher query out of a question in natural language. And for that, we need to tell our LLM what's in the graph. And I know what's in the graph because what's in the graph is what I described in my ontology. I'm going to pass the ontology again to the Cypher generator. The Cypher generator, this text to Cypher retriever, will take a Neo4j schema. The Neo4j schema is nothing but the content of my ontology. I could pass the ontology in its raw form like we saw last time, but we saw that it was nicer and sometimes it behaved even better when we passed it in natural language. I can put the print here if you remember, Alex, but basically it just reads the ontology and puts it in a much more nice and readable form where it says, the entities are these, properties are these, and the relationships are that. So that's the context that we're passing to the text to Cypher Retriever, and that plus a question, we are going to generate a Cypher query. Again, let's pick maybe another one. So let's, we're gonna get um a hot new one maybe the master printer of los angeles there you go, for example. So who painted that? Let's ask the same question. And while it's the same structure of question, it's not very complicated. This time, the approach to answer it is completely different. And let's see how it behaves. Because there's always an element of unpredictability here. There you go. So I mean, we'll have to make sense. Let me make it a bit bigger so that it's more readable. Here we go. So this is the output. Oh, damn it. Sorry. There you go. Well, there's two parts. This is the output, and I should have printed the result in a bit more detail. But anyway, I didn't look at the structure. There's two parts. There's a Cypher query that has been generated that I can copy and show in detail. Copy. And then there's the output, which is Hockney. So that's the answer, which is correct, is what we expected. So it came to the right answer, but following an entirely different way. And let's paste here the query that it's actually run, because what's happened here is that it's managed. Let me remove the line breaks. So this is the cipher that the text to cipher retriever has generated, which is a perfectly valid one, by the way. So it said, well, I'm looking for an artwork that's called the master printer of Los Angeles, which is exactly the question that I was asking. And I know that an artwork is connected to the artist through the was created by relationship. And from the artist, I want the property professional name. So all this was not made up. It was actually injected in this previous step. Let me go back to the normal size. But it was when I passed the Neo4j schema. And the Neo4j schema is nothing but the contented Neontology. Let me show you what that looks like. I'm going to use the usual trick of commenting these out and showing you what I'm passing to the LLM. There you go. So this is the kind of natural language description of my ontology. So I'm saying the node labels are artists, which is the creator of an artwork. Artwork is a person, the properties are boom, boom, boom. So all these things that then it used in a very correct way were passed as context so that it could work out, it knows Cypher, it can speak Cypher. What it doesn't know, of course, is what are the terms that I'm using in my graph. And that's what I'm passing by sending the ontology in context. So I don't know, maybe a lot of code, a lot of jumping backwards and forwards, but I hope the idea was clear because ultimately we saw how the ontology was injected at the three steps. So the code that I'm going to share in GitHub is entirely reusable. There's not a line of code that was specific about the domain model that I was creating. So what you have to do is, not you Alex, but anyone attending, you can do that too as well, of course. But basically, you pick your data set. Now, you're not going to be loading artworks. You might be loading a product catalog from whatever you're interested in. You want to load the Apple catalog, the products, or you want to load literature or whatever you're interested in, you want to load the Apple catalog, the products, or you want to load literature, or whatever you're interested in. All you have to do is define an ontology with the relevant elements, like schema elements, what categories, what relationships, what properties you want to extract out of it, and exactly the same code that I've shared with you is going to work because we didn't write a single line of code. It's going to pick the ontology, that's going to determine the type of graph that we do. It's going to determine how do we duplicate, at least in this first pass of the duplication that we use, and it's going to drive the text to cipher creation. That's how powerful to the question that was asked at the beginning. It's really useful to have a centralized way of doing that, because that will be reused across so many different steps in our pipeline and that's that's where i wanted to to go today i hope that that was not too confusing i know when we show code sometimes it's it's hard to follow but it will definitely make more sense when people get their hands on it on github but um but yeah that was that was it yeah definitely yeah no that's very good and i think with the ontology again uh there was an interesting a comment and i'm not sure if you if you were aware of of this existence but there is a from getty um there is a the getty research institute they have an own ontology the a art and architecture the sorrows or a aat and basically is a library or an ontology on its own for um that's you know i would say artists as a whole pictures probably images um maybe videos i don't know yeah um so that's pretty if you are in the space i guess it's a good good good thing to look at and to to have like we said in the beginning don't don't try to reinvent the wheel if you if there's something already existing that you can use and this is a good example there are others for life sciences there are others for manufacturing form for you know material anything. If there's anything that is in your field, then you can use that and you can rely on that in addition to what you have for your, let's say, company or use case specific application. The standardization part has probably been done by somebody already, and you you can use that as a as a um you know as a back end which you don't have to do yourself again from from scratch very interesting yeah totally and i totally missed it and one of the thing one of my you know my complaints is why are these things so hidden sometimes why there's not like a kind of universal repository of all ontologies and standard vocabularies no i mean the answer is yes we didn't use it i mean we thought we were kind of being kind of applying good practice in the sense that we were not reinventing the wheel we were extending the schema.org but i was not aware of the existence of uh which makes a lot of sense a domain specific ontology so maybe it's a it's a good exercise to replace the ontology that we created, the simple one with the actual terms from the Getty ontology. That would be an interesting way of kind of refining this example that we run today. Definitely, definitely. Another comment here came in from John. And he says, I also wonder if a modified version of Anthropics contextual rag can help with deduplicating nodes, the idea of adding a summary of entities from the chunk source document to each child chunk. Oh, look. I didn't follow the modified version. We can definitely use Anthropocontextual.rag. One of the things that I was considering, I don't know if that's what you mean, John, is building into the entity extraction part, some kind of basically fine-tune the model to introduce things like reference elements, so reference data. So if I have a list of authors, in this case, artists in my catalog, maybe I can, instead of passing it in the context, which would be too much and super high cost, why don't we fine tune the model so that whenever it comes up with Hockney, it doesn't use Hockney, it uses David Hockney or even a unique identifier for David Hockney. So if that's part of the fine tune, I don't know if that approach is even viable, but I wanted to explore that and maybe, you know, enriching the entity extraction in that way could be one way. So, i'm i'm totally open to to exploring all these all these ideas absolutely yeah another comment from cameron here would love to see how it performs if we can automate passing schema right from the graph itself this is yeah i mean in in what step but at the at the retrieval because yeah that that's one of the ways in which we can do it, too. I mean, in this step here for RAG, if you don't have an ontology, you can always, you know, introspect the graph and get the labels. So you can get somehow the ontology from there, too. And I would expect the performance to be comparable because by injecting the ontology and the entity extraction step, we guarantee somehow that the graph is going to be aligned with it. So whether it comes from the ontology or from introspecting the graph, I don't expect it to be very, very different. So if that's what you mean, I totally agree. I think that's what he means, but yeah. Yeah, and I don't know why these are so super hidden always. It's hard. I think there's no... It would be great to have one page or like a repository of ontologies. Maybe there is one, but we don't know. If there is one and if you know one, then let us know, and we can we can talk about it maybe at some point um about all these different ontologies and um and yeah how to customize them as well because this could this is another question is how some of these ontologies are very very big and it's almost overwhelming especially i could imagine the getty one is like you know imagine you're into let's say art and the Getty one covers videos and pictures and all of the other things. How do you shrink it down? You don't want to ingest a whole super ontology into your data if you don't need all of this, if you just need a specific part of it. Exactly. Yeah, yeah a that's a fair point yeah totally because um yeah in this case we're injecting the whole target schema but maybe yeah maybe there's there's a previous step of of because we knew what kind of data was uh in the input files and what kind of data we wanted to extract but yeah i suppose if you're in an entirely blind space where you probably need to do some form of sampling of the source data and then identify the parts of the ontology, which ontologies might be relevant for your case, I mean, that's taking it even more meta, right? So figuring out what should be the ontology to be fed. But yeah, I suppose that can be, that's another area to explore, right? How to determine which is the portion of the ontology in an automated fashion that's relevant to specific source data. Yeah, and then if you, like what Benjamin just said, I would like to answer questions using two knowledge graphs to answer questions of chemical compatibility for products used in art restoration. So you have arts, the whole ontology, and then you have chemical and they're, they're all, so you, you combine these two and then you, you have all of that together. So it's a, yeah, it's a cool project, yeah, definitely. Super, yeah. Yeah, Sumendu is saying that he's never written an ontology. I don't think you should write an ontology. I would say go to Web Protégé, which is something that we've used in the past, Web Protégé, and use any of these visual environments. I mean, that's going to be a lot easier. So, I mean, I don't know what I have here. I know that we've created some in the past, but going meta 19, I don't know what was in there. Oh, the clothing. So, there you go. So, you can, this is a very visual environment where you can define, that's a class, you give it a label, a description, you can even do, this was the one about multilingual, multiple, so you can have multiple languages and annotate it in RDF and use it to have different representations. But yeah, you can create additional classes here, then you would create the properties. In this case, you would create, you know, so use any of these visual environments. And all you have to do at the end is just download the project, and you will get exactly the format. Because these are W3C standards. So these are serializations. If you go back to the, where would it be? If you go to home, I suppose you can go and download. Here we go. And you can, there you go. You can specify which serialization format you want. So RDF XML, Turtle. So all these are, the one that I was using is Turtle. So if I do that, you just need to download, and it will give you the file that you need. Yeah, that's the kind of process that I would recommend. So go to WebProteger and do it there, rather than, if you're keen to write the RDF yourself, totally go for it. But it's a much simpler yeah exactly great yeah I think with that we we are at the end of of the episode today thank you everybody for for joining and for your questions and thank you Jesus for presenting today and going through the three three use cases for for the the graphic Python library but I think there are more but these three in a in a architectural overview so I think that was that was really interesting that it is more versatile than you think on the first when you look at it on the on the front i think it gives you more options than you actually realize um the next session and this is going to be the last live stream of this year of 2024 will be on the 17th of december where uh our dear colleague zit is going to join me to talk about the demystifying the mahabaratha as a chatbot with graph rack so this is a a huge tome of of knowledge and like a story i would say like um and he will show us how how the data is is is put into a graph database to use it as a as a chatbot to talk to. So that should be very interesting in two weeks' time. Our graph, Neo4j Graph Developer Survey is still up. So if you'd like to give us your thoughts on how you learn, how you discover new things, how you find Neo4j pages that we make available to you, and if you can improve something then use um use the link to to let us know and then finally if you are now just to do some more stuff the graph academy courses that be depending to the graph um rack package they are now available on graph academy other courses as well so if you are interested in doing that, there's also a Knowledge Graph one. So if you like to extend that knowledge from a little bit further, a little bit more deeper, then check out the Graph Academy. Check out Aura-free if you want to get a free Aura instance to play around with Neo4j and check out our community and developer pages for questions around anything really. There's lots of documentation, a lot of great content available, the developer blog with lots of interesting in-depth reading material, but also the forum on our Discord server where you can ask other Graph fans and developers for help or if you want to comment something. And yeah, with that, thank you so much for watching. Thank you for joining today. Thank you, Jesus, again for being part of this great episode today. And I guess I'll see you all in the next year with Going Meta, Episode 5, Season 2. | Going Meta S02E04 – Ontology-driven end-to-end GraphRAG with the GraphRAG Python Package | 4,169 | Neo4j | 20241204 | Season 02 Episode 04 of Going Meta – a Series on Semantics, Knowledge Graphs and All Things AI
Topic: Ontology driven end-to-end GraphRAG with the GraphRAG Python Package
Jesús Barrasa: https://twitter.com/BarrasaDV
Repository: https://github.com/jbarrasa/goingmeta
Knowledge Graph Book: https://bit.ly/3LaqE6b
Previous Episodes: https://neo4j.com/video/going-meta-a-series-on-graphs-semantics-and-knowledge/
GraphRAG Python Package: https://neo4j.com/labs/genai-ecosystem/graphrag-python/
GraphRAG Pythong Package Docs: https://neo4j.com/docs/neo4j-graphrag-python/current/
Road to NODES GraphRAG Workshop: https://youtube.com/live/OALrsghrP_I
GraphRAG Github: https://github.com/neo4j/neo4j-graphrag-python
WebProtege: https://webprotege.stanford.edu/
#graphdatabase #neo4j #graphrag #knowledgegraphs #ontology #data #python | 2024-12-04T08:48:27.645921 |
https://www.youtube.com/watch?v=bq1Plo2RhYI | Hey this is Lance from Langchain. So Meta just released LLAMA 3.2 today, which is really exciting. I've been looking forward to this for a while. And what's really neat is you can use increasingly small models to do pretty interesting things locally on your devices. So today I'm gonna show how we can go from scratch, build a fairly complex RAG agent using a 3 billion parameter model that will run locally on my laptop. And this diagram shows the entire flow of the agent. I'll just zoom in here. So we're going to be able to take questions, route them using the model to either an index or web search, grade documents from retrieval based on relevance to our question, produce an answer, grade the answer for hallucinations, grade it for relevance to the question, and finally return an answer, grade the answer for hallucinations, grade it for relevance to the question, and finally return an answer to the user. We can do all that logic locally with LAMA 3.2 3 billion parameter model and I'll show you how to do that shortly. So first there's kind of two variants of the models here. There's compact and there's vision models. So there's a 1 billion and 3 billion parameter compact model. We're going to be working with a 3 billion. There's also vision models which are interesting for other reasons and which I may talk about in a future video. So again these compact models are meant for smaller on-device use cases. And what's kind of cool is I show some statistics here comparing the prior 8 billion to the current 3.23 billion and it's pretty close. Three versus eight, you can see the three billion parameter models as strong on one of the evals. And so it's pretty promising. It's telling us that smaller models are getting indeed better and that opens up access to more folks which is very exciting. So I want to talk a little bit about this RAG agent we're going to build and let's just get into the code. So we're going to incorporate ideas from three papers. One is routing. So that's the ability to take a question and route it to either a vector store or web search based on the content of the question. The other is fallback, this idea that you can basically retrieve documents and grade them so if any documents are not relevant you can filter them out. A big problem with retrieval is erroneous retrievals and so we have a fallback for that. And self-correction, we can fix and catch hallucinations and we can basically reject answers that don't address our question. We can build all that from scratch and I'll show you right now. So I'm going to be moving between a notebook. I've already set this up and this code will be available to you. I've just done a few pip installs and here's where some interesting decisions can be made. I'm going to use local embeddings from gnomic, these GPT for all embeddings. The pip installs are presented up here. You can try other local embeddings. I tend to like these, but obviously that's up to you. And I'm going to access LLAMA 3.2 Instruct FP16 from OLAMA. You can try other providers to access local models. I happen to like OLAMA. So if you go to OLAMA's website here, they provide a bunch of different models that you can just download. And all you need to do is just Olama pull and then whatever model you want. So that's all you need to do. Very simple, just make sure you have Olama installed and you can access any of these models for free. They're pretty cool. So I've done that and now there's two other pieces of tooling and then we can get right into it. I use Tavoli for web search. I'm going to set up this kind of web search component of my agent. And Tavoli is a really nice search engine optimized for RAG and other agent use cases. It has a generous free tier so you can sign up and use it. Simply, they'll give you an API key. Just make sure it's set. I'll also use Langsmith for tracing, which is just a way I'm going to visualize what I'm doing. This is totally optional. So those are the only things I've set here. And then we have a blank notebook. We can just get going. First, I'm just going to find my LLM. So again, I set the name of the model that I pulled using this command, olama pull. This is now available to me. And I'm using LangChains integration with olama. So I I just simply specify this model name and I load it. Now you see I do two interesting things here. First I load just the raw model and I also load this JSON mode and you'll see why we're going to use that a little bit later. Next I'm going to build a vector store. So this is where it's kind of up to you. This could be your personal documents, this could be really anything you want to work with for any question-answer application locally. In my case, I'm just going to take three URLs from some blog posts that I like, and you're going to see here this is just some very simple utilities to load the documents using a web loader, split them as I do here. You can select your chunk size arbitrarily. So what's happening here is we're loading the raw documents. We're splitting them into chunks of size a thousand tokens, and I'm going to add them to a vector store. This is a free to use vector store from sklearn. I load in my document splits. I load in my embedding. Now here's where I'm going to be using those gnomic local embeddings that again are free to use and I'm going to create a retriever from that. Let's get that started. So this retriever is a line chain abstraction that gives you a few common methods. So you can use invoke to very simply agent memory get documents related to any input. Here we go. So we've just retrieved some documents relevant to this question or statement agent memory. Of course, what's happening is it's taking the statement, embedding it using our model, and then doing a similarity search with the embedded documents to find similar matches based upon semantics of my question relative to the documents. That's it. So we get documents back. So now we have a retriever. The retriever you can see falls into our flow here. This is kind of one of the core components of a RAG system. You need some document retrieval mechanism and we built that. So now let's build some of this logic around it. First let's start with routing and I'll show you how we can do that. So now I'm going to build a router and I'll show you a really simple trick for doing this with local LLMs like this. Very simply, I'm just going to set up a prompt that says you're an expert at routing. The VectorStore contains documents related to agents, prompt engineering, and adversarial attacks. Use the VectorStore for questions on these topics. For everything else, especially current events, just use WebSearch. Now, here's where I just instruct return JSON object with a single key data source that is either web search or vector store, depending on the question. Now, when I use this LLM JSON mode, if you go up here, right here, because I've set format JSON, it's going to enforce the output of the LLM to be JSON. So what's nice is I've built myself a simple little router just using kind of LLM and JSON mode. So all I need to do is I'm going to pass in a question. Now, one other thing I want to mention, this is a chat model. So we can pass a set of chat messages. You can import from Langchain core messages, human message, system message. This is my system message with the router instructions. And then my human message is just whatever question I have. What are the types of agent memory? And those can be passed in as a list. That's it. And I can go ahead and run that, oh, the data source for this is VectorStore. And I get a little JSON back that says DataSource VectorStore. So you build a simple router using JSON mode and a local LLM. And I use this all the time. It's an extremely convenient way to just do really simple kind of binary routing using the llm to reason about some input and return a structured object like a json with for example the two outputs that i want either vector store or web search so now we've built this red router piece and next i'm going to build this blue greater piece so if i go down here it's also really simple let's just give some instructions for grading. The main idea here is I want to look at the retrieved documents and determine that they're relevant to the question I ask. Because sometimes with semantic similarity search, you can get kind of erroneous retrievals of documents that are not particularly relevant to the question, but happen to be returned based upon the idiosyncrasies of your chunking or your embedding model. So this can happen sometimes. It's nice to have this additional layer of kind of protection or reasoning on your retrievals. So you're greater assessing the relevance of retrieved documents. If the document contains keywords or semantic meaning related to the question, grade it as relevant. Okay, so here I'm just going to pass in the document, the question, think carefully about this, and return a JSON with a binary score, yes or no. That's it. So here we go. I'm just going to ask a question. What is chain of thought prompting? I'm going to retrieve documents from my vector store. I'll pick one of them. Okay, you can see that's what I'm doing right here, and I'll go ahead and pass that into my prompt right here. I just format the string with those two inputs. And I do just what I did before. I'm going to go ahead and pass in those greater instructions here. I'm going to pass in my formatted prompt here, which has my question and my document. There we go. So we get binary score yes, which makes sense. Basically, what is chain of thought prompting? We can look at the document to confirm that. And there it is. So the document contains chain of thought prompting here. So it mentions it in a few places, so it's relevant to our question. That's good. So now we've built the router, we've built this particular document grader. You can see we've done this really simply using a chat model and JSON mode. That's all we need to build these logical building blocks. Now let's just show you how to do really simple RAG. Here's just a RAG prompt. You're an assistant for question answering. Here's some context. Think about it carefully, review the user's question, and then provide an answer. That's all I do. I retrieve some docs. I just format them really simply, just to pull out the page content from each document object, and I format my prompt accordingly, just here, just string formatting, and then I just go ahead and pass that as a message into my chat model. In this case, I don't need to use JSON mode because I want natural language to string output. I don't need a JSON object. Let's try that out. There it is. So this is like a nice answer about chain of thought prompting. So we've actually already done quite a bit. We have a router. We've shown how to do document grading. Now we've shown how to generate an answer. Pretty nice. This web search thing we actually already have. We use Tavoli for that. We have the web search tool defined down here, so that's cool. I can show you how we would use that really quickly if I go down here. So this, all I need to do is just invoke it, invoke, and I can pass in a question. And I get some documents out that are related to LLM agents and it gives me URLs and all that. So this is pretty nice. We'll be using this later. So now let's just show these final two components, hallucination grading and answer grading. So it's going to be really similar we already did. We'll copy this over. So just some hallucination grader instructions, just kind of logically the same as this but with slightly different prompt but same same concept, right? You're a grader teaching a quiz, grading a quiz, you'll be given some retrieved documents, you'll be given an answer, and you're basically saying, hey, grade to see if the answer is ground in the documents, that's all we're gonna do here. So here is the final prompt, this is where I'll put the documents, this is where I'll put the generation, this is where I'll put the generation. And I'm just going to format it using the document that I already retrieved, which is here. That's all my doc text. And the generate answer from above. That's this. And let's just run this greater on it and see. Cool. So whatever the nice thing, I add basically this whole explanation in the prompt. So I basically say, hey, give me a grade and explanation. In this case, you can see why it graded. So if we step back, what's kind of cool is this is all being done on my laptop locally using a 3 billion parameter model, and it runs pretty quickly. So, you know, we've already implemented quite a bit of logic here. We're through the grading hallucination bit and there's one more grader. That's just the final answer grader. This is going to be similar, but it's going to look at the final answer and it's going to look at the question. It just checks, hey, does the answer actually address the question? So that's really it. Let's go ahead and run that. Here I have a test case. I basically pass in a question. I pass in an example answer that I got from one of the earlier tests of the system and I just run that. Cool. So I get a score and I get an explanation. That's great. So we've actually already done really most of the hard work here. We've built all these components individually. We've shown how to use JSON mode to build a router. JSON mode, how to build graders for documents, hallucinations, and answer relevance. We've just used an LLM to produce answers using RAG. We have a web search tool, and we have a document grader tool. And we have a retriever. So we have a document grader tool. And we have a retriever. So we have all these interesting components. Now here's the question. How do you actually orchestrate this flow and tie them all together? And that's where I want to introduce this idea of LandGraph. So LandGraph is an agent orchestration tool that we built. I use it quite extensively. There's other videos on it. And I just want to show again how you can use it to orchestrate this whole workflow really easily and running locally on my laptop. So the first part of line graph is actually defining some state that's going to live across this entire agent flow. So the state is basically going to be a set of things, this can be like a schema, that each of these nodes has access to. Now because we're doing RAG, it's going to be things that are intuitive to the RAG process. It's going to be like the input question, the retrieve documents, the answer. All the things that I input and output from each of these steps individually, I'm going to save to this common state. And that's going to persist throughout my agent's lifetime in LandGraph. So what's cool is in LandGraph, I can go ahead and define this state class. And this is very flexible. I use a type dict for this. Here's the keys. These are things can be relevant to my agent's behavior and here is a type pin for each one. So that's all I need. That's kind of the first component of my agent. Now the second component of my agent is basically taking each of these steps that we did individually and wrapping those all as individual functions. Okay, those are known as nodes. So those functions really all do one thing. They take state as an input and they modify it in some way. So let's look at an example. If I go down, I look at our router, for example. What did our router actually do? It took in the question and it output where to go next. Go to a vector store or go to web search, for example. So this function is, for example, an edge in our graph. So it will basically look at the question and route it to one of two places. So edges provide some connectivity between the nodes in our graph. Nodes, on the other hand, if you look at retrieval, for example this, this takes in a question and returns documents. So for retrieval, this is what I call a node because in this case we're actually modifying the state, we're adding something new to it like documents, and in edges we're just deciding where to go next. Those are the two concepts they're going to be using here. Nodes and edges. Edges kind of direct where to go. Nodes modify the state. Now I'm going to show you that all right now. So here's some code that lays out the nodes and edges of the graph using the things we already defined above. So we're going to find a node for retrieve. That's going to take our retriever, invoke it with a question, the question we get from state, the state we pass into the node, and the state is just this thing. The state's just this dictionary. This dictionary has a key question, which we can just reference easily, as we see here. Get the question from state, invoke the retriever with the question, get documents. Now see this? We just can write the documents back to state using a dict, just like we do here. So in this case, documents then overwrite the value of documents here in the state. That's really it. Same idea for generate. You saw above that we actually ran. Same with generate. So if you look at this logic here, this is exactly what we did above. Here. So this is actually where we ran and tested this in isolation. Now I'm just wrapping this functionality in a function called generate, which is going to now grab from our state question documents and loop step. This is going to track the number of generations that I do. And it's going to write back the generation and increment the loop step, write those back out to the state. That's all that's going to happen. Now we're at grade documents. So if you kind of look at our overall flow, grade documents is this piece. We've done retrieval. That was the first node we defined. Grade documents is this other node. And in great documents we're going to do, this is exactly what we did above, but in this case we're just going to grab the documents that we had in state, which were written to state by a retriever. So if I back up a little bit, we've retriever node that takes in a question from state, retrieves documents, writes those documents back out to state. Then the grade documents node takes in state and has access to the documents that we wrote and does something else on them. In this node, it will basically look at each document, grade it for relevance to the question, this just like we did above, and here's where we can add some logic. So basically we will keep it if it's relevant. If it's not relevant, we're going to set this flag to do web search, and we're going to at the end of this process write the new value of our web search flag and the filter documents back out to state. So that's really all we're doing in these nodes. We're basically taking the state, looking at it, reasoning about it, writing something back out to state. That's all that's happening. Web search is another node. We basically start here. We take the question. We take any existing documents that we have. We do web search using the retrieval tool, and we write the documents back out to state. That's all that's going on. Now we're almost done with this process. Here's where we define our edges. So the edges define the logical routing between the nodes. So if you go back to the start of our graph, we had this router. The router is just an edge. It's conditional edge that looks at the question, decides either to go through the retrieval node or go to the web search node. And so you can see we invoke our router just like we did above. We get the source like where to go. If it's web search, return web search. If it's vector store, return vector store. You'll see a little bit later how we use that. But in these edges, you can see what's going on is we're returning a node to go to where web search is our web search node, vector store is our vector store node. So the way to think about this is in nodes, I take in state and I update state. In edges, I take in state and I decide what node to visit next. That's all that's happening. Decide to generate. This one is going to basically decide if there is web search flag that's been enabled. I go to web search. Otherwise, I go directly to generate. And that matches exactly what we show here. That's kind of what this edge is. So basically, this is indicating I either go to web search if any documents have been deemed irrelevant, or I go to generate. Now, this is the final step in our flow where we do grading for hallucinations and answer relevance or answer usefulness to the question. So this is a hallucination check. We already showed that above. We're just adding it to this edge and we're going to get a grade. So check hallucinations. If the grade is yes, as in it's, it is not, there's no hallucination, it's grounded in the documents. Then we do the grade generation versus the question. So that's the second check. We already showed that above. That's what we do right here. And then if that's yes, then we deem the result is useful. So that is that the answer is actually useful to kind of finalize and send to the user. Otherwise, the answer is deemed not useful. Okay. Now you can also see we do this max retry check here. So if you recall in the generation node, we incremented that every time we do generation. We want to make sure we don't keep retrying. So we basically check this max retry. So if you default to three, and if that's exceeded, then we just will exit. And that's what you can kind of see here. MaxRetrize is reached. So this looks at basically if there was hallucination present, then the answer is not supported by the documents. And then finally, MaxRetrize again. So that is the final edge. Now we're going to add logic to stitch this all together. So this is where I'm going to add all the nodes I defined above with names and I'm going to then take all the conditional edges which I defined basically here, route question for example, this returns either web search or vector store right so basically this is where I can map the outputs of my edge to the node names in this case I make it simple because I keep them the same but you can imagine those can be different. So basically this is the mapping between the node to visit and the output of the conditional edge, but that's really it. So I add a conditional entry point, I then add an edge from web search to generate, and so I create the connectivity of my graph here, I compile it, and I can visualize it, and there we go. That's our entire graph, visualize, it's pretty cool. So think about all we're doing. We're starting with that conditional edge that's going to look at the question, decide to either visit web search or vector store based on the question. That's that router we built first. Then we're going to do retrieval. We'll grade documents. If any documents are not relevant, we're going to go ahead and do web search and supplement them with web search. Otherwise, we'll go to generate. So from web search, we'll go to generate. Now when we've done with generate, we did two checks. One is that the generation is not supported by the documents, so there's hallucinations. In that case, we just retry. That's what this is representing. The other is that it's actually not useful, so it doesn't actually answer the question. If that occurs, we kick back and retry web search. So that's kind of the current logic in the graph, and you can modify this accordingly, but that's what's going to go on. So those are what the two fallbacks do. And then finally, the answer is either useful, which means that it's both supported as a no hallucinations, and it's useful as an answer to the question, or max retries. That's another way we end. But that's the overall connectivity of our graph we've put together in line graph. So we zoom all the way back out. We've used Ollama. We've used Ollama to load Llama3b FP16, LLM and JSON mode. I've tested each of these components out individually. I built a router, built a retrieval grader, built a rag prompt, built a hallucination grader, built an answer grader. We tested all of them individually. Now I put them together into a graph using line graph. Here's the overall graph connectivity and we can test this now. So when I kick this off, all I need to do is supply some initial state values to work with. So basically, I can pass in a question, and I'll pass in maxRetrizes3. These will just populate my state with those two inputs, and that's all the graph needs to get started. Now, I just can run graph.stream. That's one way to run the graph. Pass in the inputs and I'm going to stream in the graph state at every step and I'll print it out so we can see this running right now. So first we're going route question. It's going to decide whether to use the vector store or web search based upon the content of the question. So it decides the vector store, that's good. It retrieved these documents, so it's basically spitting out everything here, so it's writing up the whole state every time. So this is cool, it's already done a bunch of work here, it's gone ahead and it has graded the documents, it deemed one of them to be non-relevant but the rest are relevant. So then it decides we'll go to web search, it's going to be non-relevant, but the rest are relevant. So then it decides, we'll go to web search. It's going to be kicking off web search. It runs that. This is the entire state printed out by web search. Cool. And basically it did the hallucination check and it also graded the generation relative to the question and that's it. So the entire graph ran. Now I can go over to Langsmith and show you also the entire flow of that graph. So here I'm in Langsmith. I'm looking at the trace. So you can see, first, we do retrieval from the VectorStoreRetriever. We get a bunch of documents. That's cool. We graded them for relevance. I can open that up and we can see here is Chet Ollama grading each document. Pretty nice. And here is what I pass and here's the system prompt I pass. Here is the human message. Pretty cool. Which contains all the documents and here's the rendered output. It deems that to be relevant. It goes through each one of the four. So that's cool. Now here's where it basically hits that conditional led to decide to generate. Because one of the documents is deemed to be not relevant, it decides I want to do a web search. So then it kicks over and goes to the web search node. That runs Tavali. That returns some web search results for me. Pretty nice. Then I go to generate, which runs Chattelama again. In this case, it generates an answer to my question. My question was related to agent memory. There are two categories of agent memory, short-term and long-term, and so forth based upon web search and my retrieval. It answers the question. So then this is the conditional edge that will both grade the hallucinations. So in this particular case, it deems that the answer is grounded. So that's cool. I can look at the grading here and it finally graded you can see here yes for answer helpfulness. So it deems that it's actually helpful and actually addresses the question. So that's the entire flow of our agent that does a whole bunch of different things, all running locally on my laptop using a 3 billion parameter model. So it's pretty neat. If I go back and kind of show the notebook, I'll make sure this is all accessible so you can play with this yourself. This is all free to use. I'm using local free embeddings. I'm using locally running 3 billion parameter model. I'm orchestrating this using Langraph. You can orchestrate things that are much more complicated or much less complicated, but this is really showing you that even with a 3 billion parameter model, you can build pretty complicated, interesting multi-step workflows that actually run reliably and will run locally. So again, I've actually been really excited about working with local models for a while. I find that orchestrating them with LandGraph is a very nice way to kind of build these reliable multi-step workflows that can run locally on my laptop to do arbitrary tasks. This is just a toy example I'd like to use to showcase the functionality in a RAG setting. But it can handle many other types of tasks you can imagine running locally and also reliably as we just saw here. This is an agent that would be fairly difficult to run using a more react style open-ended agent loop but using lang graph with each of these nodes and edges very clearly enumerated you're asking the LLM to make very specific narrow decisions at specific points as you can see here using json mode like we did in the router. And if you structure it that way, you actually can build pretty complicated workflows that actually run liably and locally. So please leave any questions in the comments. Happy to address them, and I'll make sure this is all open source. Thanks. | Reliable, fully local RAG agents with LLaMA3.2-3b | 1,864 | LangChain | 20240926 | LLaMA3.2 has released a new set of compact models designed for on-device use cases, such as locally running assistants. Here, we show how LangGraph can enable these types of local assistant by building a multi-step RAG agent - this combines ideas from 3 advanced RAG papers (Adaptive RAG, Corrective RAG, and Self-RAG) into a single control flow using LangGraph. But we show LangGraph makes it possible to run a complex agent locally.
Code:
https://langchain-ai.github.io/langgraph/tutorials/rag/langgraph_adaptive_rag_local/
Llama3.2:
https://huggingface.co/blog/llama32#what-is-special-about-llama-32-1b-and-3b
Full course on LangGraph:
https://academy.langchain.com/courses/intro-to-langgraph | 2024-12-04T17:49:29.842524 |
https://www.youtube.com/watch?v=80ImA4-zycw | Hello everyone. My name is Taz, tech leader for AWS Data Analytics, here today with Nitin Arora from Amazon Finance and Amazon Vice President and Distinguished Engineer, Ippo. Welcome to the Data Foundation in the Age of Generative AI breakout session. Now, really quick, who here is feeling lucky and can guess the approximate number of people we have in this room? Yes. All right, any other guesses? 300. 300, couple more? 275. 275. 275. 275. Good one. All right, one last one. 252. All right, one last one. All right, no prizes for guessing it right, but you are in Vegas, you know what to do. It's about close to 400, so thank you for playing along. The thing is, over the past three decades, the world of data has gone through many evolutions, marked by key defining moments. And I would probably not be wrong when I say that between the 400 of us over here today, we have witnessed the birth of several of these milestones, like data warehousing, big data, NoSQL, machine learning, and unless someone has been living under a rock, you've probably heard about generative AI too. Data has been the driving force behind these technologies and that is no less true for generative AI or Gen AI. But let's be honest, data is not the first thing that comes to mind when you hear about Gen AI, does it? So today, we will see how Gen AI impacts and influences data engineering, and how AWS is scaling and evolving its data foundation capabilities to meet the demands of building a GenAI application. And by way of agenda, in the next 60 minutes, we'll start with a quick understanding of the idea of a data foundation and how AWS delivers it. We'll then cover several data foundation capabilities that AWS is scaling to meet the demands of building a Gen AI application. Then we'll follow up with a real world use case presented by Nitin from Amazon Finance on how building their data foundation on AWS accelerated their journey to deploy a data mesh and then quickly enhance it with Gen AI features. And then we'll wrap it up with Ippo taking us through a deep dive on how AWS is evolving its data foundation capabilities to meet the demands of the future. Now, I have been in the data business for close to around 25 years. And although the term data foundation may seem simple enough, it is one of the harder things to articulate. So like, picture this, for example, you have someone new joining your team and they ask you, hey, can you tell me a little bit more about our data foundation strategy? How do you respond to that? A good way of understanding it is to break it down into a what, how, why and who questions. What is it? It's a behind the scenes organizational strategy that centers around the ingestion, integration, processing, transformation, and governance of your data. Who is it for? It is of course for the employees, but it's also for the partners that work with the organization data, and in many cases extends to the customers that work with the organizational data, and in many cases, extends to the customers that work with that data. Why is it needed? This one's simple, for data-driven decision-making. And the loaded question of how, how do you deliver it? This is primarily through a broad and well-interconnected set of data sources and solutions. And AWS delivers the how through a comprehensive set of data, machine learning, and AI services with direct and native integration capabilities for data within and outside of AWS and comprehensive data governance to do self-service analytics. And the AWS Data Foundation goes beyond technology. It understands the critical role of people and processes in an organization and incorporates these factors to accelerate what is called the data flywheel effect, where data is treated as a product designed to deliver a rich customer experience with scope to improvise based on feedback that is channeled back into the data. And the benefits can be broadly classified or categorized into business and technology. Business benefits include tactical outcomes like efficiently organizing quality data, making it easier to trust, which in turn makes it easy to monetize it. And strategic benefits such as having the ability to quickly adopt new frameworks, for example, data mesh, and easily pivot to a new way of doing things, Gen AI. And the technology benefits include the findability, accessibility of this data for effective discovery and access control at scale, better interoperability between different data types and different processing applications, and increased reusability with a well-defined usage contract for repeated use. And these benefits are true for all kinds of data architecture patterns, whether you are building a complex data mesh or a simple data pipeline, something like this. A conventional data pipeline with the typical ingestion processing governance components. So how do data foundation capabilities and data pipelines change in the age of Gen AI? We'll address this through the lens of how AWS is scaling its data foundation capabilities to meet this change. Before we get into it, let's quickly level set on what is Gen AI. Very simply, it's the use of artificial intelligence to create new content using AI models trained on large amounts of data. And we see a trend of popular application building patterns here at AWS. The pre-training your own model approach is when a customer builds their own model from the ground up, pre-trained on large amounts of data. The continued pre-training process tweaks the model parameters to accommodate newly available unlabeled input data. This is followed by the fine-tuning a pre-trained model approach. This is where you take a pre-trained model and train it further on a smaller labeled data set. And RAG, retrieval augmented generation, one of the more popular methods due to its ease of use is an example of in-context learning where you do not alter the model parameters or train the model on a specific data set, but instead you guide the model's behavior by passing it additional context. Context can be either situational or semantic or even both. And these building patterns influence the capabilities of a data foundation. But it is not all apples to apples. Along with your data being the differentiator, there are several aspects of building Gen.AI applications that require scaling the capabilities of a data foundation. So let's see how the scope of a simple data pipeline is influenced when you build a GenAI application. One of the very first things that GenAI introduces into the data pipeline is the need for additional data sources. And these are primarily in the form of unstructured data. Now unstructured data is not a new construct. However, it is the dominant part of Gen AI pipeline. And with more than 80% of the world's data in unstructured format, it is now front and center. Unstructured data is data that doesn't confirm to a predefined format and is not organized according to a preset data model. So metadata discovery for unstructured data can be challenging. Let's walk through a high level architecture of a solution that we can build on AWS to process the metadata for unstructured data. We start by ingesting the raw input data into an object store like Amazon S3. Then depending on the type of data, we can use one of several AWS AI services that are designed to extract metadata from this unstructured data. The most common use services are Amazon Comprehend, Amazon Transcribe, Amazon Textract, and Amazon SageMaker. The metadata is then extracted to an S3 bucket and we can then optionally apply additional transformations to enrich and refine the data sets with an AWS Glue ETL job. Next, we build a metadata catalog in S3 that houses the extracted metadata attributes that help in the discovery of this data. And for discovering and accessing the catalog itself, we can use the AWS crew data catalog and can also apply Amazon Athena as a query engine and deploy governance with AWS Lake Formation and Datazone. Coming back to our pipeline, again, depending on the Gen AI application building approach you take, your data processing phase is also influenced. You could be working with feature engineering to extract information from your data warehouse or data lakes, or you could be running inference for your continued pre-training approach. Or you could be managing vector data for a RAC based application for some real time meaningful context. The thing is, all this processing requires advanced forms of data integration and data stores. So let's take a vector data management example with the help of a RAC application. You start by using the Amazon Bedrock service to generate vector embeddings from unstructured data sources and save these embeddings within a vectorized database index. You then have a GenAI client that submits a question to a Lambda query encoder function, which in turn makes a call to a Bedrock API. In our case, we are using the retrieve and generate API. The API queries the knowledge base and then generates a response, which is returned as context to another knowledge base backed Lambda function, along with the original question from the client. Together, these are submitted as a prompt to a large language model, which then generates a more factually accurate response. As I mentioned earlier, RAG with vector data is a popular approach. Let's talk about vector data management a little bit more. Vector data management involves taking your domain data and breaking it into chunks in a process called tokenization. You take the chunks and pass them through a large language model to generate numerical vectors or number arrays. These are then stored in a vector database where the vectors place elements with related meaning close to each other into multi-dimensional vector space. And then searching for meaningful or semantically related data and returning it to a GenAI prompt is simply reduced to a mathematical function of finding the smallest distance between these vectors. In our previous RAG example, the vector storage and vector search capabilities were provided by the Amazon OpenSearch service. It's an elastic search based open source solution that is already widely used for search analysis and log analysis. So choosing a vector store that is familiar is a big plus as you do not take on any new licensing costs. And when your vectors and business data are stored in the same place, your applications deliver higher performance with very little data movement. And in the same spirit of familiar tooling, reduce licensing and faster experiences, AWS offers vector capabilities across many of its fully managed database services, including SQL databases, such as Amazon Aurora and Amazon RDS, NoSQL databases like Amazon document DB and Amazon Neptune, a graph database. Returning to our data pipeline, some Gen AI applications also incorporate advanced learning techniques such as reinforcement learning from human feedback, RLHF, a machine learning technique that uses, you guessed it, human feedback for a highly personalized and accurate response. Capturing this information in the right kind of data stores and efficient pipelines helps with latency and accuracy. Let's go back to a RAGA application example and see how we can tweak a data pipeline to incorporate user personalization context. So before submitting the client question to the encoder, the application can reference additional pieces of information to provide more relevant and meaningful user context. It can query a customer 360 data warehouse's Amazon Redshift to get additional unique customer attributes or it can also maintain a database in Amazon DynamoDB to store the state and history of the conversation. You're good. All right, moving on. Now, a customer 360 provides a complete and unified view of a customer's profile that can drive business decisions. So like for example, you can use C360 for your generative AI application to create marketing campaigns that are more likely to resonate with a specific customer demographic. However, building a C360 solution is non-trivial. It needs access to data that is spread across different types of databases and data stores. It also needs to be able to add new data sets over time. And needs to ensure the correct change data capture techniques are in place. SQL databases and NoSQL databases typically store user operation data like user profiles, conversation threads, and transaction history. To ingest this data into a C360, you would probably build complex data integration pipelines that connect to a source database, identify what has changed, and then load that into the target data store. Zero ETL data integration capabilities by AWS takes away this heavy lifting. They replicate data from your SQL and NoSQL databases on AWS into the Redshift data warehouse using CDC techniques without the need to provision any infrastructure. Likewise, a generative AI application many times needs most up-to-date real-time information on the latest user transactions for a more complete customer view. Streaming ingestion between Redshift and services such as Amazon Managed Kafka and Amazon Kinesis data streams, and the auto-copy feature between a S3 data lake and Redshift gives the GenAI application the most timely context for downstream processing. Now, because the GenAI application opens itself up to a much wider variety of data sources and data personas, data governance becomes a function of the entire end-to-end data pipeline. Behind the scenes, data sharing, data privacy, data quality, data cataloging play an important role towards delivering comprehensive data governance for a GenAI application. Data sharing on AWS makes data available without having to move the data or create copies of it. Whether you are looking to securely share data from a data warehouse or a data lake or a data marketplace, services such as Redshift, Lake Formation, AWS Data Exchange and Data Zone make it simple to do point-to-point or centralized data sharing. Data sharing also gets easier when your data is well-organized and catalogued. You can build technical and business catalogs with DataZone and Glue Data Catalog for different data types and natively integrate with many AWS services to help you quickly deploy governance best practices for data sharing. And finally, no GenAI application can be taken seriously if user privacy and data quality are not managed. So AWS has scaled the AWS Glue service to detect and process sensitive data such as PII data to increase compliance, and has also equipped the Glue service with machine learning algorithms and rules-based automation capabilities to monitor, detect, and mitigate quality issues. So to summarize, we covered several dimensions of how AWS scales its data foundation capabilities for an enhanced Gen AI experience. We looked at data processing features for structured and unstructured data to improve Gen AI functions such as training large language models. Then we covered data integration capabilities like zero ETL, auto copy and streaming ingestion that advanced the user experience, followed by databases that are optimized to include support for vectorized structures that add important context to the RAG approach, and rounded it up with data governance features that deliver a secure and reliable experience. But we are not stopping here. We continue to evolve our data foundation capabilities and Ippo will cover those right after we hear from Nitin from Amazon Finance on how a strong data foundation helped them build the data mesh and then quickly enhance it with Gen AI features. Thank you for your time. Handing over to Nitin. Hello everyone I'm Nitin Arora. I'm a senior engineering manager for a team called finance automation at Amazon. I lead teams that build and operate data services communication and work management software for finance operations. Now what exactly is finance operations at Amazon? Well in short it's an organization responsible for three key things. First it ensures that we pay our vendors and our employees on time with accuracy. Second it ensures that our customers are paying Amazon timely. And third, we handle every financial transaction at the highest bar of controllership and security. Finance automation is finance operations tech team. Our technologies empower finance operations to support Amazon's growth with better productivity, automation, and self-service. We support all Amazon business lines such as AWS, retail, advertising. Finance operations consist of thousands of employees. Our analysts who are a part of the finance operations team are responsible to support vendors globally across multiple business lines in a variety of languages. They respond to general questions from vendors. They handle multiple disputes or disagreements with the goal to build a strong partnership with world's largest ecosystem of suppliers. Let's start with a quick show of hands. How many of you have bought anything or something from Amazon? Pretty much all of you. So, wow, first of all, thank you and really appreciate you being our customers. So now let me provide you some context about the scale of our work. Customers like you are ordering various things at Amazon and to meet the customer demand, to meet meet your demand Amazon is ordering large volume of products from vendors in bulk we distribute these products to fulfillment centers and pay vendors for the goods that we've received for the most part this works pretty straightforward where it gets complicated is an Amazon scheme we need to handle different complicated scenarios such as defective products returns shipping damages which adds complicated is an Amazon scale we need to handle different complicated scenarios such as defective products returns shipping damages which adds complexity to the vendor payment process there are hundreds of teams and distributed systems across Amazon that are responsible to manage a complex supply chain functions like ordering inventory inventory, receiving, returns, and finance is responsible to track events across billions and billions of transactions and piece together the financial picture. Now that is just Amazon retail, one of business lines that finance operations supports. Amazon's vast scale has required specialized teams and distributed systems, each managing a specific business function. While this structure has really allowed Amazon to scale rapidly in trade-off, this has led to data silos. To create the complete picture of a financial transaction, finance needs to stitch together events across hundreds of these systems. Well this translates to trillions of events and petabytes of data to be processed every month and naturally our finance users want this data in near real time. Why? To support operational monitoring, machine learning and data analytics use cases across the company. Let me give you some examples. We are paying hundreds of millions of dollars in vendor payments every day. We use real-time data to monitor that these payments are made accurately and timely. We use machine learning to continuously monitor for payment patterns to detect and prevent any fraudulent transactions. We use data analytics to analyze our cash and collection practices to improve Amazon's cash flow. Such data forms a big part of how Amazon is making key decisions and running its finance operations effectively day to day. When we started on our journey to build the modern data foundation, we identified three key objectives. First, we faced a landscape of fragmented data, multiple teams, multiple copies, multiple locations, leading to inefficiencies and waste. Our goal was to create a single source of truth for business analytics and machine learning by minimizing the redundant copies of the data. Second, to truly minimize the data duplication, now we needed to make the data more accessible and discoverable. We required a central data catalog where users can find the data sets that they need. We wanted to ensure that every data set present on the centralized data catalog was reconciled and validated through a series of data quality checks. Third, with data from hundreds of systems, we required robust permissioning, stringent data security, and stronger governance. We required comprehensive audit trails to ensure that the right people have the access to the right data at all times. I mentioned about Amazon's distributed landscape and its trade-offs, which was data silos across the organization across multiple data producers. Well to address this reality we needed to identify a way such that data can be easily but securely be connected across domains and systems. We selected Data Mesh as our foundational strategy as it advocates for decentralized approach to data management. It advocates for self-service data sharing where business owners can come in, find the data that they need and make a request. That request is dispatched to a data producer who decides the right level of access to be granted. The central data catalog or the central catalog is place where we implement discovery, data governance, and store good documentation of data sets so that users can understand the quality, context, and the usage guidelines of data. Doing these allows us to treat data sets like data products. On this slide, you will see two sides of our Data Mesh Foundation, data producers and data consumers. Data producers are responsible for bringing data from those hundreds of upstream systems across various domains such as payables and receivables into the data mesh. The producers are responsible for data management, data storage, data quality, all those types of things that are required to finally produce good data sets, good data products into the data mesh. On the right hand side are the data consumers. Consumers are the teams that are doing analytics, advanced analytics and machine learning types of use cases. Our goal is simple, integrated once, consume indefinitely. We want to integrate that high quality data set into the data machine a way that is trusted in a way that business analytics and machine learning teams can consume the data easily and start solving business problems as quickly as possible. The central data catalog that you now see on the slide is the place where we implement permissions, apply guardrails for consistent metadata management, define governance policies and audit usage. Throughout the implementation, we are using AWS data integration capabilities such as Lake Formation and Redshift Data Share to share the data without actually moving or copying it. On benefits leveraging the data architecture of AWS we've achieved incredible business agility to evolve our data infrastructure with ever-changing business or technology demands. The time taken to make the data available and consumable has improved from many months now down to just few days. We've achieved faster performance for less cost, saving millions of dollars at this point by reducing our compute and storage footprint thanks to the AWS data integration patterns. We've moved from what used to be hours of long-running ETL jobs to now sub 15 minute job that has improved the data freshness and allowing business to make faster decisions. Our decentralized architecture has helped us to move away from the concept of large monolithic structure of teams to now move the data teams into the business or domain-specific organizations who have the specialized knowledge of data, allowing them to innovate faster while a centralized, horizontal data services team is responsible for data mesh, data governance, and applying consistent guardrails across the organization. Well with this data foundation we can now take the advantage of Generative AI much easier and faster. With our strong data foundation, good data, comprehensive catalog, reconciled data, we are now positioned to integrate these data products into our business processes. Generative AI is a huge opportunity for Amazon Finance. It has a potential to improve many business workflows, allows us to make faster and smarter decisions, and improves the productivity with better customer experience overall. As we looked at finance operations, our first priority was to improve the productivity of the analysts that are multitasking between several customer or vendor requests. At a high level, our data foundation and generative AI provides excellent capabilities to understand the intent of a customer request, identify solutions that align with these business policy documents, combine and present financial data across multiple sources, and finally execute or draft the next steps for review. Our goal is to automate any human routine task and position human as the final decision makers. To meet this end state, first we must identify how do we understand the business context from these policy documents, integrate data and then recommend actions for human in-loop execution. So how did we get started? Well we had the data mesh set up so what was missing was a good understanding of business rules from the policy document. The data in the policy document is highly unstructured, but a great source of reference to understand every human work that is happening at finance operations. We took a two-step approach, working towards our end state. In step one, our goal was to understand the business context from the policy document, while also enabling a conversational experience for the analyst, where they can search and retrieve solutions instead of having to browse through hundreds of pages of documents. In step two, we merge the financial data with the business context to suggest a precise action that an analyst should take to resolve a customer issue. Let's take a small detour and talk about how did we actually understand the context behind these policy documents and integrate data. Well, to build a comprehensive understanding, we utilize the power of AWS Vector Datastore. As you can see from this slide, with unstructured documents stored on S3, we extracted that data, chunked that data, converted into vector embeddings and loaded into a knowledge database. We chose Titan embedding models based on performance, accuracy, and cost benchmarking. Finance was already using OpenSearch for real-time analytics and high-throughput transactional search. Building on what Taz was mentioning before, the addition of vector database capabilities to OpenSearch allowed us to adapt quickly without building any additional data infrastructure. We chose OpenSearch based on performance, based on its ability to search across diverse document types and handle large-scale data volumes specific to our use case. Now, with evolving business, the policy documents change over time. They are not static. So we built a RAC pipeline, which allows us to search and retrieve policy information based on user prompts from analysts using a conversational interface. In step two, we built on the existing RAC foundation and as you can notice on the slide, we've now added the capability to pass the financial data which can allow specific recommendations with data for our analyst to resolve a customer request. But why is this significant? This approach combines the business contextual understanding from the policy document with precise financial data, the high quality data from the data mesh to enable targeted problem solving. I shared the data mesh architecture before. And now you will see that we've added the vector data store and LLMs connected to the centralized data catalog, an integral part of our data mesh. Once again, this is how strong data foundation has allowed us to adapt generative AI quickly and easily. With the combination of generative AI and data, humans are no longer the consolidators of data. It's LLMs that are consolidating the data for our use case. We're in the early stages of our journey, but our initial results are promising and encouraging. They are showing that we are accelerating the time it takes to understand the context of a customer problem. We're accelerating the time to get answers. We are accelerating on what should happen next. We launched a Gen AI chatbot to Q&A on the policy documents and improved the cycle time by more than 80%, improving productivity for hundreds of analysts across a major business channel. The human productivity continues to improve as we onboard more channels to the chatbot and achieve our vision to enable a virtual assistant for every finance operations employee. By integrating the data, we started to enable analysts with well-defined summaries and suggested next steps which will allow analysts to resolve customer queries across thousands of emails with improved productivity and elevated customer experience. Obviously we are not done yet and we have a way to go. We have more domains, datasets, business policy documents to onboard into the data mesh. We need LLMs to answer to different types of issues, policies, and a wide range of prompts with high quality data. As we add more sources, domains, we need to get better at discovery. We need stronger governance. We need the ability to semantically search the data within the data mesh. With good metadata, we can now generate queries and execute them using natural processing language. We still have cost optimization opportunities. About 60% of our pipelines are in scope to benefit with zero ETL and optimize the cost further. I second-task and reiterate that good data foundations on AWS are must-haves for generative AI. I'll end by saying that we strongly believe that our investments in the area of evolving our data foundations in the age of generative AI will transform how we improve the productivity for analysts and redefine the bar of customer experience at finance operations. I hope this was helpful. Thank you. I'll now hand over to Ippo, who's going to talk about AWS foundation or future of AWS foundations in AWS. Thank you. Thank you. Hello, everybody. Thank you, Nitin. So my name is Ippo. I'm an engineer, distinguished engineer at AWS. My responsibility is kind of the technical evolution of our analytical services at AWS. I guess Matt this morning said in his position he's not supposed to have favorite child but EC2 is his favorite child for me I come from Redshift and we're potentially can say that this is my favorite child so as Nitin said with our counterparts on amazon.com data is what it fuels analytical and AI experiences services and in particular it is your data that allows you to go and build specialized customized tailored applications analytical and AI applications that drive your business and you have been has been choosing over the years over the past decade or more you have been choosing our comprehensive set of services to develop these applications. You have been using a multitude of our services and you appreciate the depth of these services and we have especially our price, our performance, our security posture that differentiates us from other vendors. But over the past years one thing that we have heard over and over from you, from the customers, was that even though you really appreciate the depth of these services and the capabilities, you would really like to leverage the same set of services through more unified experiences. And you can categorize, you could say that there are two layers in what you were asking, unification that you were asking. The first one was on the data layer. You wanted the data unified data management layer. And on top of that you wanted a unified development experience that can allow you to develop applications faster and we are excited today this morning Matt announced our evolution of the SageMaker service the next generation of Amazon SageMaker that now it is becoming the center of your data analytics and AI on AWS. What we are doing, we are essentially doing, we're adding, we're enhancing SageMaker with a bunch of capabilities. There are three basic components that I would like to talk about today. The first component is the SageMaker Unified Studio. It's a unified IDE, a unified development experience that can allow you to develop data processing applications. It allows you to run efficient SQL analytics using Redshift, it allows you to go and do development of models, it also allows you to develop generative AI applications and as Matt said this morning we will keep on adding capabilities in this unified development experience by bringing in streaming, by bringing in search, by bringing in business intelligence. On the layer below that we are evolving the capability of our data zone service and we are offering SageMaker data and AI governance capabilities in a easy to use very very powerful capability that we're going to talk about. And then at the bottom of it or even at the core of it we are introducing SageMaker Lakehouse which is an evolution of our existing Lake Formation Glue Data Catalog, Technical Catalog that now allows you to manage and access data from Redshift managed storage, from open file formats in S3, or even the newly announced S3 tables that we talked about this morning, and a bunch of other capabilities that I'm gonna talk. And we're making it super easy to ingest, to move data into this lake house. And by doing that, we are allowing you to collaborate and build faster. You have one unified data and AI development environment, and we have a very broad set of tools that allow you to build generative AI applications. We are bringing down the data silos between warehouses and the lakes by offering a unified data access, data management layer that can be also accessed by open APIs such as the Apache Iceberg REST API. And we are doing that and we are allowing you to set up your enterprise security and governance by offering, for example, abilities such as fine-grained access controls in this unified data management layer. I'm an engineer and there are many talks about the next generation of Amazon SageMaker throughout this week. And I'm pretty sure many of them, most of them, will index on the unified studio experience because it's also easy to demonstrate. I love it. I have been using it. It's very, very impressive. But since I am an engineer, I will actually start from the diverse order. I will start from the unified data management layer. You have been using a bunch of tools from our service, a bunch of our services in order to address your use cases. You have use cases that require the performance, the high throughput, the ability to run multi-statement transactions and obviously to offer low latency by accessing data from say main memory and you have been using data warehouses in order to achieve that. But at the same time you also love data lakes. As Matt said this morning we have thousands, if I remember he said, hundreds of thousands customers that have built data lakes on S3. You like the flexibility of the storage, you love the flexibility of S3, you love the ability of the open, the flexibility of the open file formats. 15 years ago I was in my previous employer, I was part of developing Parquet back then. It was early on, we were developing, innovating on Parquet and I could not imagine back then the impact of this thing that would have in the industry. Pretty much everything right now is on Parquet file format and it is impressive to see this thing, how it has evolved over the years. And you also like the ability that you can have different tools, specialized tool to access this data. But you don't want to be thinking about this split brain. You don't want to be thinking about warehouses and lakes and have catalogs in different places and setting having to set up governance in two different places and whatnot. What we have been asking is a unified solution what people you want us to bring the warehouse capabilities and the lake capabilities together. I was advocating when we were developing this capability over the past year or more I was advocating to call it where lake. Because warehouse and the data lake were lake. But my suggestion did not go wrong. So we turned it around and we called it Lake House. So Lake House is a... we are bringing in the capabilities of the warehouse of the lake and we allow you to bring any service to operate on top of this data and allows you to set up consistent governance access controls to this data. And that's what it is. The SageMaker Lakehouse is a unified data management layer. It is open. You can access all of your data on AWS on analytical services on AWS through an open API. We are offering an Iceberg REST API so you can consume it these things from our one piece services as well as third-party services as long as they can speak the Iceberg REST API and it is secure. Let's dive a little into some of the capabilities. The first thing we believe at AWS is that you need to have flexibility, you have different needs, you need to have flexibility and options in the storage. We are offering three different types of storage for analytical data. The first one is very commonly used, the data link. You can have a general purpose S3 bucket and you can store your data. Most of you, probably with the ease of hand, most of you you will have a data lake somewhere in your ecosystem. You can access the data and one of the file formats that has been coming increasingly popular is Iceberg. And you can access your data using the Iceberg protocol such as the REST API. On AWS, we are offering optimizations on your Iceberg data lakes. For example, you can go to the Glue Data Catalog, enable automatic table optimizations, and we start making sure that we are keeping your data combed and whatnot. We are also enabling automatic statistics collection so that we can populate the technical catalog with statistics that then the query engines such as Redshift can take them and use them in cost-based query optimization in order to make sure that it generates efficient query execution plan and has good performance. And you can access the data from one-party services such as EMR, Spark, Athena, Glue, SageMaker, Redshift, as well as third-party services such as Apache Iceberg, Presto and what have you. Today this morning Matt also announced that we are offering also a managed iceberg as S3 tables which is a managed iceberg service by powered by S3. It's a new class for Apache Iceberg Data Lakes. It has high throttles on the written right throttles, RPS, which allows you to have very good performance right off the bat and it is a fully managed iceberg table maintenance. Estuary tables are part of the lake house, of the SageMirme lake house. And we offer management, automated management for these S3 tables. For example, right off the bat, you have ability to combat the data. And another request that has been highly requested was the ability to have snapshot retention. So you can say that this snapshot, when it expires, go and drop it and also clean up the S3 data that are associated and are not being referenced in full-cutter versions of this iceberg data. But we're also bringing the Redshift managed storage into the lake house. In Redshift, we have over a decade of experience and technology on analytics optimized proprietary format and we are opening it up to the SageMaker Lakehouse. If you have an existing Redshift data somewhere, we are starting today, you have a button to publish this database to this namespace into the lake house and you can allow now any other compute to go connect to these databases and start reading and writing to it you can even go to the lake house and create new database powered by the Redshift manage format and in this proprietary storage layer, which can be accessed through the same Iceberg REST API, we have many many years of experience and technology, underlying technology based of machine learning, which allows us to continuously monitor and optimize the system, for example, changing the physical design of the database in order to ensure to give you optimal performance. The Redshift Manas Torus is an analytics optimized format that is also very amenable for near real-time operational analytics applications. For example if you have some stream application streaming that inserts little data continuously into your analytical store, doing it on the Retsch with Mana storage is very efficient. And not only that, but performance does not degrade as you keep on inserting small, small data because of the way that the format is implemented. It also gives you multi-statement, multi-transaction, kind of transactional consistency. And what we have seen is that by having, for example, Spark writing to Redshift Managed Torus, you can get up to 3.5 times better performance than having Spark writing to, say say Iceberg on S3. And when it comes to low latency queries, we have a very common like you have a business intelligence application or you have many concurrent users that want to get answers very quickly, the Redshift managed storage offers up to seven times higher throughput than any other option out there. So that's what it is. The SageMaker Lakehouse is a unified technical catalog. You can put the Redshift managed storage, you can put the open file formats in S3, you can put your data in S3 tables and you can access it from any service that speaks 1P analytical service on AWS, as well as 3P services that speak the Iceberg Crest API. It's open and it is also secure. We are also spending a lot of our energy into making sure that you can bring in data into the SageMaker Lakehouse very easy. In particular, over the past two years now we have been innovating a lot around the zero ETL technology. We have GA the ability to ingest data, zero ETL data from Dynamo, from Aurora MySQL, Aurora Postgres, RDS MySQL into Redshift and by the mechanism of publishing that to the lake house. And I can tell you that every morning one of the reports I get is what is the latency? What is the other latency of all the customers that have been using Zero ETL and they are ingesting data, writing data to Aurora, what is the latency it takes to appear on the other side and this data to become available for analysis on the other side, on the lake house. And then does anybody have a guess? It's actually on average it's single digit seconds, around eight or nine seconds today on average to move the data from Aurora to the analytical side. And today as Matt announced this morning we are also excited to extend the capabilities of our zero ETL capabilities by now enabling zero ETL from enterprise applications. We are starting with eight applications that being Salesforce, SAP, ServiceNow, Facebook Ads, Zendesk, Zoho, Instagram Ads as well as Salesforce Management, I think the Pardot is Management Cloud, Marketing Cloud. And those now in the price applications can be zero ETL into the SageMaker Lakehouse and you can consume it on the other side with any tool of your choice. We are also bringing the federation connections that we have in the Lakehouse and we are also bringing in streaming. We have been having streaming support in streaming ingestion into Redshift and again to the Lakehouse. We have been supporting Kinesis data streams as well as Managed Kafka for some time now. As of last month we opened it up to Apache Kafka as well as Confluent and we have customers that are ingesting gigabytes per second using streaming ingestion to Redshift and Redshift managed storage and buy that to the lake house. And all these things are become of a unified technical and business catalog and in the studio you can go navigate your data and you can see all the data that are indexed by the Lakehouse. You have a dynamic catalog hierarchy and you can have managed catalogs, you can spin up a catalog that is supported by your S3 bucket or by Redshift managed storage, but you can also bring in existing Redshift installations as well as the S3 tables and the federated sources. The way it works, it's for example in Redshift, you go, you have say we have exabytes of data managed in Redshift, you press a button, it goes to the Lakehouse as yet another catalog in the lake house. And then you can have other existing Redshift compute, you can spin up another serverless compute to start reconnecting and reading writing to that, a third one some other data scientist that is very scary, writes these very crazy joins that you don't want to be isolated from the rest of your production and you can go and connect to that and start reading and writing. You can also use EMR Spark now or Glue or Athena or even third-party applications that can speak Iceberg APIs. And you can set up your access controls at this layer. For example, we give you ability to do column level security, row level security or a combination of them or we give you the ability to do tag-based access controls to all the data that are at an index in the SageMaker Lakehouse. And on top of that we have the data and AI governance powered by data zone and queue that allows you to create projects and you can put in data assets, models, compute, then AI applications and with a git based collaboration it allows you to go and develop and create applications on top of that. And we are adding that we have we have this also gives you, it's a business catalog that allows you to go in and discover data sets in a Q-based format. You can say, hey, find me the data sets that have information about my sales. You can subscribe to that, put them into the project and have your team collaborate and develop over there. And all of these are happening using the Iceberg REST API. Iceberg has been taking off over the past few years and we have many, there are many services first party and third party that can speak to that. So we are putting a REST API on top of that and you can come in with a SageMaker Unified Studio or analytical services or third-party services. I did ask for two hours for this session because I have too many things to say. Unfortunately they told me I have only one and I have only four minutes. So let's go to the development experience. The unified studio brings in together previously we were asking you to jump around consoles, go and do something on the EMR console or the Glue console, on the Redshift console or Athena or in a Glue data catalog and that was creating, it was challenging. So what we did and also developed generative AI applications on top of that. So the unified studio is a single data and AI development environment that allows you for example to go and create applications, machine learning applications, generative AI applications, it can allow you to do data preparation if you are a data engineer, you can run SQL queries, you can also use Bedrock IDE to develop generative AI applications. And you get access to the projects where you have put in your data, your code, your models and your compute. If you have an existing EMR cluster or some glue endpoints or Redshift endpoints, you can bring them in, you can keep on using, if you have developed some processes, you can keep on using those, but you are also evolving the way you can consume this setup you have already created. So you can train and deploy AI models using SageMaker AI, you can build generative AI models using SageMaker AI. You can build generative AI applications with Bedrock IDE which is in preview. You can prepare and integrate data using EMR or Glue and you can run SQL queries using Redshift. And all of these operations are accelerated with Amazon Q Developer, which is the most capable generative AI assistant for software developer. We have put Q everywhere there, you have a very nice natural interface to go and develop. So that's it. I tried very comprehensively to explain, to give you a glimpse of how we are evolving what is the next generation of Amazon SageMaker. If you are using our existing services you don't have to go and change something, we are bringing in the existing services and we have been integrating them in Amazon SageMaker and we are excited we want to hear from you, we want to see what you're gonna develop with that and looking forward for your feedback. There are many sessions over this week about SageMaker, the next generation of SageMaker. Please attend, please send us feedback and please develop on us. Thank you. | AWS re:Invent 2024 - Data foundation in the age of generative AI (ANT302) | 3,527 | AWS Events | 20241205 | An unparalleled level of interest in generative AI is driving organizations of all sizes to rethink their data strategy. While there is a need for data foundation constructs such as data pipelines, data architectures, data stores, and data governance to evolve, there are also business elements that need to stay constant such as organizations wanting to be cost-efficient while efficiently collaborating across their data estate. In this session, learn how laying your data foundation on AWS provides the guidance and the building blocks to balance both needs and empowers organizations to grow their data strategy for building generative AI applications.
Learn more:
AWS re:Invent: https://go.aws/reinvent.
More AWS events: https://go.aws/3kss9CP
Subscribe:
More AWS videos: http://bit.ly/2O3zS75
More AWS events videos: http://bit.ly/316g9t4
About AWS:
Amazon Web Services (AWS) hosts events, both online and in-person, bringing the cloud computing community together to connect, collaborate, and learn from AWS experts. AWS is the world's most comprehensive and broadly adopted cloud platform, offering over 200 fully featured services from data centers globally. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—are using AWS to lower costs, become more agile, and innovate faster.
#AWSreInvent #AWSreInvent2024 | 2024-12-04T20:15:52.464848 |
https://www.youtube.com/watch?v=goqWzBW7hrM | Thank you everyone for being here today. As you mentioned, my name is Rachel Pang. I work at Anthropic. I lead our healthcare life sciences industry. So I've been working a lot this last year and a half, and hopefully to inspire all of you to kind of take it back to your organizations on some ideas on how you can start leveraging Claude models via Bedrock. So just to start off with some of our founding beliefs at Anthropic, the language models, as I'm sure you all have seen, the language models are going to continue to evolve at a very exponential rate. Just even looking at quad 2.1 last year compared to 3.5, there's been an extremely big jump. I'll just say for myself, it's been much more fun selling 3.5 than 2.1. And these advances, they will have the potential to disrupt the labor economy. And it's really important and something that we believe in our DNA at Anthropic, that this technology needs to be deployed responsibly. So just an introduction of who Anthropic is. You would like to learn more about us. We have a booth over there, but we really believe in responsible AI. We created this concept called constitutional AI. And that's something we really believe in. And. We created this concept called constitutional AI. And that's something we really believe in. And I think that resonates a lot, especially with customers within industries such as healthcare life sciences. And although it's still a very, let's say, new phase, we're still very new in the AI space, AI is already starting to have massive impact in organizations. So for example, especially in support, we're seeing 20% to 35% faster response times for support teams. For coding, it's been pretty incredible. Claude is extremely good at coding tasks. 30% to 50% faster content creation for sales and marketing, and 20% to 50% faster task completions for back office work. But that's not what you guys want to talk about today, I'm sure. So I want to tell you a little bit more about Cloud 3.5 Sonnet before we get into the specific HCLS use cases. But the most recent Cloud 3.5 Sonnet, it strikes the ideal balance between the market-leading intelligence, speed, and cost. It's available now on Bedrock. But what's been really interesting is in the last few weeks, we've announced some extremely cool features, such as computer use. And I'm going to share that a little bit here. So what is computer use? I think it's better just to explain it in a story. So let's say I have to plan my daughter's birthday party. I tell Claude, hey, I want you to book a jump house and then I want you to book, I want you to order me a pizza for all the kids and I need you to order me a bunch of stuff on Amazon for the party favors. With computer use, I give this instruction to Claude, and it actually does this task for me through the computer use, so through a bunch of APIs. This is now in public beta. I really encourage all of you to check it out. We know at Anthropic it's not perfect yet, but this is just to show that we are continuing to innovate on some really amazing features. And today, I actually talked to a customer, and they were extremely excited because they immediately thought, wow, I could do this for all of the different forms that I have to do within life sciences. Just imagine how much you can simplify some of your workflows within the life sciences, just imagine how much you can simplify some of your workflows within the life sciences space. So again, some of the example use cases you can see here. We launched this back a few weeks ago in October. It's currently in public beta, so not in bedrock, but it will soon be available. Unfortunately, we can't do the demo here. Some technical difficulties, but we have a really amazing demo on our website, so I encourage you to all check it out. Cool. So just a little bit more information about what we've been seeing in terms of trends in the last year and a half across HCLS. So, Novo Nordisk, for example, this was a public case study. They were able to use Claude partnering with MongoDB to reduce clinical creation documents from 12 weeks to 10 minutes. This actually prevented millions of dollars in opportunity costs for them. With Pfizer, they've been using Claude for drug discovery, specifically on drawing on scientific content. And then also within patient triaging, we're seeing a lot of customers using this as a patient interface. So they're using it to obviously message with patients and creating a more seamless patient experience. And diving a little bit deeper on the Novo Nordisk use case. So Novo Nordisk, something that they were seeing a lot of challenges with was with their clinical study report, which I'm sure all of you can understand and relate to. It typically took them about 12 weeks to compile, and this really delayed them in getting new treatments to patients. So what they did is that they developed something called NovoScribe. And it's a generative AI solution that used Bedrock, Langchain, and MongoDB to automate and streamline the CSR creation process. And again, what this did is it changed the process to go from 12 weeks to 10 minutes. And it also increased the quality of the CSR outputs with fewer resources. So this is just one of the examples. And what's really amazing, I would say, with these types of use cases is that you can easily translate these to your organizations, right? I mean, the processes are very similar, and these are things that you can easily translate these to your organizations, right? I mean the processes are very similar and these are things that you can start to do them yourself within your companies as well. Just to share a little bit more as well and what we're seeing across the industry, in terms of use cases that I've been seeing and talking to customers, we see a lot within, I would say, the clinical trial side. A lot of experimentation happening on the R&D side and a lot on the commercial side. I think now we're seeing companies wanting to experiment more in R&D, but it's still, I would say, early infancy stages on seeing the value of where generative AI will be. But in terms of just being able to cut costs, to help your scientists to minimize time and things like that, the clinical trial side of things is where we're seeing about 60% of our use cases. So what does this mean, I guess, from you to take away for today? So I'm not sure. I guess raise your hands. Who's used Claude in the audience? That's amazing. OK, great. So as you know, it's available on Bedrock. You can access all of our models. We try to keep model parity with Bedrock and as well as first party. So I think what could be a really great starting point for all of you is to look at the prompting suite. You can use our meta-prompter that's available on our website. There's a ton of documentation there. There's also a lot of resources on Bedrock as well. We also are focusing on contextual retrieval, so something called prompt caching. So the idea is that we're going to continue to make our context windows longer and longer. Right now I think we're at 500,000 for specific enterprise customers and we see a world where it's going to get into the millions, but prompt caching is a way for it essentially to do fine-tuning, kind of like fine tuning, where you can actually have the model actually read the prompts and not have to do actual fine tuning. You'll see more agentic use cases coming out. And then, of course, this comes up a lot. But how do we understand the model? Anthropic has started to do this. A lot of interpretability work, and we will continue to invest resources in this space. I kind of flew by that a little bit quickly, so if there are any questions from the audience, I'm happy to take them. | AWS re:Invent 2024 - Claude's role in streamlining drug development (HLS213) | 523 | AWS Events | 20241205 | In this lightning talk, the Anthropic team shares how leading life sciences companies are using generative AI to revolutionize drug development workflows. Learn how AI is addressing the “cold start” problem in creating critical documents, transforming it into a “quick start” solution. Also explore real-world applications in rapid protocol creation for clinical trials, efficient generation of safety narratives, and streamlined production of clinical safety reports.
Learn more:
AWS re:Invent: https://go.aws/reinvent.
More AWS events: https://go.aws/3kss9CP
Subscribe:
More AWS videos: http://bit.ly/2O3zS75
More AWS events videos: http://bit.ly/316g9t4
About AWS:
Amazon Web Services (AWS) hosts events, both online and in-person, bringing the cloud computing community together to connect, collaborate, and learn from AWS experts. AWS is the world's most comprehensive and broadly adopted cloud platform, offering over 200 fully featured services from data centers globally. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—are using AWS to lower costs, become more agile, and innovate faster. | 2024-12-04T20:40:54.223796 |
https://www.youtube.com/watch?v=shE5rBo3sBs | Thank you. Please welcome the Vice President of Technology at AWS, Mylon Thompson-Bukovec. Hello, everybody. Welcome. My name is Mylon Thompson-Bukovec, and I'm a Vice President of Technology at AWS. Now, I have been working on AWS Cloud Services for a long time, since 2010, and I have spent much of that time working with customers like yourself on the right data strategy for your business. Today, I want to share with you three different data patterns that have emerged among AWS customers across industries for data-driven innovation. The success of any modern business is tied to your effective use of data to differentiate. Petabytes of data power the customer experiences of NFL's NextGen stats, AstraZeneca's drug research, Pinterest's visual search engine, and Netflix's streaming business, among many other examples. Now, you have heard the phrase, good data means good AI, a lot recently. But that is not new. The importance of good data is nothing new to any of us in this room. When Amazon first started selling books way back in 1994, data was used at the very start to make important decisions about inventory management and personalization for book recommendations. That is still true today. And data drives everything from the MLI models that let your packages get there on time to how we in AWS manage the operations of our services. Data is the cornerstone to all of Amazon businesses, and that is true for you as well. Netflix transitioned from DVD rentals to a streaming business in 2007. And that was a huge shift for rethinking all of how they use data for the digital age. We had ride sharing companies like Lyft that disrupted transportation by using data to drive algorithms that matched up drivers with riders and optimized routes based on real-time changing traffic patterns. And Airbnb used data to create a marketplace that brought together hosts and guests. Now, the most common thread between all of these companies and many others is that they all used vast amounts of data for their business. There were major technology shifts, ranging from the introduction of Elastic Cloud Storage and Amazon S3 to open source innovations like Hadoop and OTFs like Apache Iceberg that made it easier than ever to use data for your business. Our first customer back in 2006 for S3 was SmugMug and SmugMug is a company that is in process of transforming the world of photography. Every day, people from all walks of life, from every corner of the world, pause for a moment to capture something they find special. And though the people, places and perspectives may vary drastically, that shared experience of capturing something meaningful. A moment worth remembering. Not only has the power to connect us all, but to change the world. We understand the value and power of these moments. And we're here to preserve what matters most, whenever and wherever you need us. to two decades, and really it's a unique perspective, given he was the first customer of S3, I'd like to invite on stage one of the very earliest AWS customers and my friend, Don McCaskill, the CEO of SmugMug, to the stage. Don? So as you can see on this slide and many of the future slides, these are the photos shot by that great set of photographers that we just saw the video for. I'm thrilled to be here today. As you heard, I'm the very first S3 customer, so we've been doing this a long time, for more than 18 years. Pretty remarkable. And one of the things that I want to talk about is our size and scale. When we started working with AWS all those years ago, we were an enormous customer for them at a whopping 70 million photos. Today, we're at 50 billion photos. So you can imagine the size of the scale over the years. In fact, we now have more than 100 million people a month using our services and our platforms, all built on top of AWS. So let that sink in for a minute. When we started 18 years ago, we had 70 million photos, and now we have more than 100 million people every month using it. The size and ability for us to grow and scale on AWS has been transformational for us. We have a couple hundred petabytes of data under management now, so it's really getting enormous. Now, some of these big numbers were helped a lot by our acquisition of another platform, Flickr, about six years ago, which we now own and operate on top of AWS as well. And when we first got started way back in 2006, we were most excited about storing all of those photos. 70 million, which seemed huge at the time, 50 billion now. But it eventually dawned on us that there was a lot more we could do with S3 than just storing photos. Now, back then, there was no sort of terminology or best practice guides or anything about big data. In fact, I don't think the term even existed in 2006. So we didn't have anybody to draft off of. We didn't have best practices or consultants or anything like that to figure this out. We had to learn by doing, and we made a lot of mistakes along the way. But it quickly dawned on us that we could use S3 for a lot more than just storing photos, that having an unlimited, bottomless, inexpensive storage tier meant that we could transform the way we managed our data. Our first big light bulb moment was realizing that if we got out of our data center mindset and we had seven data centers at the time, that we could keep all of our data. We'd gotten in some sort of bad habits around optimizing our data footprint because it was just so costly to run in our own data centers where we would keep fresh data around for say one to three months and then summarize it and age it out and eventually delete it because we just didn't have the storage capacity to keep it around but being on s3 enabled us to say take our web access logs and keep them for forever we knew that that that data was super useful in sort of a one-month time frame or maybe a three-month time frame, but it didn't really dawn on us that being able to compare data year over year, or in our case, decade over decade, would turn out to be super valuable as well. So we learned to keep all of our data, stop deleting it. Ooh, big win for us. And our next light bulb moment was realizing that we should really keep the data in its original format. We had similarly, like being used to being in our data centers, we tried to optimize the format either to make it smaller or maybe to make it work with our current software stack, which wasn't very future forward and future looking because of course software evolved and our software stack would evolve. And so we learned quickly the hard way that we should just keep all of our original data in its original format because that gave us a lot more flexibility and we didn't have to worry quite so much about cost because it was in S3 and on top of that we realized that we had better get better at keeping our metadata about all of this data we were accumulating around, things like timestamps and the origin of the data generation and all of that sort of stuff. So we got good at keeping all the data and keeping it in its original formats. And luckily, we were so glad that we did, because we had no idea back then that tools like Athena would come along that would literally let us query the original format data without even having to transform it. It was transformational. And then sort of our next big light bulb moment or epiphany was when we realized that we had inadvertently restricted our data and put up sort of access controls and gatekeepers and everything in front of it that were limiting our team's ability to get their job done and ask the data questions and get answers from it. And we did it out of an abundance of caution because we wanted things to be safe and secure. And we do deal with sensitive data. We're dealing with consumers and customers all the time. But there's a lot of data in there, average order value, sales volume, most popular photo of the day, or things like that that weren't sensitive, but we were still keeping it all locked up. So we learned that we really need to get as much of the data unlocked and share it with our people, our product teams, our business leaders, so they could make the fast, quick, great decisions that the data really existed to serve. And then last, but certainly not least, after we made it accessible, we discovered that that wasn't quite enough, right? We needed to go one step further and make it easy to get access to. It turned out that some of our sales, marketing, finance teams didn't love creating SQL queries to get access to some of this data. So we had to meet them where their tooling lived. We had to integrate with tools like QuickSight and create dashboards and natural language interfaces and things to really enable all of those teams to really unlock all of the value that we were now creating and storing in these vast, vast pools of data. So just recently, a couple of weeks ago, I mentioned that we had acquired Flickr six years ago. A couple of weeks ago, a team at Flickr analyzed tens of billions of data points across hundreds of millions of customers to answer a crucial, critical question for us that had long eluded us. When we acquired Flickr it was an amazing community but it was kind of a disaster of a business. It was in decline, it was losing a lot of money, and today I'm super thrilled that it is growing again and for the first time in 20 years it's profitable but it should be growing faster and we'd like it to be even more profitable so we can continue to invest in that wonderful community. So this tiny team at Flickr took advantage of all this work we'd done to create huge pools of accessible, easy-to-get-to data and answered this massive question for us, which is we have more than 100 million people using the service and platform who don't pay us anything. And then we have a much smaller cohort of members who gladly pay us a monthly or annual subscription fee. And we haven't really understood the behavioral differences between the two. We think we do now as a result of having all this data unlocked and accessible to our teams. We think we've identified a set of behaviors that are unique for those paying customers and we are hard at work designing new product experiences to really unlock that value. We're really excited about it. So I'll just leave you with a couple of our suggestions. If you want to transform your organization like we're transforming ours, keep as much data as possible. Don't transform it or if you do transform it immediately, at least keep the original data around, especially keep the metadata, and make it accessible to as many people in your organization as you can, and meet your people where they are. After all, it's the people making the decisions that the data enables for massive transformation. Thank you so much. Thank you, John. Amazing. Thanks. John McHasso. So today I said that we were going to talk about three patterns that are incredibly common across the breadth and the variety of AWS businesses. Almost every cloud journey starts with the first step of aggregate, what Dawn talked about, which is bringing together all of your data sources in S3 so your application owners can take advantage of the diversity of data and the depth of data in your business. Now, this is a very big change from on-premises data architectures. It moves companies away from expensive, vertically integrated data and compute all tied together, and it lets you scale all your data at a massive rate separate from the compute that uses your data. Application owners in the aggregate model have access to the data in a federated ownership model, and they can use it from everything from fraud analytics to knowledge bases, and it lets you take your proprietary business information and bring it to any business application. Many of our customers use aggregate data pattern on S3 very successfully at a very high scale, but we have other customers that take it to the next step, and that is the curate data pattern. In a curate data pattern, application developers no longer have direct access to all of the aggregated data. Instead, application owners select from a much smaller subset of curated high-quality data sets offered to internal or external data marketplaces. In a curate data pattern, you have the ability to control the quality of your data and you have the ability to control the access to it, which means you can manage the governance and you can add a lot more context into your product and your data. Now, sometimes AWS customers build on that data product concept with something called the extend data pattern. And what that does is it adds a data API layer to not just access curated data sets, but to add semantic meaning and standardize on the usage of their data. In Extend, application developers or newly developed agentic systems interact with the data using an API that operates on a single data model. It provides connectors and hooks for application extensibility, and it standardizes how to collect and use data for applications. Now, when you're aggregating data, you're basically sending your data from many different sources, like sensor feeds, data processing pipelines, data streams, log feeds, data warehouses, databases into Amazon S3 to store and use for business applications. This is where many of our customers actually shift another part of their architectures from a legacy batch system to using real-time data streaming services. Many of our customers now are using AWS services like our Manage Kafka service or Kinesis Data Streams to aggregate all that data in real-time from all of these different sources to consolidate, process, and analyze large volumes of information in just one data pipeline. Now, many of you in the audience today are already using the aggregate data pattern, and you are in good company. It is the reason we have over a million data links running on AWS. And we find that customers like the aggregate data pattern because the federated ownership model works well for your organization. It works well culturally and it works well with your different business units when you have so many different application owners. And when you have Amazon S3 as a foundation of aggregate, this data pattern will give you the most flexibility in how you want to use your data across a distributed organization with many application owners. For one example of aggregate, Georgia Pacific, a pulp and paper manufacturer, streams real-time data from over 85,000 vibration sensors on their production line, so they can correlate the telemetry streams from their connected equipment and production lines with other business data like scheduled maintenance so that they can build accurate predictive maintenance models. That is an example of aggregation of your data, again, using a real-time streaming model. Now, there are a lot of advantages to aggregate. By centralizing your data in Amazon S3, which is the common storage layer across all three of these data patterns, application and other members of the team, like data scientists or AI researchers, they have access to a wide variety of raw and processed data sets to use for experiments and application innovation, simply the act of bringing your data together into one place can significantly change the speed of your business. For example, Invista used to run analytics on their visual inspection site, stored in silos in each plant site. Before their migration to AWS, it took their data scientists two months to get their hands on just one plant site's historical data to perform analytics. And by bringing all of that data into AWS, now they can do their analytics in minutes or hours on an aggregated data set because it's more self-service. However, if you are using the aggregate model today, or you plan to, the key to success in the aggregate model is to apply standards in a federated model of data ownership. Okay? So I'm going to give you some examples of different standards. It starts with putting all of the data in S3. That is one standard. For example, Roche, a pioneer in healthcare, they use Amazon S3 to store all their data, genome sequences, radiology images, voice recordings, videos, medical publications, you name it. And when they standardize, they standardize in a second way, which is to do a single ETL pipeline, okay? And so they're using S3, but they run all their data through a single ETL pipeline, and that gives them standardization because it enforces consistent and accurate results across different document types, which helps their different users like analysts or business users accelerate the time to get to the data that they need for the right task at hand. In addition, I have to say many of our largest AWS customers, and that means Netflix, New Bank, Lyft, Pinterest, I can go on and on. They are now standardizing on a file format called Apache Parquet. And they're using it to store their business data, with many of them using open source Apache Iceberg to represent that Parquet data as a table to update it and to interact with it with SQL queries. Now, any text or numerical data that can be represented as a table to update it and to interact with it with SQL queries. Now, any text or numerical data that can be represented as a table, like credit scoring, transaction records, inventory reporting, all that data can and is very often stored in Parquet files in S3. It is a very common form of standardization in data lakes today. In fact, that means that Parquet is one of my most fastest growing data types in S3. I have exabytes of Parquet data stored in S3 today, exabytes of Parquet. Parquet is not a very big file type. It's a compressed format. I have exabytes of it. Customers make over 15 million requests per second to Parquet data, and we end up serving over just hundreds of petabytes of Parquet every day for requests. That should give you a sense of how that has gotten standardized in the largest data lakes running on AWS. Now, this combination of Parquet files and S3 and Apache Iceberg, it's what's powering these data lakes for some of the most data intensive companies. Pinterest has over an exabyte of data in their data lake. They have thousands of business critical iceberg table, and they are an example of a company that has standardized all their tabular data on Parquet. Now, as I said earlier, AWS is constantly evolving to build what you need for your business. And here at reInvent, we are going to be introducing a set of new capabilities that make the aggregate data pattern even more easy to use and even more powerful for your business. I'm going to touch on just a few of these, but there's many others that you will learn about this week at reInvent. Today, as Matt said this morning, we are introducing a new capability called Amazon S3 Tables. That means we are introducing fully managed Apache Iceberg Table support in S3 tables. That means we are introducing fully managed Apache Iceberg table support in S3, native to S3 for your Parquet data. If you store your Parquet data in this new bucket type, you are going to get three times faster performance for your queries and up to 10 times faster transaction or more transactions per second compared to doing it on your own, doing Iceberg on your own with Amazon S3. Now, S3 tables takes care of the work of maintaining the Apache Iceberg table layer that sits on top of your Parquet data. And that means it's even easier for you to standardize on Parquet files in your aggregate data pattern and then use Iceberg, any Iceberg compliant client to access your Iceberg tables. So for the first time ever, as of today, business users can now build business dashboards directly on Parquet data stored in S3 using Q4 QuickSight preview integration with Table Buckets. I am so excited about this. Let's take a look now at what a business user can do with Q4 QuickSight and S3 table buckets. I'm going to go ahead and get a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of a little bit of. Pretty amazing. I have talked to so many customers who are excited about Q for QuickSight. The ability to type in a natural language question and drill down into beautiful dashboards is so compelling. And now you can take that same capability and use it directly on your S3 table buckets and all the parquet that is the business data that runs your company. Now, I am also excited to say that there's another big change to S3 that we are announcing today, and that is S3 metadata. When you have a lot of data, one of the big challenges that you have is finding the data that you need. And by introducing S3 metadata, we are providing a capability where we generate and store both system and custom metadata for any object. And we're giving you access to that metadata in an S3 table. This is going to be incredibly easy to do aggregate with S3 metadata because you're going to be able to run a SQL query in any Iceberg compliant client and find the objects that you want to use for one of your business owners. So when you turn on S3 metadata for new objects, which is available in preview for you today and coming soon for the existing data you have in S3, whenever a new object is added, we automatically store information that we generate from the system, like the time that an object was uploaded or its size, or as well as any custom metadata that you put in your object header. You can also, after the fact of adding the object to storage, add a custom tag to your metadata, and what that can do is store things like data classification or other properties that help with the data lineage for the object. So a custom tag can be, I don't know, a streaming source, it could be the application that generated it, it could be a data classification. All of this metadata is captured and accessible in real time, and again, queriable by SQL in any Iceberg compatible tool. Now, we plan to extend this ability so that customers can add whatever semantic meaning you want in the near future to your metadata. We want to have unlimited metadata in S3 for you, whatever semantic meaning you want in the near future to your metadata. We want to have unlimited metadata in S3 for you. And that means object metadata, which at the end of the day is context about your data, will evolve into its own data lake, constantly queried, constantly searched to find data sets that your business owners want or your auditors need to audit. With S3 metadata today, you can find objects that match the results of a SQL query. So, it can be objects that have the same custom tag for data classification, they're of a certain size, they originated from a specific data source, like a stream. And this idea of metadata at the layer of S3 means that we are now introducing data understanding into your object storage, so you have the power of object context in your data layer itself. Now, with Curate Data Pattern, instead of having all these individual application owners developing and using their own data sets from aggregated data, the company standardizes on a few data sets that are used across the company. The data and business catalogs like AWS Glue Data Catalog and Amazon Data Zone become an important standardization to the Curate data pattern. Application data developers then access production data sets via an internal or external data market place for their applications, and they no longer go directly to the aggregated data and storage. Now, Curate makes it easier than aggregate to govern usage as well as audit for compliance because it shrinks the surface area for your exposure to just a few data products. A great example of a company that's using the Curate data pattern is Cox Automotive, who is transforming the way the world buys, sells, and uses vehicles. Just a fact for you, more than 60% of automobile shoppers visit Cox properties like Kelley Blue Book or Auto Trader to start their car purchasing process. They aggregate all of their data in S3, but they built a really rich set of data products over that data, which is used for everything from analytics to powering vehicle valuations. Now, Cox Automotive centralizes and catalogs petabytes of data in an internal product that they call the Cox Auto Data Marketplace for developers to use. This is the distribution vehicle for your data products, the internal data marketplace. And they say that the thing that makes Curate Data Patterns so successful for Cox Automotive is the quality of their data, the richness of the metadata in their data, the richness of the metadata in their data catalogs for data discovery, and the hub of having a data marketplace so all their application owners can come to one place to find it. That is a very common set of best practices for people who are using data products and the Curate Data pattern. Another customer that's built an internal data marketplace is Siemens. Siemens centralizes their data in S3, and they have hundreds of petabytes of data from over 300 sources, and those are used to build data products across more than 70,000 internal data consumers. Now, while the data providers are responsible for data ingestion into S3 and maintaining the technical metadata of the datasets, they have this central data marketplace that they call Data Lake to Go that lets consumers find and explore datasets that they need, like a research that wants to build an ML model, or somebody who's trying to optimize their rail operations. Now, before this, before they had data products, it took their internal customers weeks to contact the owner of the data and request access. This is probably very familiar to many of you in this room. But by changing that into an aggregate model with data products, they self-serve their data access now, and people can find anything they need, and they're seeing massive productivity gains. With data products and data like to go, developers were able to build an application that they call intelligent document mapping. And it takes a variety of different documents in their data set, and it builds step-by-step operations which include analyzing the design, creating a bill of materials, even enhancing the drawings. And this application that they built on a data product has really helped them save money and time. Before, their engineers at the factories had to do all of these steps manually. And by putting all of this data into data products, they're saving more than 15,000 hours of work, almost 1 million euros per year, just from centralizing the use on a few well-curated data products. Now, we have many customers using data products in the curate pattern, and to share his story, I'd like to invite onstage Moeed Waheed, the Executive VP and CTO for Experience Financial Services and Data Business. Moeed, welcome. Thank you, Mylan. Good morning, everyone. I'm super excited to walk you through the journey of how we build the right data strategy to build high quality data strategy to build high-quality data products that significantly unleash opportunities for our customers. Experian is a purpose-driven company. We are there when people are making life's important decisions like buying a car, buying a home, or sending kids to college. At least these are the very important decisions I made in my life. We serve 1.5 billion consumers and 200 million businesses with the credit data with the accuracy of three nines across the globe. Our obsession to precision and reliability empowers our clients to make confident decisions every single day. As we embark on our technology modernization journey, our goal was simple, to build the first, best, and only product to delight our customers. I'm going to structure my conversation today, six years of our journey, in four phases. The pre-cloud, aggregate, curate, and AI or extend phase. During the pre-cloud phase, we faced several challenges. We had a monolithic code base, which was preventing us from innovation, and we were using closed- source technologies. Our ETL processes were lengthy and complex, causing significant delays. The data duplication was widespread because of the siloed data across. The data velocity was limited to weekly, monthly, in some cases, quarterly. There is significant underutilization of our data. And a few business processes used to take more than 24 hours. The reason because we had to gather the data from various ETL jobs. These inefficiencies was major roadblock in achieving our business goals. As we embarked on our technology modernization journey to address these challenges that we faced during the pre-cloud phase. My vision was to build a data platform that powers heterogeneous workload, like transactional workload, real-time batch and analytics. First, we hydrated the data from OLTP to our data lake on S3. We invested a lot in automation, automation of our data pipelines using Airflow and S3. This is like super important step that we took standardizing on S3 and Airflow. And for the SQL engine, we invested in Spark and Athena, and we've standardized on Parquet as a data format. The most significant step that we took was from the day one, we enhanced our security posture. We encrypted the data at file, field and disk level. Not only that, we encrypted the data at rest, in transit, and in flight. Today, we perform over 10 billion encryptions per day. For the first time, we empowered our customers to perform advanced analytics using our credit and fraud data. This process used to take 20 days. Now it takes less than two days. However, despite having an incredibly talented team, we face some challenges with data discovery and governance. While we are making some really good progress, recognizing that we need to improve our data velocity to streaming, we embarked on the next phase, building on the progress that we made on aggregate phase. This phase made a significant transformation. We put Bureau on the cloud. We established a unified metadata management using Glue that allowed us to maintain a comprehensive and a consistent understanding of our data assets. Most importantly, we developed an intelligent data lifecycle based on when and how the data is accessed to optimize the storage. to deliver fresher data and help our customers to make decisions faster. And now we process 1 billion events per day. And we are operating at scale, and we also created internal and external marketplaces for our customers to build products on our hundreds of petabytes of data. The next evolution is extending this catalog to AI and ML workload, allowing our customers to power the advanced analytics so that they can derive greater value from our data. This phase not only laid the foundation for our current capabilities, but also set the stage for advancements in data-driven innovations. Building upon the success of the curate phase, in this phase, we extended our data to be accessed through API and implemented federated data governance through data mesh architecture. We also established several KPIs. Observability is the key for operational excellence. To name a few, cost to build, cost to store, time to live, time to archive. By having a unified catalog of data assets and ML and AI models, we were able to innovate and deliver amazing product, Experian Assistant, in less than 90 days. Experian Assistant is embedded in our Ascent platform. This is an on-demand Gen A Assistant designed to accelerate your modeling lifecycle while ensuring responsible use of AI, privacy, transparency, and security. We are hard at work on AI governance and evolving our data strategy for LLMs and multimodal. This phase represents a significant leap in our data capabilities, ensuring that we'll continue to deliver cutting-edge solutions for our customers. There are multiple dimensions in our data strategy that we evolved over the last six years. Every transformation stage has presented exciting opportunities for us to innovate and bring the best products to the market. last six years, every transformation stage has presented exciting opportunities for us to innovate and bring the best products to the market. As we evolve from the pre-cloud to the AI phase, our speed to build products increased exponentially. Our data grew from 100x, but our cost grew only 10x. And we leveraged managed AWS services to accelerate the transformation. Our investment in automation helped us build diverse data set, enforcing governance and compliance has become much more easier. Thank you. Amazing what Experian is doing with the Curate data pattern and soon to come using Extend. So when you think about the capabilities that you use to create data products, one of the things that we are launching right here at reInvent is Amazon SageMaker. And Amazon SageMaker is going to make a huge difference to the curate data pattern because it's going to simplify the processing of data into data products. It's going to bring together data analytics and AI into one workbench so you can do everything from model development to training to data processing to SQL analytics, assisted by Q. This is going to be a game changer for you to build data products. It gives you unified access to your data, whether it's stored in an ASRI data lake or a Redshift data warehouse or it's a federated data source, and you have governance built in to help you with anything that you are doing from an enterprise auditing perspective. Now, once you do have data products, you can do amazing things. United Airlines Data Hub lets their data scientists and analysts self-serve on data requests, and it's helping drive a data-oriented culture into their organization. Those clean data products are also what you want to use with RAG for AI inference-based applications. The PGA Gulf Tour does it this way. They use Bedrock to retrieve and integrate both structured data, like scoring or stats, and unstructured data, like commentary or social media, to help fans closely follow players through real-time updates, betting, multilingual commentary, all powered by clean, curated data products. Now, many companies find great success with curated data products. Now, many companies find great success with curated data products. They publish these clean data sets into an internal or an external marketplace, and they let other parts of their business leverage them. And other companies like Experience that you just heard are working to take their product strategy a step further with an API layer. So, rather than directly accessing the curated data sets, the API acts as an intermediary. It helps you control and govern the use of underlying data, and it can also be used as a form of standardization, to standardize on the formats of data ingestion and retrieval, and that means your application owners all over your company aren't building and owning their own pipelines. They are just using your data API. Now, making this transition to an API-driven model is a heavy lift, but it can provide a ton of value back to your business. You have that centralized control over data governance, security, and access policies. You can monitor usage patterns. You can apply rate limiting. You can enforce compliance for your data regulations with a data API. And once you build that data API layer, you are taking ownership of the quality, the recency and the availability of your business's data and you have to make sure that you have the talent and the resources to do so. But it also means that if you are rolling out a data API, you can take advantage of all the innovations that you're hearing right now on AI and benefit every user of your API. For example, as part of building your data API, your engineers can use LLMs to create summaries to enrich metadata, to find examples of personally identifiable information, or build an agentic workflow that standardizes data into a common parquet scheme or other data processing tasks. And you know the easiest way to do this is with a bedrock agent. This is the next stage of evolution for data processing. AI agents leverage foundation models now, like LLMs, to reason about and take action on goals you set out for them in these agentic workflows. They utilize a variety of tools that connect to different data sources like knowledge bases and APIs, and they are incredibly predominant now in the evolving architectures for data stewards. Moody's is a leading financial provider of analysis tools, and it is working on a proof of concept. It uses Amazon Bedrock's multi-agent orchestration, which Matt announces in preview earlier, and they have a workflow that involves a cohort of specialized data agents that collaborate to synthesize data from multiple data sources, including the product APIs and customers-owned data sources. These Bedrock agents work with different APIs, like the data APIs for Orbis, which is one of their data providers, and it gives both data and context around company financials, corporate structures, patents, and more. So again, this is the next evolution of where data is going for data processing, and what that means is that you can take advantage of that. You can put it behind your data API, and the rest of your organization can leverage the investments that you've already made. Now, to tell us more about how to build a next-generation data API, I'd like to invite Ravi Langathan, EVP of the Salesforce Data Cloud to this stage. Ravi, welcome. Thank you, Milan. Thank you, Milan, for the wonderful introduction. At Salesforce, we have always been thinking about our customer journeys, and we have really embraced the whole shift towards data-driven applications. That's really why Fisher and Paykel, as an example, is able to connect the IoT signals from their refrigerator to the next field service agent. It's the same is true when you really think about a major airline like Air India merging with Vistara and bringing customers together. SiriusXM, as an example, knows when to reach me at the right moment with the right offer to move me forward when it comes to subscription re-enablement. This has been a foundational shift for us. That's one of the reasons why we are seeing amazing adoption. Just this past quarter, we had more than 2.3 quadrillion records that we processed. More important than that is the fact that we had 1.1 trillion activations. What are these activations? Just think about it. This is the touch point that our customers have had with their customers in interesting ways to really lift their business forward. Now, we did not come about this in vacuum. We really lean into our customers to try to understand what did they do. At the heart of it, we all understand as data practitioners, enterprise data is trapped and siloed in applications, thousands of them. In fact, in Salesforce ourselves, we have tons of them. There is, of course, the combination of lakes and warehouses that people have created, creating a chasm between where the information is versus where the action needs to be with the customers. The integration complexity is paramount to understand here through the lens of the different patterns. Some were doing it for AI, some were doing it for BI use cases, some were just building it for decisioning, so on and so forth. And everybody always thinks about it as a data problem at salesforce we had a fundamentally different learning as we dive deep into it first it was about metadata it's also about the semantic data it's about the security and last but not least how do you really action all of this somewhere so that's why we fundamentally reimagined the entire salesforce stack on Salesforce platform from the ground up with every piece of information that's relevant for the enterprise. Whether it is inside Salesforce or outside Salesforce, we wanted to activate all of that information. We wanted to create an inversion from data to insights which gets orchestrated into applications and now across to the agents. With AgentForce now you're able to bring together humans, agents, data and CRM all in the same flow. Let's see how this all comes together in action. So here I'm role-playing. I'm Ken, a bank customer and I have my credit cards. Now I'm able to have a nice conversation here to bring together all of my activities from my credit card details and transactions from a warehouse. Equally, I'm also able to understand my APRs and the fees. When was the last time you guys remember where it is that you signed up for? This is unstructured data in S3 that is coming seamlessly in context. Now, there is also interesting nuances around agents, that is guardrails. Not all agents are created equal, and security is paramount. Here, as Ken is going to go about and asking for a credit line increase, it's super important to understand the role of humans. Seamlessly, because the data is the same, the metadata is the same, the context is the same, we are now going to be able to transfer Ken to a human agent in the form of Alex. With this part of the Salesforce service console, Alex is able to see everything that has just happened in context just so far. She's able to re-eason over what is going on. Normally she would have swivel shared over ten applications to answer. Now, thanks to her own employee facing agent, she's have swivel-chaired over 10 applications to answer. Now, thanks to her own employee-facing agent, she's not only able to see the basic information about the requests and what Ken has in his view, she's able to see much more deeper. With the Mule API, she's able to see what's the credit score. She's able to see much more deeper activities that he has done, including the credit card activities of his spouse. Now, organizations also have a lot of policies that are documents that are sitting everywhere. This unstructured content needs to come to life in this flow where she's able to understand the policies of credit scores with how much limits are allowed. This is foundationally important to understand how this all comes together in an agentic reasoning way. Now she's asking a loaded question to say, what should I do with Ken? It's bringing the structured data from CRM, the structured data from the warehouse, the unstructured data from the S3. It's also bringing all of the APIs to bear to say what should be the next steps. But it's super easy with Salesforce, thanks to our flow, where you can now automate all of that behind the scenes to go and really manifest this new change for credit line increase. That is the power of humans, data, AI, CRM, all coming together in this agentic ecosystem. Now, you're all technologists. You might be wondering, how is this all coming together? First and foremost, our fundamental philosophy is open and extensible. What do we really mean by open and extensible? First, every piece of Salesforce information is now available in data cloud in 60 seconds or less in the form of a parquet with an iceberg. Now, we didn't stop there because we really needed to understand the semantics. As I said before, the source objects have to be serialized down. As an example, we know which specific folder in S3 has the order and which one has the order details. The semantic information is piped down, which is allowing all our applications to seamlessly access the information with the right model. As Milan alluded, this is the data API of Salesforce. We call it data as a service. That's now ubiquitous across our ecosystem for all our applications, sales, service marketing, everything to consume. That's why we were able to unlock the demo you just saw before. The second aspect of it is this also allowed us to reimagine the entirety of the enterprise data. Thanks to partnerships with Redshift as example, we are now able to take any piece of information in Redshift, any table, and any schema, and make that seamlessly available with zero copy and in the form of both the metadata and the data. So you can use it in Flow, Apex, or the agent itself. Equally, anything that is there in the Salesforce ecosystem can now be manifesting itself in Redshift for the next query execution for your analytics workload. This has been a foundational shift for us to embrace the entirety of the enterprise. Now it's great, we got all the data, what's next? We really need to think about how we think about processing that information. As data practitioners, we always think about data quality and governance in the sense of how do we structure our pipelines. But now more important than ever is how are you going to harmonize? There's going to be tons of other data, tons of customer information from all of these applications. So thanks to the Salesforce information model, we have 700 plus objects right in there, which is going to get you started in your journey of how you think about harmonization of your schemas. Once you harmonize the schema, there's still the nuance around customers, accounts, and entities that you need to do with regard to identity resolution before you can take action so that you know the right Ken and the right Ravi that you're working with. When we looked at unstructured information, we don't see it as any different. It's equally important for us to really give structure to the unstructured. That's one of the reasons why, in addition to the classic rag pattern underneath that you would expect that we would have, we would also be able to seamlessly marry that with the structured information. Now, as we move forward, it's super important to think about governance. It's great that we got all the data from all over the enterprise. It's great that we have harmonized it. What are we going to do? It's really important to really embrace and build a new system of governance. What we had to do was connect the RBAC system that you typically have in enterprise applications with a new generative AI-led policy system. This system is automatically scanning all your data, all your metadata, to really propose what should be the right classification, what should be the right policies, and how these policies should manifest themselves in the context of these different agents. The demonstration you saw where the employee agent had much more broader security primitives than the consumer agent is a reflection of the system. We were able to do both object level, field level, record level, and masking all in the demonstration as you saw before. This is foundational as we all embark on this agentic journey. Now, in many ways, we have blurred the lines by breaking open all the strapped data and all the silos, and data fluidity is the norm, not data center of gravity as we see it. Once you have that, it's super important to think about harmonization and unification. Then comes the whole notion around governance, tagging, classification, and security. But more important than that is how are you going to activate it? How are you going to use it everywhere in your organization? Whether it is for transactional use case in sales or service, or for analytical use case in Tableau, or for a prediction use case with the builder that you might have, with a prediction model or a propensity score, or the agentic use cases that we are looking at across the board. So thank you. Amazing, huh? That is a data API in action in Salesforce right now. Now, the great thing about these three patterns of aggregate, curate, and extend is that you can choose to deploy one of them across your whole organization, or you can apply different data patterns to different business units in your organization. It depends on what they need. The starting place in the cornerstone is always going to be Amazon S3, but if you think that the fraud analytics organization might always want access to raw data, they can use the aggregate pattern. And if you think that extend is a better approach for users of highly regulated data, you can give extend to your marketing team to use as their data pattern. Now, because you are building on AWS, you know we are constantly innovating, so that it is easier for you to use any of these patterns in your business. And that this week means S3 tables, S3 metadata, the next generation of Amazon SageMaker, Bedrock agent, multi-agent capabilities, and they're all there to help you apply the data pattern that you need. With AWS, it is easy to build data-driven decision making into the DNA of your company. You have choice, you have flexibility in how you apply your data patterns on AWS, and that is critical to how you evolve and you accelerate your business with data. Thank you very much, everyone, and have fun here at reInvent. | AWS re:Invent 2024 - Modern data patterns for modern data strategies (INV201) | 3,635 | AWS Events | 20241204 | CTOs and CDOs must adapt their data strategies to manage both the explosive growth of data and its mission-critical usage to drive application experiences that span personalized user experiences, petabyte-scale business analytics, and real-time AI inference in applications in every industry. To keep pace and optimize for the business, leaders must balance governance with flexibility and scale by separating data growth from data usage using emerging patterns like data semantic layering. In this talk, discover how AWS customers manage data growth and usage at exabyte scale, and explore AWS capabilities to evolve your data strategy for this rapidly changing landscape.
Learn more about AWS events: https://go.aws/3kss9CP
Subscribe:
More AWS videos: http://bit.ly/2O3zS75
More AWS events videos: http://bit.ly/316g9t4
ABOUT AWS
Amazon Web Services (AWS) hosts events, both online and in-person, bringing the cloud computing community together to connect, collaborate, and learn from AWS experts. AWS is the world’s most comprehensive and broadly adopted cloud platform, offering over 200 fully featured services from data centers globally. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—are using AWS to lower costs, become more agile, and innovate faster.
#AWSreInvent #AWSreInvent2024 #AWSEvents | 2024-12-04T21:30:57.154232 |
https://www.youtube.com/watch?v=Tskrymzbu7M | Welcome to Unified and Open Governance for Data and AI. My name is Zeeshan Papa. I'm a product manager here at Databricks, and I focus on Unity Catalog, data, and product governance. My name is Fabien Tukor from MasterCard, and I'm not here to talk to you guys about data governance. I'm here to talk to you guys about what it enables, namely getting access to data, finding data for the right use case, and using it for innovation. Thank you, Fabian. Fabian will be beginning his slides a little bit after I initially talk about Unity Catalog. So let's start with what is governance and why is it important. We've talked to lots of customers over the years and we believe that governance is critical to scale your data and your AI initiatives. In the age of generative AI and the age of large language models, the governance for things like data and AI has actually become even more paramount because we're now utilizing and things like data, ML ML models and we're seeing explosive growth in the sector but the foundational principles of governance things like accountability compliance quality transparency etc these things were all traditionally extremely important and essential for data management they now extend to artificial intelligence and AI and this is becoming a really top-level topic at most companies for example Forrester predicts that AI governance is now the next big board-level topic at most organizations. Now everybody sort of understands the importance of this, but governance of the data state is actually practically very difficult. Let's see how. In most organizations, you have a bunch of different data living in a bunch of different places. You have data in different data sources like data lakes and data warehouses and databases and catalogs, possibly multiple catalogs. You have all these different formats, things like iceberg and Delta Lake and Parquet. You also have fragmentation when it comes to things like the actual types of assets, so things like notebooks and files, machine learning models, dashboards, tables, views, et cetera. You have all these different diverse data and AI assets and they're across a variety of different heterogeneous tools. What you end up with is a bunch of different silos, silos across formats, silos across assets, silos across sources, et cetera. And this causes a whole bunch of challenges from a governance standpoint. For starters, you suffer generally from fragmented governance for data and AI. And this makes it hard to manage things such as access policies or auditing, monitoring, observability, lineage across all the individual silos that you might have in your organization. There's also a lack of open connectivity which makes it extremely challenging and difficult for consumers to be able to access and share data and AI across different tools, engines, and platforms in order to be able to collaborate and interoperate in a broad way. Finally, there's a lack of built-in intelligence and that really means the fancy way of saying that if I think about trying to discover data and I want to understand and get a domain-centric view of my data, it requires having and being able to connect information from all these different silos and putting them in one place. What really ends up happening is you end up having a few key technical experts that are sort of the tribal knowledge holders within your organization. And so without that intelligence inside your platform, the platforms tend to fall short of connecting things like business concepts with underlying data and models. And this event inevitably creates that bottleneck where these experts are needed. And these governance challenges have significant impact in terms of business value. So fragmented governance often leads to increased complexity in terms of compliance, security, and quality risks, as well as increased operational inefficiency. So businesses tend to struggle to maintain a unified view of their data and AI estates. Lack of open connectivity often leads to vendor lock-in, and this limits an organization's ability to switch vendors or integrate things like new technologies as the needs and the ecosystem sort of evolves. Poor inoperability also further hinders collaboration and the scaling of data and AI initiatives. This leads again to higher costs as the organization has to now leverage different tools or multiple tools, replicate things back and forth, creating more silos and more of the same problem. Furthermore, the lack of built-in intelligence actually limits your ability to have broad data and AI democratization across your organization and effective use of these platforms as well. So what this means is for the folks that are non-technical users, they're not able to find and understand the various data and data and AI assets that are in your organization. Ultimately, this slows down things like innovation and hampers the ability to make really effective and timely business decisions, which is, as you can understand, limiting your organization's ability to fully leverage the data and the AI that you have for an actual competitive advantage. Our point of view here at Databricks is that if you create an ideal governance solution from the ground up, that it needs three things. Unified governance across data and AI assets, and open formats so that organizations have complete visibility, monitoring, security, and compliance across all their data sets. Open connectivity so that teams can connect to any data source and can all their data sets. Open connectivity so that teams can connect to any data source and can access their data and AI assets from any tool, any engine, or any platform. Built-in intelligence that connects the dots across all these different diverse data and AI assets regardless of format, understanding things like semantics, understanding usage, understanding lineage, and how they are all connected in order to be able to provide that domain knowledge and those insights to all users across any type of workload and autonomously perform things such as platform optimizations, which also leverage that information. So there's a whole bunch of really cool things that we think about when we think about governance. It's not just access controls on data. It's the ability to find data. It's the ability to find data. It's the ability to manage data effectively across your enterprise. And it's the ability to use that data across a variety of different tools in and outside of the Databricks ecosystem. We took a pioneering approach here at Databricks with Unity Catalog by releasing the industry's only unified and open governance solution for all data and AI. This solution works across any open data format and any data and AI asset and provides comprehensive governance capabilities, including access management, lineage, discovery, observability, auditing, monitoring, et cetera, in a single system. But Unity is more than just a catalog. It is an enabler of data intelligence for your platform. Whereas traditional catalogs are just limited to things like access management, tabular data, metadata exposability, Unity is in fact open to any external data source and can be accessed from any external tool or engine. Now, let's take a quick look at what Unity Catalog actually provides in terms of these three foundational pillars that we talked about, starting with unified governance. First, you drive operational efficiencies with a single catalog to manage all of your organization's data and AI assets, such as files, tables, machine learning models, AI tools, business metrics, et cetera. This works across any open format, so for example, Delta Lake, Iceberg, Parquet, and more. You can enhance compliance with a simple singular interface to be able to manage access policies and auditing across your data and AI. And you can also improve your security posture with fine-grained access controls and attribute-based policies for data and AI to meet your unique security needs. With Unity Catalog, you also reduce the operational overhead of understanding data flow and dependencies with real-time lineage for data and AI. Ultimately, this helps you improve compliance by classifying and tagging sensitive data and AI assets, for example, such as PII, PCI, PHI, etc. We actually come out of the box with things like quality monitoring alert capabilities that help you ensure that consumers ultimately have trust in the quality of your data. This is of course a very important part of data management and data governance as a whole, is being able to signal to other consumers of your data that these in fact are trusted data sources. All these capabilities help you build this unified and compliant view of your data estate. One of the features that we talked about earlier this year at Summit is Unity Catalog Metrics. And this is coming out hopefully within the next quarter or two, don't quote me on that. But it gives us the ability to discover, govern, and query business metrics. So what this really means is building a semantic layer effectively above your data and being able to inform unity catalog through the creation and the management of these metric objects what the correct version of data is in your organization and this then obviously informs the consumer experience so when you go into you know genie or any workspace in in databricks you have the ability to quickly find and understand the correct organizational view of data this is accessible from Databricks, you have the ability to quickly find and understand the correct organizational view of data. This is accessible from Databricks as well as external tools, and it also works with third-party metric providers as well, so your cubes and at scales, et cetera. Switching into open connectivity. Unity Catalog empowers your users with open source access, with open access and collaboration in your organization while actually avoiding vendor lock-in. First, it provides rich federation capabilities that allow you to actually connect to any database. So for example, Postgres, Azure SQL, Oracle, Salesforce, Teradata, or data warehouses like Snowflake, GCP BigQuery, Azure Synapses, other catalogs such as Glue, Hive Metastore, Iceberg catalogs, et cetera. You can keep the data where it is without actually having to copy it over. And you can still get full governance capabilities that we've discussed earlier. So for example, things like lineage or tagging, fine-grained access controls, auditing, search and discovery all in one place. Now you can access and join that data from different sources and run queries with performance optimizations. Now this actually helps mitigate the cost overheads you face with other solutions where you typically have to replicate or migrate data from one system to another in order to kind of get past the proprietary storage and system A and bring it into Databricks, etc. so you can work with it in an open format. This helps you get past all of that without having to worry about it. Secondly, Unity Catalog also provides open APIs. So we've actually just released new features in this arena as well. So teams can actually read and write data that's managed by Unity Catalog from any client or engine or platform across Delta and iceberg ecosystems. So this gives you the flexibility to choose the right tool for the right job and the tool of your choice, more importantly, with your data, and provide that broader interoperability across the heterogeneous platforms that might actually exist in your organization. Third, Unity Catalog enables you to be able to share data and AI assets across clouds, regions, and platforms, while still keeping the data in your own storage and without actually needing to copy it across clouds, regions, or wherever you're sending it to. This ultimately reduces cost because copying data is expensive, right? As in other solutions, you typically have to copy data from one location to another, out of a proprietary storage format into an open format in order to be able to make it accessible to customers, partners, vendors, et cetera. And your consumers, in many cases, also have to be on the same platform. So again, we view those as sort of a lock-in type system and that's something we're 100% trying to avoid building here at Databricks. Finally, Unity Catalog is open source. So if you're building a future proof governance model that mitigates the whole catalog lock-in concern that many organizations tend to have, building on top of Unity Catalog makes a whole lot of sense. But data intelligence isn't just limited to the data and workloads that you run directly on Databricks. You can actually unify your governance and your discovery by federating all of your data platforms with Unity Catalog. So also with a Delta sharing protocol, you can securely share that data with any system that supports Delta Lake without ETL, which is what I just mentioned a minute ago. So talking about built-in intelligence, Unity comes with an integrated AI that helps your teams improve productivity and build a competitive advantage. First, it actually provides AI-powered tagging and AI-generated comments at the column level and at the table level, or the asset level, by understanding the data and leveraging how that data is connected. This helps data owners with things like, for example, efficient documentation. And it also ultimately helps your consumers build a better understanding of data and drive clarity. Second, all of this AI-powered documentation is combined with our lineage information, which powers a context-aware search that helps users search more efficiently and be more productive. Users get out-of-the-box data insights, such as popularity, usage, frequently used queries, et cetera, to get started faster. Third, Unity Catalog's metadata, documentation, lineage, usage insights, and so on, also power the intelligent assistant that's inside the product that understands the unique context of your data and helps your users ask questions, generate, explain, and fix code using completely natural language. This provides, of course, a massive productivity boost for any user of the platform and helps a moderately SQL literate or a minorly SQL literate user suddenly become incredibly proficient in terms of being able to access data. Finally, Unity Catalog also provides AI-powered autonomous performance optimizations to keep costs low and query performance high. And we've seen this in talking to customers. We've gone and interviewed hundreds of customers. Our customers have been experiencing significant productivity gains with Unity Catalog. On average, they actually report a 57% improvement in overall productivity and a 49% faster delivery of their use cases. These productivity gains span across all governance capabilities, which actually showcases the value that Unity Catalog brings to streamlining data and AI workflows. It's pretty exciting to see what we're able to do for the organizations that use our product. There's about 10,000 customers that use Unity Catalog today across a variety of different industries, and they're all benefiting from this vision that we have for unified governance. Unity Catalog also works with your existing data catalogs and governance solutions, as I've mentioned. So you can continue to leverage whatever existing investments that you have and build future-proof governance models without necessarily having to incur expensive migration costs. All right. I think we're going to flip to a demo now real quick. So if you will bear with me, I'm going to hit the magic button. Hopefully this works. Voila, it works. That's amazing. All right. So what you're looking at here is a Databricks workspace. So I'm going to quickly go into full screen mode here. All right. So this is a Databricks workspace. If I look over here on the left-hand side, I've got a little icon here that says Catalog. If I click that, I can now open up the Catalog Explorer. This is effectively the main browsing interface of Unity Catalog. Now, I've got I can now open up the Catalog Explorer. This is effectively the main browsing interface of Unity Catalog. Now, I've got a whole lot of data in this particular workspace. But I did load up and connect a Postgres database. So I did that by actually going into the Connections tab here, clicking there. And there's a Postgres database that I configured. Don't look at my passwords or my URLs, but you get the picture. And what I did was I was able to use that particular connection that I defined to create a catalog inside Unity that I could actually run in query. So I'm going to quickly refresh this and connect my warehouse. Select the right compute. OK. Select my warehouse. Select the right compute. So this is my... Bear with me for one second. This particular warehouse is a little slow. Let's try this one. All right. There's try this one. All right, there we go. OK, so I went ahead and loaded a few different tables in here and I generated some data, specifically just so we could talk about it. And I went ahead and generated a customer sample. Now, kind of quickly looking at this, I have no idea what it is because I just connected this random database into Databricks. And as you can see here, it pops up an AI suggested description that I can accept. I can also do the same thing by clicking into a column here. So if I say, for example, couldn't guess what the address column was, I could actually generate a comment for it, right? And that will tell me what the address column is. And it's actually surprisingly really very good. We use it internally all the time to tag our own data sets. We're really happy with the way that our models give out really meaningful information. Once all that information is in there, I can start searching for things in the platform. So as a consumer, being able to search for that information is super useful. So just searching for financial fraud analytics brings up all the transaction tables, accounts, customer tables, and so on and so forth. So clicking back into that customer's table, if I wanted to go and browse the sample data, as a consumer of this, as I mentioned earlier, I can ask natural language queries of my data. So let's pull up our genie here. I click the Ask Genie button right there. And let's say, how many customers do we have? Let's give it a second to spin. All right, we've got 12 customers. We're not doing that great as a bank. We should probably get a few more customers, right? But yeah, you get the idea, right? The point is that a person who doesn't necessarily have the ability, like a non-technical user or a user who's perhaps only literate in a small amount of SQL, can quickly open up our platform, discover things, regardless of where that data lives, whether it's in a Delta Lake, Parquet, Iceberg, in a Postgres database, as I'm showcasing here, and quickly figure out and make use and sense of that data, all of being able to take advantage of the access control, monitoring, auditing capabilities that I mentioned earlier. So it's a pretty cool product. I don't want to spend much more time on it, as we have the lovely Fabian here to come and tell you about MasterCard's journey. So I'm going to switch back over to him. And thank you. Awesome. Thank you. Thank you, sir. Round of applause for Zeeshan. And I know it's not Q&A yet, but I wanted to ask you, any Cyber Monday deals on Databricks? I would be buying some if there was, yes. But unfortunately, there's not. So would I. So before we start, I wanted to ask you to remember when you started at the company that you're currently working at. Do you remember what it felt like to be brand new at that company? And you didn't know anyone, probably. You didn't know your company's products. You certainly didn't know the company's data. Now imagine if you had started and your role was to innovate with data. Maybe you're an analyst. Maybe you're a data scientist over there. Maybe you're a product manager down there. And your role is to use the data to solve problems. How do you find the right data? Are you confident that you could find the right data, the right quality? How do you gain access to the right data and in a compliant way, in a responsible way? And then once you've done that, you're not done. You still need to figure out, how do you use it? Can you do AI on it? Can you train an LM? So all of these things are what my team thinks about to enable for MasterCard and our data users. First, let me just give you a quick reminder of who MasterCard is. I'm sure you guys know, but there are some myth that we try to dispel. Number one is we don't issue cards. Only banks do that. That means we don't set your AP don't issue cards. Only banks do that. And that means we don't set your APRs. Banks do that. But what we really are is a technology company. We are a network company that connects you and your cards, your phones, your watches, whatever you want to pay with. We connect that with the merchants you want to do commerce we connect that with the merchants you want to do commerce with and with the banks to ensure commerce that's what we do so we're not really a financial services company we're a technology company the data and technology company and when we think about picking a partner, picking a technology partner, the first thing that we think about is that they must scale. Because everything we do at MasterCard involves, well, a lot of scale. 220 countries, 3.5 billion cards and credentials, 150 million merchants, 200 billion card transactions, I think this year, and those are big numbers. By the way, 200 billion transactions is a little over 6,000 per second. So still hard to figure out, what does that mean. So if we think that we're 200 people in this room and we clap at three claps per second, then you guys would have to clap for 10 seconds to reach 6,000. So maybe we can do that at the end when we're done. You guys can clap for 10 seconds. And that's a long time to clap, by the way. So now you might be thinking, well, why is MasterCard so interested in innovating with data, in using AI? Like, why does it matter? Well, it matters because we use it to prevent fraud. And do you guys worry about fraud? We just went through Black Friday. I'm sure a lot of you have done some online shopping. Do you worry about money leaving your account without your permission? We don't. We don't worry about that. Why? Because, well, we're not liable for this yeah if we don't authorize the transaction then we'll get reimbursed that's called actually zero liability protection it's a benefit from using your MasterCard but the other reason is because we really work hard to prevent fraud and to stop it before it spreads. And if we didn't do that, then it would cost a lot of money to banks and merchants, and they would eventually pass it down to you guys. So obviously nobody wants that. So it's really important that we stay at the forefront of data and AI so we can prevent fraud and we do that by using a large diverse and complex data set so we can understand if the patterns that are happening on our network are normal or if they're out of whack and we need to stop something. Now when I mentioned scale for example our data warehouses 15 petabytes of data so it's quite significant and we have been at this for decades. I recently found an article in the Wall Street Journal and I think the headline was MasterCard stops three different ATM attacks using machine learning. This article was from 2016. So we've been at it for a while and we need to keep working at it and stay ahead of fraud story, you could say. So when I think about data governance and when I think about the impact our team can have, when I think about the impact our partnership with Databricks can have, that's what's at stake. We need to be able to continue to innovate rapidly, prevent fraud. But in the same breath, what we also need to be able to do is use data responsibly. And as a privacy-minded European, sorry, that was mean. I'm sure you guys are privacy-minded too in the US. I was really proud when in 2020 we announced our data and responsibility principles and at the heart of it it means that the data we have is not our data it's your data right as individuals you own your data and you control it, you should benefit from the use of it, for example with reduced fraud, and we have to protect it and we have to use it responsibly. So again, that's what we try to do, enable rapid cutting edge innovation with data and AI, but do it responsibly so how do we do that so again like I said at the beginning we try and think about helping people find data for use case helping people gaining access to it helping people get started on how to use it and how to use AI. Right? Sounds simple. So data discovery, think about what that means. And think about the scenario where you just started at your company. And you're a data analyst or a data scientist. And you have to find data for use case. Well, looking for data is not unlike looking for a hotel in Las Vegas when you want to attend the conference. At first, you think, how hard can this be? And then soon enough, you're browsing through hundreds of options. You give up, and you just ask your colleagues where they're staying. A lot of that is happening with data as well. You look for data and in the end, it's just way easier to just ask somebody who knows. But that's not very efficient and also you don't really reuse the knowledge across the company. So at MasterCard, when I think about helping people find the right data, I think about what information I should give them. I should give them descriptions, definitions, okay, that's a start, but also quality metrics. Because you want to know, is this a good hotel that I'm staying at, or is this a one-star review or kind of a hotel? You want to show them lineage. Where is the data coming from? You want to get them to understand the data restrictions associated with data. So at MasterCard, a lot of this information we have it is just in different systems. And what's pretty exciting with Databricks is that it's all under one roof. And a lot of it is automated. So for example, lineage is calculated automatically. And if you've tried to do lineage in your firm on-prem that's very hard to do you have to talk to a lot of people and they don't want to talk to you about lineage so the fact that it happens automatically is is really cool so let's imagine you found data for use case now Now you just have to, you know, gain access to it. Again, how hard can this be? And here the analogy can be, well, it's trying to go past security at the airport. How hard can this be? Well, it kind of depends on what the rules are. The problem for us is that the rules around how to use data, how to use AI, are still being defined, right? You have all of these countries trying to think about what's right for them. They don't all think about these problems in the same way. And so you end up with a patchwork. And for us at MasterCloud, because we're global, well, it's hard to navigate. So we spend a lot of time thinking about it and designing for it. And because we don't want to get you jammed up with long security lines, where you have to get your data business case reviewed by privacy, but there's like 50 other business case right ahead of you. We have to think about creating TSA pre-check for your data, right? So you get automation and you don't have to wait. Now if we imagine, so sorry, the cool thing about Databricks is that we can automate these rules in the Unity catalog and the other cool thing is that it's a platform where the usage of data and the governance of data and the access control around data, everything happens in that same platform. Lineage happens in that same platform. Whereas for us, for the longest of time, our catalogs sat as a separate platform from the data warehouse, from the tools that people were using to use data. So with Databricks, that's really helping us also because we can see the transparency in how people are using data, and we can follow the thread. And I can tell you that our privacy attorneys are really excited about that. They're really excited about being able to ask, how is this person using data? I gave them access two months ago. What have they done since? We could pull that thread. It's easy to do. It's auditable. OK, last but not least. So now you have access to data. And all you have to do is solve problems with it. Well, good luck because it's not always easy to figure out what query am I supposed to write, how am I supposed to join these two data sets. Yeah, some of our tables have 500 fields. Okay, so again documentation, but it's hard to produce. People don't want to do it. I don't blame them. So what we try and think about when we think about data governance is we think about showing aggregating information in an automated way. For example, if you can see all of the queries that have been placed on a given data set you're interested in, that helps you. That gets you started. So you can see that in Databricks. You can see the queries that have been placed. And that gives you a head start. The demo that Zeeshan showed, that was pretty cool. I really like this. The genie, the AIBI where you just talk in English to the data, gives you information back. That's huge. That's huge. We have over 1,000 data scientists at MasterCard that are essentially playing that role between the sales team that talk to customers and the data. But what we found too is when we tried using it, well, it doesn't really work right away. You ask questions about your data, and you don't really get the right answers. I'm sure you guys have played with Judge GPT and other tools like that. So you can imagine what happens. But then what we also learn is that if you catalog your data, if you provide the queries that have been placed against this data set, then it figures it out a lot better. The last thing I'll talk to you about is that's very exciting too, is the fact that you don't have a separate catalog for machine learning. It's the same catalog. So you have your data catalog, your machine learning catalog. It's one catalog. I was listening to Ali, the CEO of Databricks, explaining that they had started building two different catalogs. And then they realized, well, it's actually the same functionality. So they merged both. And I think it's brilliant. Because when you have to comply with AI regulations, you have to explain how AI arrived at a decision. Much easier if you can see the thread from the AI model to the data, that lineage. So those are some of the things we're pretty excited about with Databricks. And although we're at the beginning of our journey, already I can offer some lessons that we've learned. OK, prioritize, it sounds like, of course, why would you not prioritize? So you have to prioritize based on business value, based on executive support. And the unintended consequence is that when you do that, well, it becomes a lot easier to ask people to do data management and data cataloging, these tasks that they really don't like doing. It's kind of like cleaning your garage. Nobody wants to do data management and data cataloging, these tasks that they really don't like doing. It's kind of like cleaning your garage. Nobody wants to do it. Everybody loves it when it's done, right? Same thing with data cataloging. So prioritize because it'll help you. Make friends because it takes a village. We work really closely with the teams that are moving data to cloud, with the teams that the privacy team, with the data users, with security. All of these teams must work together if we want to enable the kind of user experience that I talked about, where it's easier to find data, it's easier to gain access to data, it's easier to use data. And's easier to gain access to data, it's easier to use data. And last but not least, learn by doing. And what's really cool about Databricks Unity is that I saw the demo that Zeeshan gave. It wasn't you giving the demo, but I thought, well, this is really cool. We should try it. I talked to somebody on my team who had access. Within 30 minutes, she was trying it on a dataset. So it's actually much better to learn by doing and to see what doesn't work, to see what works. And yeah, that's where we are. So thank you very much. And I look forward to hearing about your journeys.. | AWS re:Invent 2024 - Unified and open governance for data and AI (AIM256) | 2,168 | AWS Events | 20241204 | Fragmented governance of data and AI can stifle innovation. This session highlights how Databricks Unity Catalog offers unified discovery and governance for tables, files, ML models, and AI tools while supporting any open table format like Apache Iceberg and Delta. It simplifies access management, facilitates data and AI quality monitoring, and provides column-level lineage tracking. Learn how Unity Catalog empowers you to own your data, connect to any data source, and access it from any tool or compute engine through open APIs—eliminating vendor lock-in. Discover how Mastercard effectively uses Unity Catalog to govern their data and AI assets securely. This presentation is brought to you by Databricks, an AWS Partner.
Learn more:
AWS re:Invent: https://go.aws/reinvent.
More AWS events: https://go.aws/3kss9CP
Subscribe:
More AWS videos: http://bit.ly/2O3zS75
More AWS events videos: http://bit.ly/316g9t4
About AWS:
Amazon Web Services (AWS) hosts events, both online and in-person, bringing the cloud computing community together to connect, collaborate, and learn from AWS experts. AWS is the world's most comprehensive and broadly adopted cloud platform, offering over 200 fully featured services from data centers globally. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—are using AWS to lower costs, become more agile, and innovate faster.
#AWSreInvent #AWSreInvent2024 | 2024-12-04T21:37:04.396306 |
https://www.youtube.com/watch?v=wSxZ7yFbbas | Hey, this is Lance from Langchain. We recently asked 1,300 professionals in the AI industry what they think agents are best suited to solve. And the highest ranked task was research and summarization, as you can see here in this chart. Now, I've spent a lot of time working on different types of report writing and research agents. And today I want to build one from scratch and show a bunch of lessons I've learned along the way. Now before I get into all the code, I want to show you the output of the agent just to motivate this. So this is me just simply asking the agent, write me a report about different agent frameworks. LandGraph, CrewAI, OpenAI, Swarm, LOMNX, Index Workflows. This is the report I get out. So the agent does all this for me. You can see it kind of partitions each framework into its own section gives nicely formatted bullets this is all markdown file by the way provides sources for me same for crew sources open ai swarm laminx workflows at the end nice distillation clear table this is all done for me, takes less than a minute, and all this web research and report generation and writing is automated by the agent. Here's some other report examples. I asked about how did Replit recently use Langsmith and Landgraf to build their agent, and all this research is done for me. It gives kind of a nicely broken down overview of what was done the various sources details of implementation and again kind of a nice summary section on key technical takeaways here's another example where i asked for kind of news and recent events across various ai observability offerings we can see we get again some nice sources here. It goes into Langsmith, Braintrust, Recent and News, it talks about their Series A. Datadog, it talks about kind of their work on AI observability here, and Arise Phoenix, and some nice kind of summary comments here. So you can see that what we're going to build here has a few advantages. All I need to do is pass in an input topic. It can produce reports of different types. You can see the structure of the reports varies a little bit in terms of the section number, in terms of how the conclusion is formatted, whether it uses tables. We can all customize this. And it's very flexible. I can produce reports on many different types of topics in many different user-defined formats. Now let me talk about one of the motivations for kind of agentic report writing. So there's been a lot of emphasis on RAG systems, which are extremely useful, and actually a lot of times can be kind of core components under the hood for report writing. But the point is with RAG, you're really just saving time with finding answers, okay. Now, usually answers in service of a bigger goal, like an actual decision you're trying to make as an organization or as an individual, reports kind of get you closer to decision making by presenting information in a structured, well-thought-out way. So the leverage you get from high-quality reports is oftentimes a lot greater than what you get just from a Q&A or RAG system. RAG system only gives you answers. Reports kind of present information in such a way that actually it can accelerate a decision. Now I want to spend just a quick minute or two on kind of the state of the art. How to think about report writing, what's been done previously, and what do we want to kind of build and extend? So you can think of report writing in three phases. One is planning. There's been a lot of interesting works around report writing that first start with some kind of planning phase. In particular, GPD researcher, very nice work from folks at Tavoli, uses what you might think of as a plan and solve approach where it looks at the input topic and it basically breaks that topic out into a set of subtopics which each can be individually researched. Okay, that's kind of idea one. Now idea two is the storm paper in particular starts with a topic and builds a skeletal outline of the final report up front which has some advantage that we'll talk about a bit later but this kind of outline generation up front follows in that case a wikipedia style structure because storm is really meant to produce wikipedia entries so it follows kind of a prescribed structure, but the point is the outline is actually generated up front. So the two big ideas in planning are one, take the input topic from the user, fan it out into subtopics that will independently be researched to kind of flesh out the overall theme that the user wants researched. That's idea one. Idea two is kind of generating an outline up front of the OVAR research report. Now phase two, once you have a set of subtopics and or an outline of your final report, then how do you actually conduct the research? Now there's at least three different strategies that I've seen for doing the research itself. So one is just very simply taking the subtopics and creating search queries from each subtopic and then performing a query either on a web or RAG. Okay. So that's just kind of parallelized search and retrieval from some source, be it RAG, be it the web. Now another idea that takes it a bit one step further is what we see in the Olamour Researcher work which basically does retrieval and then kind of does a grading on the retrieved in this case it's web resources so it kind of looks at the retrieved web pages and determines does this contain information necessary to address the input question if not rewrite the question try again so it kind of has this iterative search and evaluation process. Now the most sophisticated approach that we've seen here is Storm actually does a multi-term interview between what they call an analyst persona and an expert, which is effectively your search engine. It could be RAG, it could be Wikipedia, it could be some other search service you're using. But the point is you basically have an AI persona asking questions, receiving answers from the search service, and continuing to ask questions until they're satisfied. So that's kind of the most sophisticated. The simplest case is just query retrieval, like we see with Gpd researcher there's also kind of iterative search so you do a search retrieve evaluate potentially search again and the most of us get is an explicit multi-turn interview between like an analyst and your sources now finally writing so you've done research using one of these approaches how do you take on the writing task? A few different ideas here. One of the more popular is just sequential section writing. So write each section individually. Storm parallelizes some of the sections, but uses kind of an iterative process where it has the outline generated at the beginning. It will generate sections all in parallel, add them to the outline, and then it'll do kind of a refinement to fill in, for example, introduction and conclusion. Now another process worth noting here is you can also take all the research you've done and try to write a port in a single shot. Basically, you have all these sources and Lennon and LM just write the entire report all at once. So LamaIndex shows this in some interesting examples they've shared recently. So that's another way to think about this. So one is kind of writing your report section sequentially, and the other is on the other kind of full extreme is kind of a single shot process of writing your entire report at once. Now let me very briefly cover what we're going to do here. And I'm going to give some general lessons I've learned from working on a lot of different report writing agents over the last few months, basically. We're going to break it up into three phases. Planning, research, writing. Now here's the first insight I've kind of learned. It's nice to do a pre-planning phase to lay out the structure of your report ahead of time for a bunch of different reasons. One is it makes it very flexible. You can produce many different styles of reports this way. So, for example, let's say I want a report that always has five sections, an intro and a conclusion, and I always want a summary table in the conclusion. This particular agent implementation allows you to do that. What if I want just a single section, kind of like a short form report, no introduction? You can also do that. So what's really nice is this planning phase allows you to take both a topic from a user and a description of the desired report structure, and it'll build the skeleton of the report for you up front. So it makes it very flexible. Two, it allows you to set up parallelized research and writing across different report sections. You can debug report planning in isolation isolation so you can kind of look at the plan really before the research is done. If you don't like it, you can modify the prompts and it's very useful for debugging as I'll show in a little bit. The other cool thing is you can actually decide what sections of your report actually need research up front. So let me give you an example. Let's say I have a report that I want an introduction and a conclusion. Well, those sections don't actually need research. Only the main body of the report actually needs research. And so in this planning phase, I can kind of create all the sections of the report and also earmark these need research, these don't. So it's a nice way that you can kind of do all your pre-planning up front, including and planning for what sections report actually need research. Now let's talk about research itself. So what I've found actually here is just simple parallelized search and retrieval is a very nice way to go. When I say parallelized, what do I mean? I mean in particular, multi-query generation is really good. So basically taking an input question for a given section, fanning that out into a bunch of sub-questions to help flesh out the theme of that section, that's a good thing because I can all be parallelized. It's very easy to kind of have an LLM call, produce a set of sub-queries, search for all those in parallel, get all those sources back all in parallel, and then do the section writing in parallel, which I'll talk about in a little bit. I've done quite a bit of work on more sophisticated style of kind of question answer. The storm work for example shows a very interesting example of kind of again like iterative question answer ask another question produce another answer answer, style of research. Basically, I found these approaches are really token costly and really can blow up latency. So I like to keep it as simple as possible. Fan out your search queries, do parallelized query generation, parallelized query retrieval, and then use all those sources for your writing. Now let's talk about writing a little bit. The lesson I've kind of learned here is it is often better to write each section individually. That's because you can get much better section quality. If you make an LLM write the entire report in one shot, I've seen that quality can really degrade. So I do like writing sections individually, but to save latency, I like parallelizing all sections that can be parallelized. So really all the sections that kind of have their own independent research, I parallelize writing of them. You can kind of see this in the diagram down here. So basically, this is the toy report structure is intro, conclusion, two sections. Those sections get researched in parallel. They get written in parallel because they're kind of standalone, right? They have their own topics. They have their own research. Now, what I've also found, though, is you should have a sequential final phase of writing on any sections that don't require research, for example, intro and conclusion, because this can utilize everything you've already written to tie together the themes really nicely. So that's another little trick I found in writing. Write sections individually, but then do it sequentially such that your kind of main body sections that require research are all written first in parallel and then you write any final sections in parallel as well for example introduction conclusion that distill everything you've learned from your research that's kind of what i've found works really nicely now let's build this from scratch i'm in an empty notebook here here's the outline of what we're going to do. And now let's start with the planning. So this planning phase is going to have two inputs. It's going to have a topic and a general outline from the user. And it's going to output a set of report sections that adheres to the topic and the outline that the user provides. Now step one, first I'm going to define a general data model for a section of the report. That's all this is. So it's a pedantic model called section. It has a name, the name for the section, description, an overview of the main topics covered in the section, whether or not the section needs research, you'll see why it's interesting later, and this is the content of the section which we're going to fill out later. And this is just a list of sections. Now why do we do this? We define a data model because this can be passed to an LLM and the LLM can know to produce output that adheres to this model. That's why this is really cool. Now, I've just added another data model for search queries, which you're going to be using throughout this entire process. And I've added a type dict, which is basically a class for overall report state. Now, what's this all about? We're going to be using LandGraph to build this agent. LandGraph uses state to internally manage everything we want to track over the lifetime of the agent. We don't have to worry about this all for now. I'm just setting it up so we have it. Now, here's where we're going to get into some of the heavy lifting and interesting aspects of this. So for this planning, we're going to use two different LLM calls, which require two different prompts. Now, the first is a little bit subtle, and I want to explain it, because I kind of came at this through a bunch of trial and error over the last weeks and months. Basically, in report planning, it is beneficial to have a little bit of external information in the form of web search to help with planning the report. So this first prompt just explains that your technical writer helping to plan a report. Here's the topic from the user. Here's the general report structure, again from the user. Generate some number of queries to help gather information for planning the report sections. Now the second point is this is really the overall report planner instructions. So this is you're a writer helping to plan a report. Your goal is to generate an outline of the report. You're going to generate a set of sections. Here's a topic, here's organization. Here is any information gathered from web search related to this topic so now we can see we're going to set a function here generate report plan it takes in the state which we defined previously we extract some keys from the state okay now we go ahead and first generate some queries to help us plan the report. We perform web search. We do plan format the sources, results from web search. And then we format the report planner instructions with our topic, the report structure from the user, and everything we got from Websearch. Then we generate the sections of the port. I'm using Claude here. We return that to our state as the report sections. I can try out this planning phase in isolation. Let's give it a shot. So let's say here's a structure. I can pass in a pretty detailed structure that I want. I want a report focused on comparative analysis. It should include an intro. I want main body sections each dedicated to an offering being compared. I can give a whole bunch of preferences here. I want a conclusion with a comparison table. And here's the input topic. Give an overview of the capabilities of different AI agent frameworks, focus on line graph crew, open AI swarm, all index workflows. Now we're going to go ahead and run. Now you'll see I'm actually using the Tavoli search API, which is pretty nice for a few different reasons. One is that it automatically retrieves the raw sources for you, so you don't have to independently scrape pages. It's free up to a pretty high usage limit and the iPad just works really well. So I've used it for a bunch of different things I find it be quite nice. Now what's cool is it actually has general search and also news search which is geared for current events which is can be quite useful for certain types of reports. I'm defaulting to general search, and this parameter days is not relevant for general search, only for news. Now given this topic and this structure, let's just test that function. So here we go. Now we can see these are the sections that are reported automatically generates them for us. An introduction, no research needed, description, overview, cool. One section on LandGraph, one on CrewAI, one on Swarm, one on LLAMA workflows. Conclusion comparison table. Very nice. This is all auto-generated and it's guided by this outline we provide. You can do anything you want here for the outline. It'll automatically and dynamically change the report formatting for your guidelines. So you'll see here in this outline we've laid out, one, two, three, four of the sections require research. So now we have to build that out. How would actually perform web research and write those sections? Let's do that now. Now I'm going to create a new state called section state. You're going to see this is something very useful that we're going to do in LandGraph. I'm going to set up each section to be its own independent state and subgraph, such that the research and writing of each section can all be done by what you might call like a subagent, all in parallel. I'll show you exactly how to do that right now. So here's the code that I'm going to use to write each section of the report and conduct research for each section of the report. Let's go ahead and just look at it quickly. So you can see right here I can compile and show the graph I'm going to use to do this. So what's going to happen is for each section I'm going to generate some search queries about the topic of that section. I'll search the web and from the sources I gather I'm going to go ahead and write that section. Okay so that web and from the sources i gather i'm gonna go ahead and write that section okay so that's the scope of work we're going to do here now what's kind of nice is that in these section objects we can see that we get a description of what the section is supposed to be about right so we can use this information when we a generate some search queries to populate the information in that section and then b do the final writing of that section so this outline you can see is very important it kind of sets the entire scope of work downstream for us now let's look at the prompts we use here so first i'm going to have this query writer prompt Now here the goal is to generate some search queries that will gather comprehensive information needed to write a report section. Here's a section topic which we get from the section description. Now this is all very customizable. These are some things I've found to be useful. Generating some number of queries, ensure they cover different aspects. You know, you want them to be kind of unique and not redundant, right? That's the main idea here. Now, section writing, this is where you can also be extremely, you know, customized in your instruction. This is kind of the way I'd like to set it up. So you're an expert writer. Here's a section topic. I give it a bunch of writing guidelines, technical accuracy, length and style, a bunch of points on structure. Now, look, here's really what I did. I went back and forth with Claude a lot to write these prompts. So really, I use Claude to help tune these prompts from a lot of iteration basically writing a section feeding the section back to Claude saying hey I like this I don't like this having Claude update the prompt and so forth so basically this is a very kind of personal process and really allows for a lot of prompt engineering and optimization this This is just what I arrived on, but don't limit yourself to this. You can modify this in any way you want is the point. The real big ideas here is I'm going to plumb in the context that I got from WebSearch. I'm going to give it here's a section topic, and I give it a bunch of guidelines that I want for the style of the section. That's really it. So the only functions I need in my graph, we kind of saw them below when we compiled the graph, basically generate queries. So again, like, you know, I have some number of search queries I want to actually generate. Those are that sort of state. Here's information about the section. I pass some of that into the prompt. I get the queries. I write those back out. Easy enough. Then I have this search web, where basically, in this particular case, I call the Tavli API again. Note that I'm using async, so basically that's going to do all these web searches in parallel for me, which is just faster. And then the section writing. It's really all there is to it. This is the source information I got from web search. It was written to state right here. I just get that from state. I get the section from state. Go ahead and format the prompt. Go ahead and kick it off. And again, I'm using Claw 3.5 for my writing. Now, this is, again, as a preference. You can try different models. I found Claw 3.5 to be very, very strong at writing. And that's why I kind of prefer using it for this style of longer form report writing. And there I get my section. Now what's also nice about this is I can test section writing in isolation. So let's test on one of the sections that we defined previously. So let's go ahead and actually just look at what section we want to work. So let's write a section on line graph. The description is detailed examination of line graph, core features, architecture, implementation, and so forth. Cool. So here's a report section. We can see these are all things that we kind of asked for. I wanted kind of a bold sentence to start. I added that to my instructions. Really, what's going on here? LandGraph enables creation of stateful multi-applications with LLMs as graphs. Yeah, that's you know about right it's open source yes yes enable cycles cool nice now another thing i i i asked for is the sections to have at least one structural element in this particular case like a list just to kind of break it up a little bit so includes that which is pretty nice persistence complex workflows integration with Langsmith for observability pretty nice cool example GBT newspaper project that's actually mentioned in one of the sources so right here pretty neat and kind of how it differs from other frameworks so and again a couple sources so look not bad you can customize this in any way you want with the prompts above but pretty nice it gets us nicely formatted sources a nice overview of what line graph is and kind of formatting that we asked for so that's pretty cool so now we have nearly everything we need we can do planning we can generate individual sections of a report and do research now the final things we want to do are we want to parallelize that research and writing for all sections that need it and then we want to do any final kind of writing of sections that didn't actually require research but are like introductory or concluding so for that let me just compile it so we can look at it this is what it's going to look like in the end so we're going to take what we just did generate queries search web write section this is now a sub graph as zone state, as we saw a section state. What we're going to do is we're going to use a very nice trick in line graph called the Send API, to basically parallelize writing and research for all sections of the port in parallel. The Send API we use in this function right here, initiate section writing. So let's look at what's going on here. The sections are created in our planning phase, which we talked about initially. Okay. All we need to do is iterate through those sections and see this send thing. This basically initiates a sub graph that will produce a research and write the section. We can simply pass in whatever inputs are necessary. Now note that we send these inputs to build section with web research. Let's look at that briefly. So that is just, you can see we use sectionbuilder.compile. This is basically creating a subgraph. So sectionbuilder was defined up here. This is where we defined all the logic necessary to just research and write a single section. This thing is pulled into our main graph as a subgraph and we kick it off in parallel for each section that needs research. That's really all we did. Now we can see in that section writing phase we kick this off for every section that requires research. So that's great. Now we can see in that section writing phase, we kick this off for every section that requires research. So that's great. Now we also have this write final sections function, which is going to write any final section of the port that are kind of summarizing. It could be an introduction, it could be concluding, but any section that our planner deemed does not require research. Those are done last. Now what's kind of cool is we can use the same send API to basically kick off that process right here. Now what we pass is simply the section we're writing and the result from all the research and writing we've already done. So that's the key point. We say here's a section to write, for example introduction, and here's everything we've already done. So that's the key point. We say here's a section to write, for example introduction, and here's everything we've written already. So you can use it to help build this final section. Now we can see here we're going to use this prompt, final section writer instructions. That's all defined up here. Very similar intuition we saw before, but in this case you're an expert writer, you're synthesizing information for the final sections of report. Here's a section to write here's the entire report context and for introduction i give some instruction for conclusion summary i give some other instructions again you can tune this any way you want and that's how we get to our overall graph we do the planning which we talked about in detail previously we do research and write each section that requires it first and we parallelize that. You can see that speeds this up a whole lot because we're basically doing all that research in parallel. We get all the completed sections. We then in parallel write any final sections, introduction or conclusion, basically any sections that don't require research and we find then we compile the final report so let's test this all end to end again we pass in a topic i'll just show you that again overview of a few agent frameworks i give a structure that i want there we go cool and we're done we can look at the whole report so again we get kind of a breakdown of land graph crew ai you can see it kind of preserves a similar style tries to maintain a structural element but it's not totally enforced some sections don't have one again each section has sources kind of a nice crisp distillation at the end here with a table pretty neat you can see this is all done pretty quickly and i'll show you the trace to prove it. So here's the trace in lagsmith. We can see it took 43 seconds for all that. Not too bad. That's because of all the parallelization we do. Now let's go ahead and look and kind of inspect what we did. So what was the process here? So first, we used Anthropic to generate some search queries to help us plan the report then we do web search so it's pretty nice you can see we get a whole bunch of information about the various frameworks then from all that information from the search results and from we provided anthropic will write the outlines you can see here's the sections and there's an introduction description land graph crew ai swarm workflows conclusion so there we go we have our planning done we've done web search to help kind of make sure our plan is interesting and up to date and we now have a very clear scope of work now what's neat here is you can see we kick off four of these build session with web research and these all run in parallel which is pretty nice. You can see each of these are 15 to 25 seconds so it takes some time right because you have to do web search you have to write the section of the report and then we can see we parallelize writing the final sections. We write the introduction and conclusion in parallel as well. So you can see it's only about 40 seconds to get an output that looks like this, which collates a lot of really interesting information with sourcing for you from the web into this nicely digestible form, which is entirely customizable. Now let me show you how I actually use this. I'm in the repo right now. Go to this ENV example, copy this to an ENV file and fill this out. That's step one. Now step two is I actually can spin up this project in LandGraph Studio, which is a very nice way to interact with my agent locally or it can be deployed for now i'm going to i'm going to use just the locally running desktop app so here i'm in studio here's the agent we just showed and we walked in gory detail about how you actually build this now how to actually use it so what's pretty nice is we saw that the input is both a topic and a structure if you go to the repo go to that report examples directory and I show a few different report styles that I like here's a cool one on a kind of business strategy so what I can do is I go back over here to studio I open up assistance and you can see I've created a few different assistants now let's open up one business strategy you can see I've created a few different assistants. Now let's open up one business strategy. You can see here I just paste in the report structure I want for this particular kind of writer type. So this is really nice because the report structure is just a configurable parameter of my agent. I can create many different report writing assistants that all have different styles different focuses and can write really nicely refined specific reports per my specifications here now this is what i like on business case studies so it's going to build a report type that's focused on business strategy per the specifications i have in that outline now i can give it a topic what do we want to learn about just say successful software developer focused marketplaces. Kick that off. We can see we're generating the plan first. Cool. It's going to do a search query about developer marketplaces in general. You can see this right here. Generate the plan. And now it's basically researching each section all in parallel. Neat. You can see there was the web search. Now it's doing the section writing. Now it's kind of hard to see, but this is actually happening for all of our sections in parallel. Neat. You can kind of get a clue what it's going to talk about. You see it looked up Upwork, Stack Overflow, GitHub. Cool. And here's our final report. So here's the report we just created. I just brought it over to the repo as a markdown file. So we can see it did a deep dive in GitHub's developer ecosystem and what kind of drove success there. Some interesting insights, a bunch of nice sources, Stack Overflow, and Upwork as well. Kind of summary table highlighting an overview of the research so this is showing an example of how you can create a customized assistant that writes reports of a particular type in this case focusing on business case studies looking specifically at kind of analogous prior comps to a current kind of business question or scenario but you can create as many assistants as you have general report styles or types that you want to write so overall this repo provides a kind of kind of nice way to do very customized report generation it's a result of a lot of trial and error that i have kind of worked on over the last few months and again it does kind of this planning up front generous report sections paralyzes our research on all the sections that require it and then a final synthesis writing any final sections like introduction conclusion based on the results of the other sections that you just all researched and wrote in parallel so you can see it's very customizable because you can kind of use user-defined kind of outline to create different report structures or styles entirely up to you in terms of optimizing and tuning the prompts all the research is done in parallel so it's pretty quick like 40 seconds to get a really nice kind of like long form report and yeah feel free to play with it leave any comments below and the repo is of course open source so feel free to add any contributions or leave any comments thank you | Report mAIstro: Multi-agent research and report writing | 2,085 | LangChain | 20241205 | Research + Summarization was the most popular agent use case in our recent State of AI Agents survey of 1300 professionals. But how do you design agents for high-quality research and report distillation? In this video, we share key insights and show how we built Report mAIstro, an agent capable of web research and high quality report writing.
Report mAIstro simply:
1. Takes a topic (and optional report structure)
2. Builds an outline
3. Parallelizes research + writing across sections
You'll see various report examples and learn how to configure assistants for different report types.
Code:
https://github.com/langchain-ai/report-mAIstro | 2024-12-05T10:08:24.221516 |
https://www.youtube.com/watch?v=9hNrosMqirQ | Hey everyone, how's it going? This is Trevor from the Arise team and today we're going to be talking about how to host Phoenix and data persistence with Phoenix. So I wanted to give a quick overview of what Phoenix is. For people who aren't as familiar, Phoenix is an open-source AI observability tool that you can use for your Gen AI applications, any computer vision models, and even your traditional ML models. Phoenix does have a lot of tools that are available for you no matter what phase of the journey that you're in. So there's tools for development phase, the testing and staging phase, as well as once you get your model or application into production. And we'll get into some of the more specifics of what Phoenix offers in some of these components during the demo, but you can also check out the Phoenix GitHub repository through this QR code. Make sure to give us a star if you haven't already. So Phoenix is super easy to get started with because you can run it locally, start iterating super quickly during the development and experimentation phases. However, once you're ready to actually move to production, or if you want to be able to collaborate with your teammates on Phoenix, you can deploy Phoenix for those scenarios. And there's a couple of different ways that you can deploy Phoenix. The first is we do offer a managed hosted version of Phoenix, but for today's conversation, we'll be focusing on the self-hosted options. So with self-hosted, we do recommend running Phoenix through images, and you can find all of the Phoenix images on Docker Hub, and you can also deploy Phoenix on Kubernetes. And before kind of diving in a little bit further, I wanted to give you a quick example of what this might look like if you do have a deployed version of phoenix so you'd have your lm application that's running on on a server that application is instrumented to collect all those traces and spans from the application and then you have another server that phoenix is running on and it's as simple as just pointing your instrumentation to whatever endpoint the Phoenix server is running on. I'll show you what that looks like when we actually dive into more of the specifics in a little bit. And then lastly, kind of before diving deeper, I just wanted to touch on what the data persistence options for Phoenix are. So you can either persist data via a SQLite database, or you can use a Postgres database. If you run Phoenix without any configuration, it'll use SQLite by default. However, you can also configure Phoenix to use Postgres instead, which is what we will do today. And for SQLite, it's just default mounted in the directory that's specified by the Phoenix working directory working directory environment variable and for postgres we'll need to set this phoenix sql database url to map to our postgres instance which i'll show you how to do so diving in a little bit deeper now for today's demo and kind of tutorial we'll be running phoenix via a docker container and so kind of the first option with doing this is just using the docker cli. So both of these commands should be pretty familiar. First is just pulling the Phoenix image from Docker Hub using the Docker pull command. And then we'll actually run that container or create the container, run the image using the Docker run command. Here we are mapping port 6006 on the host to 6006 to the container. Same thing with 4317. These IT commands basically just create an interactive terminal session for the container. And then this is specifying again that image that we want to run. So this is one way to get started using or running Phoenix via Docker container. The other way is to do it via Docker compose and specifying a Docker compose YAML file which is what we'll do and then once you have that YAML file it's really just running Docker compose up to start those services really quickly before we kind of dive into the actual code piece just to quickly touch on the ports that are used. So 6006 is going to be used for the web UI, as well as for accepting traces in OTLP format over HTTP. And then port 4317 is going to be used to accept traces, again, in OTLP format over gRPC instead of HTTP. So diving into the actual code of this tutorial. So first specifying our Docker Compose YAML file. If you'll notice, this is actually the exact same as the example in the documentation. So you can just pull this and be able to run this yourself as well. Um, but here we're specifying a couple of services. So the first is our Phoenix service. Um, and here we're specifying that Phoenix image that we want to use. It's going to depend on the database service. This database services are just Postgres database that'll be powering or being used as a data persistence layer. Here we're mapping those ports again. So again, mapping port 6006 for HTTP and then 4317 for jrpc. And then we're also specifying an environment variable. And this Phoenix SQL database URL is that environment variable that I mentioned previously that we'll need to use Postgres as our persistence. And this is just using that Postgres instance URL. And then for the database service, we're using the Postgres image, we're specifying a couple environment variables, mapping port 5432, and then specifying the volume. In this case, we are just specifying local as the driver, as our volume driver. Obviously if you are deploying this, not running it just locally, that would change as well. But for the purposes of this tutorial, I'm just going to leave it as local. And so once we have this Docker compose YAML file, we can actually just run Docker compose up. So you'll notice I'm already in the Phoenix demo working directory, which is where this Docker compose YAML file lives and we can verify that we just do ls right we can see that docker compose yaml file with our example notebook and then we can run docker compose up that's going to start create those containers start those services um and so while we're waiting for this to load uh we'll kind of walk through what the example notebook looks like. So for the tutorial today, it's a very, very, very simple Lama index application. Basically just uses a couple of functions that we define and probably is like maybe 10, 15 lines of code total. So here we're importing all of the classes that we would need for our LamaIndex application. And then here's where we import all of the functions and classes and methods that we need for our instrumentation. And so you'll notice most of it is just importing from the OpenTelemetry package. We also import the Lama index auto instrumenter from the open inference package, as well as resource attributes, which we use. I'll show you kind of what that means in a sec. I won't spend a ton of time on the instrumentation piece or the tracing piece for this tutorial. We do have a lot of other tutorials on tracing and instrumentation, so definitely check those out if this is kind of a new topic. But just for the sake of time, I'm kind of going to gloss over a lot of the details on instrumentation. The main thing I really wanted to point out is this collector endpoint. So in this case, we do have, since I did just run Phoenix locally, we can also kind of verify that. If you see over here, you can see that the Phoenix UI is running locally on port 6006 to log traces over http which is what we'll be doing we basically can specify this endpoint as localhost 6006 v1 traces that's what we did here if we were using grpc again localhost but instead using that grpc port 4317 and then you can also notice that our Postgres database is being used for storage, which is specified here as well. And so this collector endpoint, since this is running locally, that's why we're using this one. Again, if you had Phoenix deployed somewhere, you would obviously change this to map to whatever the URL of your Phoenix deployment is. Next, we're specifying a couple resource attributes, mainly just this project name. And I'll show you how to use this project name in Phoenix in a sec, or once we actually run this. But this is just specifying the project name as hosted Phoenix demo. I'll gloss over kind of these other instrumentation details. And then this is where we call our llama index auto instrumenter, and then actually call this instrument function that we just created. So let me run this first one that has all those import statements and then I'll run this instrument. These are the two functions that we give to our LLM. So in this case it's just an agent that has access to these two functions that it can call. So again super simple just multiplication a times b addition a plus b we'll run that. then for the llm we're going to use gpt4o from openai and then giving those tools specifying the llm and then in this case we're actually using the shakespeare writing assistant um prompt from lama index so hopefully we should get some some answers in kind of like a Shakespeare-like tone. So if I run this, and then our query in this case is, you know, what is 123 plus 2 divided by 5? If we run that, we'll wait for that response. And so we can see the sum of 123 and 2 is 125. Now to divide this sum by 5, we find that the result is 25. Thus, the answer to that query is 25. So kind of a Shakespeare-like tone. But then if we go back and we can actually load up Phoenix, it's running on loco 6006. And here we can see all of our projects. So again, since we specified that hosted Phoenix demo project, you can see I just have that one trace that we just sent in this project, but I also have a couple other projects that I specified in the past. If we click into this hosted Phoenix demo project, then we can see that one trace. We see our what is 123 plus two divided by five as the input. We can see the output that the LLM gave or our agent gave as well. And then we actually click into it and see all the steps that happened kind of under the hood when we submitted that query. So here we have our system prompt, and this is that Shakespeare system prompt from Lama Index. We can have the user, we see the user input, and then we can see both the function calls. So in this case, that addition function call is being called as well as the multiply function. And then finally, we can see that final LLM call with all of the system and user prompts, the assistant, the tool calls, the results of those tool calls, and then the final response from the assistant. The other thing I wanted to show is in addition to kind of visualizing all the data this way, since we are using Postgres as our database We can actually connect directly to our Postgres database as well and we can do that just using the SQL command line tool So if we do p-sql Specify localhost as the host again Obviously that would differ if you have you know a hosted version or deployed version of a Postgres database We can specify the port, 5432-username, Postgres, type in our password. Then we'll connect to the database, Postgres. And then we can list out all the tables in the database. So here we can have our, we see our projects database, or excuse me, projects table, the spans table, the traces table, a couple other tables as well. We can do a couple of select statements, select star from traces limits, for example, and we can see all of those traces. So we can see our trace IDs, the start and end times, what project they're a part of. So if you'll notice, I had five traces in my default project. I had one trace in that other project. And then this was the project that we just created. And so this is kind of meant to show you how you can actually access that Postgres database that's being used to basically process all the data that you send to Phoenix. And that pretty much wraps up the tutorial and demo for today. Definitely check out some of our other demos and tutorials on our YouTube channel, as well as in the Phoenix documentation and Arise documentation. And then make sure to check out the Phoenix repository and give us a star if you haven't already. Thanks for watchin. | How To: Host Phoenix | Persistence | 740 | Arize AI | 20240731 | With Arize Phoenix, getting started is relatively straightforward because you can run it locally and start iterating quickly during the development and experiment phases. However, once you’re ready for production — or if you want to collaborate with your teammates — it’s time to deploy Phoenix.
In addition to a managed hosted version of Phoenix, there are self-hosted options. Data persistence options for Phoenix include a SQLite database (default), or PostgreSQL.
This demo walks through how to host Phoenix and uses PostgreSQL.
🔗 Helpful Links
Arize Phoenix on GitHub (please consider a ⭐) https://github.com/Arize-ai/phoenix
Hosted Phoenix: https://docs.arize.com/phoenix/hosted-phoenix
Follow Trevor Laviale: https://www.linkedin.com/in/trevor-laviale/
More about Arize Phoenix: Phoenix is an open source tool that helps you trace, evaluate, and iterate on generative AI applications. The hosted version of Phoenix offers the ability to persist application telemetry data generated during AI development in order to better experiment, iterate, and collaborate in development or production. The solution has a foundation in open source and features a fully hosted, online, persistent deployment option for teams that do not want to self host. AI engineers can instantly log traces, persist datasets, run experiments, run evaluations – and share those insights with colleagues. | 2024-12-05T12:54:15.156630 |
https://soundcloud.com/theaipodcast/cleanlabs-ai | Hello, and welcome to the NVIDIA AI podcast. I'm your host, Noah Kravitz. We're coming to you live from GTC 2024 back in person at the San Jose Convention Center in San Jose, California. And I am joined by Curtis Northcutt, the CEO and co-founder of CleanLab, and Steven Gawthorpe, senior data scientist at Berkeley Research Group. We're here to talk about data. We're here to talk about GPUs, obviously. I think we're gonna talk about crime and corruption as well. This is gonna be a good one. Let's get right into it. And if I might, to ask you guys to kind of set up the story here. Curtis, let's start with CleanLab. What is CleanLab? What is the role it plays in all of the AI stuff that's happening now? Data feeds AI. CleanLab is all about good data. And then we'll get into how the two of you met and what Berkeley Research Group is doing with all that data. Awesome. Good to be here. It's a long story, and I'll do the short version. Great. Short version is I was at MIT doing my PhD, and they tasked me with the first – I was the first research scientist at edX, and they were building the cheating detection system. And as we know, innocence until proven guilty. Right. system. And as we know, innocence until proven guilty. So we assumed a bunch of zero labels for a bunch of education data. And I was doing this for hundreds of courses for all of MIT and Harvard courses. And I wanted to train machine learning models to look at educational data and predict if someone cheated or not. And discovered something fascinating. At that time, there were no labels that we knew for sure. So we had a bunch of people who were labeled zero, but they actually were cheaters. And we had some people who were one, but they weren't cheaters. And we tried to train machine learning models on that real world data, and they failed. So I went to the inventor of the quantum computer, Isaac Chuang, who's my PhD advisor. And I went to the head of AI at MIT, who's done, invented many of is common. Right. And I talked to them and they said, you know, AI is actually we haven't gotten that far. And this was 2013, 2014. Okay. So I spent the next eight years inventing a new field called confident learning. And then I went to Amazon and I went to Google and I worked at all these places. I worked at Oculus Research, at Facebook, Meta, I went to Microsoft, and I worked at Oculus Research, at Facebook, Meta. I went to Microsoft, and I worked at every big tech, and I saw how to implement this stuff for enterprise level, and that's where CleanLab came. Got it. And so how old is CleanLab the company now? The company incorporated about two and a half years. The research and the tech behind it, about a decade. Right, right. And so in a nutshell, what does CleanLab provide for customers? What CleanLab provides is an automated way to take every data point that's fed into data-driven systems, analytics, prediction, machine learning models, inputs and outputs, and be able to add metadata that tells you if you can trust the data, if it's reliable, what's potentially wrong with it, what's right with it, automated for every data point so you get the most value out of every data point. And so there's a lot in there. The labeling of the data, the trustworthiness of the data, determining if it's trustworthy or not, that's an automated process? It's automated now. That's 2024. Yeah. Okay. So how does that work? Yeah. So what we do is we look at distributions over the entire data set that that one data point is within. Right. And we learn, based on all the other data, what's typical, what's normal, what's the right label, what is the class, what is the type of class. We're learning advanced algorithms that we've invented in-house, and a lot of them came out of MIT when we did our PhDs. And we take those and we automate them on GPUs at scale. That's how we partner with NVIDIA. And it's incredible because what we have now is we have a workforce of data scientists who eat only electricity, which is 1 500th of the energy cost of vegetables, who work all night and they do exactly what our algorithm specified. And what they do is they check things like, is this data point an outlier? Is it incorrectly labeled? Is it correctly labeled and with what confidence? Does it have PII, personal identifying information? Is the image not safe for work? Is it blurry? Is it ambiguous? Would it confuse a machine learning model if it was trained on this data point? Would it make it learn faster and better? And we add all of this metadata for every data point. Are there humans in the loop verifying? Yeah. So what we do is we work with large enterprises that typically were doing this by hand. So you'd have 100% of your data, which in some cases is hundreds of millions of data points, fed to a human in the loop system where you have humans checking each one. And what we do is we take now 90% of that and we automate it with high trust scores, high confidence scores. And then the 10% of stuff is not able to be automated. We send that back into the traditional system. Gotcha. Okay. Is there a number? Is it for the accuracy of the Clean Lab systems at this point? So it's domain specific. Okay. So I'll give you an example. In terms of error checking, we have a minimum that we provide that we guarantee of 50%. Okay. And that's for finding errors. So that means if we throw out, say you have a data that we provide that we guarantee of 50%. And that's for finding errors. So that means if we throw out, say you have a data set, a million data points, and you throw out, say, 10,000 of those data points, or you fix, you correct 10,000 data points, what we can guarantee is that you're not going to ever throw out more than half your data, but you'll get very high precision. And precision means, say you have 5,000 total errors, you might toss out 10k data, which in the grand scheme of your data set is not that much, but you've eliminated 100% of the error. And what we're doing is constantly providing thresholds that allow you to say, look, if you look at higher confidence, then you're going to throw out less data, or you're going to correct less data. But that means that you're going to have more data still errors in your data set. And so if you throw out everything CleanLab flags, you'll probably get most of the error. But you might throw out a little bit of extra data. And that's the automated aspect. MARK MANDELAVYSCHIHLAMPANI- Yeah. And so this is for all kinds of data across a variety of industries, or are you specializing in one or a couple of sectors? JOHN MUELLER- Awesome. So Noah, that's a very key point. So I'm glad you brought that up. MARK MANDELAVYSCHIHLAMPANI- I want to ask you about popcorn and chicken, but people weren't in the room before we hit record, so they wouldn't have done that. So Noah's bringing up an allegory I use, which is that the microwave doesn't just cook popcorn. And I think it's easy to think of CleanLab that way. If you first learn about CleanLab and you hear, oh, this worked on image data, then you think, oh, this is a tool that helps automate and add value to image data. CleanLab is unique and differentiated in the fact that the way the algorithms work, the way the systems work, the whole software stack works, is we convert every type of data to the same representation. And that representation is then fed into our systems. So CleanLab is domain-specific, meaning it works for any particular type of use case but it's also data agnostic meaning it doesn't care the type of data it works for images videos texts tabular audio so forth right so for instance it could even work uh on data related to investigating crimes and corruption and all kinds of mistakes that's my very slick transition yeah steven how did you guys meet? How did Berkeley Research Group start working with CleanLab? Yeah, so I got brought on to Berkeley Research Group coming in from the United Nations Office of Drugs and Crime. When I came in, I took the senior role and wanted to kind of get our tech stack up and running. And the work that I'd done prior to that, my professional career had always been about data labeling, always trying to make sure that you take a good data centric approach and make sure that, yeah, you know, the ingredients that are going to go into the recipe, so to speak. So I had already done my homework, known what was out there, started to look into some other tools to help, you know, automate data labeling, model driven data, data labeling, and went to a conference, hosted by Snorkel, where I saw Curtis's talk on CleanLab. And I think you were just getting started at that time, right? So listen to and downloaded yeah your open source package that was that was still going on and concurrently it had some work that I was doing on investigating price manipulation and so I already had some data that I was looking at and it was kind of a bear to go through and manually label as you can imagine. I used Curtis's open source version and got immediate impact on the results. I saw that they were putting together an enterprise solution and I called them right away and said, hey, I want to talk, let's see what we can put together here. So this is when you were already at Berkeley? Yeah. Okay. Yeah. And so, yeah, I was trying to get our tech stack up and running. And so to take a step back, just to set the stage for the listeners, what does Berkeley Research Group do? Yeah. So our main focus is working on economic damages, disputes, investigations. If you see, you know, a large-scale scandal in the news, high-profile cases in health or finance, chances are we're involved in it in some way, shape, or form. So when I get those notices about a data leak from my health provider, you're the guy I call. I might be connecting some dots that aren't really meant to be connected, but gotcha. And so that was your, you were in a similar kind of work when you were at the UN? The UN, I was investigating drug trafficking, human trafficking, and smuggling of migrants. And I put together what's called the drug monitoring platform. So I created an AI solution that could web scrape all of the news in the world on drug trafficking and create a supplementary data set where you could see how much drug seizures are for every single type of illegal illicit drug in the world., I think, 30 languages. It's still going to this day. Yeah, cool. And so now you can see for free, publicly available, you can download all of the content about heroin trafficking, cocaine trafficking. A lot of that stuff is from the legacy that I put together. Gotcha. Very cool. So when you're investigating now, you work with Berkeley and you're investigating, you know, financial misdoings and that sort of thing without getting into too much detail, but where are you getting the data? And then, you know, kind of maybe before and after, you know, you met Curtis and CleanLab, like what was your process like and what's it like now? Yeah, yeah. That's a good question. So I get data two ways, either a company is in trouble, just to put it plain. Their entire data repository is seized either from servers or proprietary sources like Outlook files and all kinds of stuff. I need to stop and emphasize here that the type of data that we see is all shapes and forms, which is, I think, where there's overlap with what Curtis is trying to do. So I have to be able to address every single type of use case and data file type and create something out of it, something useful for an investigation. The other way that I get data is through open source intelligence, usually through web scraping. And yeah, I can talk about some examples on cases that I've worked for that. Yeah, I mean, juicy details are welcome, but I don't want to put you in a weird spot. No, no. Okay, again, and this is actually something I used with CleanLab on. I was commissioned for work from a cryptocurrency insurance company to try to identify the scope and scale of fraud in cryptocurrency, try to identify the mechanisms, assess the economic damage, how much people were ripped off and so forth. Right, right, right. So I scoured Twitter to try to scrape relevant content and then use natural language processing to identify the mechanisms. You know, such and such coin screwed me over for in this way. There was a fake website or there was a meta wallet attack or, you know, and then there was a value for the amount that they were saying that they were ripped off for. And so I put this together to map the risk landscape for the cryptocurrency. And so in doing that kind of work, how are you able, and again, maybe this is a before and after thing with CleanLab, how are you able to kind of determine if, you know, a particular piece of data you're looking at is reliable, is useful, is maybe, eh, this kind of sounds like, I mean, I don't know what, but, you know, maybe somebody planting misinformation on purpose or, you know, how do you go about determining the value of the data? Yeah, the veracity. Yeah, the data. Yeah, I mean, there's, there's all kinds of bots where they'll have multiple tweets on the same thing. Yeah, I think the way that I've always gone about it is just to try to identify who is the person that's disseminating that information. So if you can find reliable ways on the entity that's putting that out there, the more reliable the data. But, you know, at its face, you got to assume that someone claiming that they've been ripped off in a certain amount, you kind of have to go with that and assume certain levels of bias. There's bias in this type of, anytime you take on open source data and web scraping, there's issues with that. But yeah, so I would say that you got to take certain levels of acceptable bias. And then the way that you kind of refine is to look at who's disseminating the information and try to dig down further. That's how you target kind of the veracity of it. Yeah. And so is CleanLab able to kind of do its thing on, you know, the types of data that that Steven's talking about in terms of, I don't know if I'm approaching this the right way, thinking about it, but thinking about, you know, looking for outliers and the kinds of things you were describing before, Curtis, in a data set. Is it the same basic principle if you're analyzing data for, you know, an insurance fraud investigation? Yeah, it's a good question. Let me try to do some quick specifics. Great. All right. So say that you've got a data set, you've got a bunch of sources of information. Your goal is to figure out which source of information is screwing up your data set the most and screwing up the models that you train on that data the most. What you do is you just upload to CleanLab Studio that data. You already integrate it directly with whatever your VPC is, however you have it set up. You integrate with the data set. And then what we do is we'll say, look, here's all the errors. And now you have the column of the source. And you would just say, let's look at source one, right? How much error is in that? And then let's look at source two, how much is and so now you have an automated way to figure out of all your sources, which ones are generating the most type of error, what type of error, where's it coming from? And then you go back to your system that set up the data process in the first place, and you correct that. And so that's a quick iterative way. It's a quick example of how you can get to where's the source that they are coming from so we can do better. And then a second thing that's worth emphasizing, and this is something Stephen's brought up to us. He didn't mention in the example, but I think he can chime in at any time, is I talk a lot about data because it's a unique differentiating thing that we do. But what we're doing with that data is helping you with two things. One is better data-driven analytics. So that's like Shopify's, e-commerce websites, anybody who just has data that the better it is, the more likely they are to sell product. Yeah. And then there's another type of customer that just wants to train better models. So that's like your Mosaic MLs, your Databricks, your Big Tech, they're training LLMs, big AI models. For those companies, what we're doing is improving the data. And so I talk a lot about the data. But for smaller companies, SMBs, mid-size, mid-tier, we do something else that we don't talk about as much, but we do it really well. With one click of a button, we take that improved data set that we've created for you, and we let you just auto-generate a workable, reliable ML model that's deployed for you and that you can use. So that's another example, that's a concrete example that we know Berkeley Research has done, where instead of having to do a bunch of work and train a bunch of models and optimize them, they click one button on CleanLab Studio and they have a model and it can make predictions. Right. One of the things, Curtis, I would say that you don't take a lot of credit for here is that, yeah, you can help identify the anomalies, you can help with the data labeling. I think one of the other things is that you provide an interface to just simply look at the data, which is, you know, most people kind of shy away from. There's so many different instruments and mechanisms and diagnostic tools to help you look at, you know, your data distribution, everything. But at the end of the day, you should still be looking at your data. And there's quite a few cases where I didn't know what I didn't know. I didn't fully understand the scope and what the data labeling should be. And I think with the finding the errors, it kind of helped me better identify what my annotation policy is, what my labeling scheme should be. Just kind of helped me think about things by just looking at the data, simply put. I've got to add, Noah. Thanks for saying that, Stephen. Because when we started the company, and this is a decision every founder has to make. Everybody else in this space made the decision to go API. Because it's data, and data is a decision every founder has to make. Everybody else in this space made the decision to go API because it's data and data is a very technical thing. So you just build a back-end layer, an API, you call everything with code. It was a lot of work to build a whole front-end team. Sure. My first executive hire was not a VP of engineering, it was a VP of design. So I had to put a lot of work in and a lot of investment in order to actually make the front-end so that, because I believed in this, I believed that you cannot have of work in and a lot of investment in order to actually make the front end. Because I believed in this. I believed that you cannot have a data company and a data improvement company if you can't see the data. And yet everyone pushed from all directions, other founders, investors all over the world, saying, look, you should just do API. That's what everyone else does. And I'm really glad to hear that you're like, yes, that is what everyone else does. And that's part of the reason we buy your product. So there's a lot of front engineers at CleanLab that are going to smile when they hear that. Yeah, you get it. Excellent. Speaking with Stephen Gawthorpe and Curtis Northcutt, Stephen is a senior data scientist at Berkeley Research Group, and Curtis is the CEO and co-founder of CleanLab. We're talking data, we're talking investigations, and we're talking about the value of being able to see your data. I think for a lot of folks, I mean, that's interesting that it came up because for a lot of folks, it is scary, right? And this idea of these vast data sets and not even knowing where to start, let alone how to start making actionable sense out of these things. And so there's a lot of reliance on, you know, whatever it is, the business application that surfaces insights on its own and that sort of stuff. And so it's interesting to hear that about a data company putting that front-end interface first. You've both been at this for long enough, you know, that sort of before Gen AI started to become a big thing. And now, you know, we're sitting at this conference that it's Gen AI this, LLMs that all over the place. No disrespect to the robots, but, you know, we're talking about this stuff now. How has, I guess, Stephen, I'll start with you. How has generative AI and other, you mentioned NLP a little bit before, how have these newer technologies kind of merged with some of the more traditional data science techniques that you came up on? Or perhaps, you know, taken their place or sort of how have kind of the older technologies and the newer technologies come together in the work you do? This is one of my favorite questions. And I think I'm still trying to see how this landscape is taking shape. Yeah, we all are. You know, I came from this background when, you know, you didn't use the term data scientist, right? So that's kind of a newer term. And a lot of the work that I did in analytics is, you know, kind of, I guess now thought of as more old school, right? And a lot of the tasks that I took pride in, you know, in professional development and, you know, named entity recognition and all this kind of stuff is now kind of taken over by large language models, keyword extraction, all variety of tasks can be done so much faster and more accurate in certain cases. I think that for myself, I've kind of come full circle and realized that there's some areas that maybe I don't have to do as much sentiment analysis. I think LLMs do that quite sufficiently. But I think data is still kind of an important backdrop for all of this. So I've been doing lots of retrieval augmented generation, RAG pipeline development. Curtis and I were talking earlier this week on just some of the basic techniques of just parsing a PDF and just getting good quality content into a vector database. that's an old school technique. You know, just old school parsing and good quality data recognition, but it's paramount to get a good RAG pipeline up and running. So there are some areas where it's kind of give or take, and some of the old school still applies, but some of the newer techniques, I'm saying, okay, well, I just don't have to do that kind of work anymore. Right, right. What do you think about that, Curtis? Yeah, I think that's on point. I think there's been an evolution that has been happening and is going to continue to happen. If we go to like, let's go to 2005. Everyone's doing logistic regression, random forests. These are really simple machine learning algorithms that just take data that's like logistic regression, random forests. These are really simple machine learning algorithms that just take data that's like, here's my heart rate. It's perfectly curated. It's not arbitrary text. And then there's a label and it's like healthy or not. And then you predict if the patient's healthy. Then we entered this wonderful era where Jan LeCun was able to get convolutional neural networks to run on gpus which allowed us to scale and distribute image classification and that was one it was incredible time because that was the first time that people who didn't really understand ai didn't know what we've been doing could see yeah wait that camera knows that's a dog right that's different that gives me chills that makes me see that this is this. Cars are going to drive now. People started to think. That motivated a lot of money and research that was fed in. And then we saw another evolution, LSTMs. LSTMs were text models that could then take into account a history of tons of text and make predictions, whereas before it had to be very finely curated text. And then beyond LSTMs, there was something called GRU, which is very obscure, but it's actually pretty effective, gated recurrent units. And then transformers came out and the papers called attention is all you need. And those blew up. That was my first existential crisis with the transformers. Now I'm in my second round of existential crisis with the LLMs. Sorry to cut you off. No, it's good. It's good. I think it's worth fleshing out this history because I had several friends at MIT who started companies that were based on LSTMs. And I had other friends who started companies that were based on convolutional networks. All of their companies failed. And that's because they built companies that were model agnostic. It's a dangerous thing to do because we're in the fastest evolving market that's ever existed in humankind. AI and technology is increasing and changing now within a year. So where we are today is not something we can foresee, for example, even two, three years in the future. And what that means is if you're going to build technologies, if you're going to solve problems, what you want to do is you want to think, how do I do this in a model-agnostic way? The only other thing I'd add is that if we think about how does classical ML interact with now transformers and the modern LLMs, I think a good model to look at that has happened before is how did quantum computing interact with classical computing? And that's an example where you have a new technology that's not well understood yet by the public. And yet we're able to interface with that using old technology that is well understood by the public. And so this idea of having old things mixed with new things has actually happened many times in technology. And we can learn from those examples. Yeah, well said. So forgive me for asking this question on the heels of you literally saying, you know, it's moving so fast, we can't predict what's going to happen. But let's start with you, Stephen. The job of tracking down the bad guys, so to speak, right? Investigating fraud and corruption and all of these things. How do you see your line of work continuing to change and be influenced and hopefully getting, you know, faster, easier, more accurate for you because of these things, because of LLMs, because of NLP, because of being able to build rag pipelines and companies like Clean Lab who are able to, you know, help you do more with your data faster. Yeah, I think that, let's see how I can say this. I think that one, it's forced me and a lot of other data scientists that are part of my team to start thinking about more interdisciplinary I would have never considered like UI, you know, and caching, right, I've said the word caching more, you know, now than than I have probably my entire life just thinking about trying to set up the entire ecosystem to make some of this works. And so it's become more interdisciplinary, I think. And that's, that's a big challenge there. And then I think, like with some use cases that are very, very interesting, that's been very difficult, but I see this like moving very quickly with entity linking. So you'll have lots of ambiguity in persons names. And, you know, with large scale illicit funds being stolen and shifted abroad, it's hard to find and identify and link Mr. X with Steve or something like that. And I've been doing some stuff on my own with RAG pipelines in development, and it's extremely easy to now link persons together and show a sequential order of chain of events on transactions between multiple parties and just putting all of this information together. It's more at your fingertips, I think, than it's ever been before. And to do linking has always been a very difficult challenge, but RAG is making it quite simple. Yeah. Curtis, this is maybe not the right way to say it, but what's next for CleanLab? What are you guys working on now and you know kind of bigger than that where do you see things i keep thinking about i think because you were talking about uh steven you were talking about scouring twitter looking for you know people talking about being scammed out of out of crypto and trying to you know find clues clues and talking about bots posting. And so with generative AI, the bot problem, if you will, is getting worse in a lot of ways. And so there's a lot of bad data being generated faster than ever along with useful data. Curtis, what does that mean for the future of data, for the future of a company like yours, and for customers like Berkeley Research Group who are looking for ways to separate the good data from the bad and do more with it? Yeah, it's a good question. So there are a lot of things that are changing, but there are some things that are remaining the same. So some of the things that are remaining the same we can work with and we can guarantee. So one of them is that there is no learning without data. You have to have something to learn. And so we know that as long as we keep doing more AI globally, then we're going to keep increasing our dependency on the quality of the data that's fed into those systems. Even though the quantity of the data that's being fed in those systems is ever increasing. And so that's one of those things where the quantity is changing all the time, but the dependency on quality is always going to be there. So that's good for us. And it just creates a bigger and bigger market. Where I see us in the next three years is you have an idea in your head. You don't have to hire a bunch of PhDs. Steven is unique. He's been an incredible pleasure to work with because he's very smart. He has a PhD. He can think about these things. We want to get to a place where you have a team of people who want to get stuff done, who are creative. They don't have a PhD in machine learning and AI, but they do know I've got some data here, and I know CleanLab integrates with everything. And all I want to do is I want to click a few buttons, and I want to go from low quality data to high quality, reliable data-driven solutions. And when they think that problem, when that problem enters their headspace, they think one word, CleanLab. There you go. We usually end these conversations by asking for recommendations for listeners who want to learn more. And so we'll get to that in a second. But I'm going to put you on the spot, Stephen. For listeners who might want to learn more specifically about using data, data science, all of this stuff to fight crime, to, you know, all of that good work that you do, do you have advice, do you have recommendations for somebody who's, you know, a little on the younger end in university or just starting out their career and they're interested in the future of data-driven crime fighting? Yeah. Okay. So I would start with the Interdisciplinary Corruption Research Network. Okay. It's a group that I started when I started my PhD. It's a group of individuals that wanted, that didn't come from the legal background or economics background, which has kind of just been a mandatory requirement in the international space to participate in anti-corruption. We felt that corruption is everybody's problem. And so there's computer scientists, there's psychologists, there's engineers, there's people that have a stake in this. And so I would say, look them up, see the work that they're doing. And then I think if you want to get into, you know, some technical aspects of just learning the latest and greatest on LLM development, I did some work with AI Makerspace. So shout out to them. They're a fantastic organization, very practical. They get you up and rolling. And then maybe the last thing I would say is to read one of my recent publications. There you go, plug it. That's for your fun. Yeah, how to identify widespread corruption, geospatial, new insights from geospatial analysis. Excellent. So, yeah. And Curtis, you get the easy question. For folks who want to learn more about what CleanLab's up to, where should they go? Yeah, that easy. You go to cleanlab.ai, and if you want documentation, you go to help.cleanlab.ai. Perfect. All right. Well, we'll have to get together again and talk about the, was it the PhD wrapper, the wrapping PhD? Yeah, that's a whole other set. A whole other podcast for another time. But thank you guys both so much. This has been tremendous. Yeah, it's a, you know, we're sitting here at GTC and it's an audio podcast people can't see, but the way the room's set up behind the two of you, I'm looking out at the show floor and everybody milling about and there's robots and there's models and there's all kinds of stuff going on. And it really does feel like it's not not the start of something new but it's a moment so thank you for coming on the pod and uh taking the listeners with us thanks it's a pleasure thank you The End | Cleanlab's Curtis Northcutt and Berkeley Research Group's Steven Gawthorpe on AI for Fighting Crime | 1,968.562 | The AI Podcast | 20240410 | Talk about scrubbing data. Curtis Northcutt, cofounder and CEO of Cleanlab, and Steven Gawthorpe, senior data scientist at Berkeley Research Group, speak about Cleanlab’s groundbreaking approach to data curation with Noah Kravitz, host of NVIDIA’s AI Podcast, in an episode recorded live at the NVIDIA GTC global AI conference. The startup’s tools enhance data reliability and trustworthiness through sophisticated error identification and correction algorithms. Northcutt and Gawthorpe provide insights into how AI-powered data analytics can help combat economic crimes and corruption and discuss the intersection of AI, data science and ethical governance in fostering a more just society.
Cleanlab is a member of the NVIDIA Inception program for cutting-edge startups.
https://blogs.nvidia.com/blog/cleanlab-podcast/ | 2024-12-05T18:07:04.696165 |
https://www.youtube.com/watch?v=c-GqrwZ0Lt4 | hello everybody good morning good evening good afternoon and thank you for joining this neo4j live cognitive sciences and dynamic graph rack and i'm very happy to have my guests this evening for me vasilija markovic hi how is it going how are you doing this this day hey thanks thanks a lot actually pretty good i just talked to thomas from from neo4j oh nice so it's the neo4j day for me and i'm pretty happy to be here yeah great great it worked worked out we we had planned this um this actually session um i think in august a couple months ago uh but uh but there was uh you know sickness involved and you couldn't you couldn't come so obviously obviously we had to move the event to today. So I'm very happy that we can still, we're able to squeeze it in before the end of the year. We have a couple of weeks left before it all wraps up. So I'm happy that we can quite not really repeat, but we can, you know, redo the events that we were supposed to happen in august so and then you had a couple of great news as well in in between so i think maybe even better that we had to push it back a little bit uh so um to get the picture a little little little rounded up uh sure thanks thanks for postponing i got really sick i was trying to take a holiday after a long, long fundraising process, which as the news are, we managed to raise 1.5 in summer. And yeah, I got sick after all of that. I think it just came out of me, the tiredness and everything. So although I was in a really beautiful place in Croatia, my eyes were like all messed up. I couldn't see much. So now I'm back. I can see and I'm happy to participate. No, that's great. That's great. And yeah, congratulations again to the round. I saw your announcement on LinkedIn. When was that? A week ago, actually, when you posted it with a couple of investors into your company. And I guess we get there when we talk a little bit more about the topic today. But yeah, it's great to have you today. Great to be here. So yeah, I'm happy to share more about us, about the approach we are taking. I've seen a lot of your, let's say, talks in the past and I think they all kind of have the same type of direction and then some are more technical some less so i'll try to kind of keep this on on higher enough level and can explain what we do and but still give some maybe examples and show to people how they can find the the resources absolutely sounds good um hello to everybody watching live and chat i see andrea i see alejandro i see walter here guzi was from the us uh giving i i think i would say quite quite uh indirectly giving a thanksgiving note so happy thanksgiving to everybody watching in in the us um he's giving he's thankful for all the great graph database knowledge neo4j gives away for free so yeah we're happy to have you today uh join joining this this uh this session before the big uh the big long weekend i guess in in the states um yeah before we dive in um in into into uh you know cognitive sciences and the dynamic dynamic graph rag and uh and cogni a little bit i guess um vasily would you mind giving a a few a few sentences about who you are, what you do, and maybe a little bit about as well how you came to the world of graphs? How did you discover graphs? How did you make that as, I would say, a pillar even of your job life nowadays? So it's always interesting to hear a little bit about backstory. Sure, sure. So I'm Vasilij, 34, originally from Montenegro. I've been in Berlin now for 10 years working and living here. I graduated with a degree in business around 2013, worked in a crypto exchange, helped scale it from zero to 10,000 users and learned how to code and build as they flows and customer funnels. I realized I like data more than I like crypto at the time. You know, everyone has the preference. I like crypto in the beginning, but then I kind of, I got sour on it a bit. And as I was moving through my career, I became a data analyst and data engineer, data product manager, mostly managing big data systems. So I worked for a couple of companies here in Berlin, Zalando, Taxx, Omeo, and I've spent time pretty much, you know, doing data model, right? So in the end, it's the problem with relational databases, although the structure is relatively clear, everyone has their own interpretation of a session and what they're doing. So as I finished that, I became a consultant, worked for a couple of other companies that had even bigger issues with these types of problems. Then I got bored from moving data from place A to B. I went back to uni, started in cognitive sciences and clinical psychology, actually. And as I was doing that, LLMs showed up. The alarm showed up in full force, of course, they were there before. And I was working on some interesting projects for some clients, and then I decided to just start working on something on my own. So the road was rocky and interesting. Yeah, build a couple of B2C apps. I realized I know nothing about building B2C apps and then gave up on that one. But what I noticed is that when i was using tools like lang chain and when i was trying to resolve things uh there and have like some stable contextual memory i wasn't really getting far and i realized then as i was doing like these cognitive science courses and learning about human memory and like let's say dynamic entities and objects that we could how we use and how we use memory to represent things. I figured, okay, maybe we could kind of model this in a different way, have like this, let's say, more probabilistic entities that can be evolution of this relational model. And I thought, what could I use to do that? And then Neo4j popped up. So I just figured like, okay, I didn't know much about graph databases at that point. I figured, let me do this because it's the best way to represent and evolve the entities as they come along. So we built the first prototype, launched with the WhatsApp chatbot in production with around 20,000 users. And then we realized, okay, all of this is wrong. We couldn't really generalize things. We had like a lot of assumptions, and then we went on this road, uh, to build cogni as a Python library. And that's where we are at now kind of evolving step by step with the approach. Yeah. Cool. That's very interesting. Like, love to hear your thoughts on, on, on entity recognition with, uh, with graph database at some point maybe, but, uh, but let's not deviate too much from the topic of today. It's a top of mind thing that we're currently looking at for our going meta series that runs in parallel. So it would be interesting to hear your thoughts maybe if you've got some time at the end. But yeah, that's a cool story for sure. Thank you for sharing that. So we see see you you starting with with crypto then going into data modeling then you know now it's now founding your own company and um i guess graph has been kind of part all the way i mean even with crypto it's not really core core part but you kind of like you had an inkling i would say to to the to the technology still without maybe even realizing so that's that's always the funny thing with graph technology or for us at least, it's like how many things actually are graph problems or could be solved with graphs or are built on graph technology, even without you realizing it, you try to manage something and you do it with traditional kind of like technologies. And then you realize, oh, actually I could have done this with a graph database or with graph technology and this would have saved me so much work that comes sometimes later. About these concepts that you learn at some point and then you start applying and like externalizing, but until you know them, we are like kind of stuck in the old way of thinking and I think I'm super happy that I started working with graphs maybe a bit late but still yeah yeah cool now that's great um yeah um let's uh let's continue um and talk about uh cognitive sciences and dynamic graph rack so i don't know if you if you have any slides or if you want to want to do a demo or if this is more, if we can talk about it some more. But I just wanted to reach out to the audience if you watch this live and if you have any questions, any comments, and please feel free to put them in chat and I can read them out either in a break session or towards the end and we can cover a couple of the additional questions that come up. Sure, so how I was thinking of doing this, kind of explain things a bit verbally, I'll show a Python notebook, and then kind of go back through the things we've done, and kind of, again, explain we have some semi-slides in that notebook that I think would be interesting to stop at. That's great. And then we are we can actually uh yeah also talk and check the questions so um cognitive sciences right and and and uh what are we actually trying to do and how does this uh let's say approach evolved and i'll tell you the story of what we were inspired with what models we used and what we were thinking and where we are going with it now. And I think this story from the beginning to this day, I think it can actually, you know, we can introduce a few concepts there. So when I started actually with this problem of modeling of these entities, I'd say in the graph store in September last year, and how can I actually create a context for LLM that can be useful and that shows the relationships between entities a context for LLM that can be useful and that shows the relationships between entities and make the LLM get to a better decision to increase the accuracy reduce the cost by already pre-processing that I was basing a lot of my thinking on these models from cognitive sciences such as Atkinson-Schiffrin and all these, let's say, computing paradigm that was in use back in the 60s, 70s in psychology. So what's Atkinson-Schiffrin model and how do these things look like? So when we talk about human memory and it's important to distinguish between the storage and then the memory processes right so the memory storage or the memory domains is something that we use every day to store certain type of information so when we hear some word or see something we usually get 100 of that information stored in in our system but it's for a very short period of time because we can't just keep all the things, the audio, factory, and other signals that we notice. We just kind of have 100% sensory memory that we get. But unless it's really activating other parts of our system, we just let go of that. That's one piece of the memory. Second piece is the short-term memory. You can usually play with that or heard about that. memory second piece is the short term memory you can usually you played with that or heard about that it's like how many you know words can you remember or numbers uh in a sequence and you know how many can you keep in your short-term memory when you're trying to copy from one paper to another the shorter memory has a bit longer span but it usually uh can't also translate into anything you're going to remember after a couple of years unless it's really eventful and it gets then stored in the long term this is what that kids and schaefer pretty much said so you have the sensory memory you have short-term memory you have the long term long-term memories where you know you actually remember things and you know what's happening and you can recollect the childhood events so this is called the episodic memory so episode of your life you know when you got married you got the kid you went to the first day of school and then uh we have this what's called semantic memory semantic memories where we are actually um remembering some facts or information about that so this is a whiteboard behind me the concept of the whiteboard concept of the wall all of these are pretty much stored there so when i was playing with the the llms in the first moment i was like okay um i need to store the data somehow and like pass it to the vector stores but because the vector stores only have the embeddings they don't have any type of conceptual mapping where would these embeddings belong i'm just searching you know and getting the data back so what i was thinking like okay how about i hard code these these models i could i put like i have a short-term memory i have long-term memory i have episodic semantic then there was a princeton paper about koala that that just came out at that point when i was exactly doing that i think i posted like two weeks before they they published the paper i was faster than princeton and probably the last time and i was like okay you know i had the same idea they just did it much better of course and uh then when i was implementing this i was implementing that in neo4j i was creating certain nodes that would represent the the memories and then i was attaching the data and decomposition i was doing with the llm structured outputs to each of these domains so what that led to is as you can imagine the the that is you know we had i had these super nodes this problem of super nodes that had like too much data associated with them and then i had like a lot of let's say these memory components that over time grew and i couldn't really do a good retrieval so as we implemented that for these whatsapp users in a chatbot and they started storing their memories and searching for them we did marginally improve their let's say reliability of vector stores by giving let's say a way to navigate through certain types of memory domains but these memory domains were still too static so the next thought I had was like, OK, because I know that in the cognitive sciences, we have different, let's say, processes, memory processes that we use every day. So we have forgetting, which would be decreasing over the amount of time, the importance of a certain event until it disappears from the memory. And these types of things could be relatively easily coded into the system. So the next attempt was like hey how could i maybe alongside each of these information points i was loading to each of the memory domains how could i actually add things about recency frequency of use of this domain you know if we so in the psycholinguistics in the cognitive lexicon which represents all the knowledge about all the words we use there is this hypothesis that all the words that are on top are there because they are frequently so frequently i'll use much more than uh oxymoron you know or some not something i see yeah this was kind of the approach like let's kind of code that into the system and kind of re-update the system so this is where starting to get the dynamic approach right we are having like adding these weights and kind of using these weights and making these relationships not like vasili owns uh uh i don't know this or um like this is happening there which is what most of these uh lm simple projects do but having these numerical relationships that are kind of increasing in size decreasing in size and that we can quantify yeah it was a lot of fun meant that's right yeah you you you add like you you store the facts but you also give them like a soft uh additional information to it that is uh you know you know maybe this is importance I think something like that so that you can add it into to afterwards say okay this was a really important event so I guess it's 10 or something or this was not as important so maybe it's a one and then it gets forgotten or maybe it gets removed even I don't know but um okay that's cool it's a it's an interesting idea yeah yeah yeah I mean like mean, like it's known in psychology and what I thought like, okay, this is not known in computer science in this world. No. We just focused on these transformers and this- The internet never forgets. So I think like, yeah, exactly. And yeah, in the end, what I was thinking about is like, how can I actually make this scale, right? Because what I saw as a biggest issue with these types of processes is we would need to process let's say millions of nodes we would need to update uh you know sub graphs of information deal with like very complex edge cases that you know require let's say some stability that you know most of these approaches that i was doing at the time i couldn't really reach right and um that kind of pushed me and my co-founder boris at the time to to think about how can we actually make this solution a bit more horizontal and a bit more let's say mature in terms of the technology stack so we can kind of come closer to production processes and doing this at scale uh we were pretty aware that like cost of open ai was kind of forbidding at that point but we assumed it's going to go down like the price of the tokens was was going to reduce and we're going to kind of move with that so we released the python library we created the first cognify pipeline that did pretty much this and then um and use neo4j as the the the store used the open ai's lms use structured outputs created these graphs and these artificial domains of memory and we even with these updates with these let's say numerical properties we would add to the to the nodes and the edges and then try to kind of quantify them we were still running into a lot of issues? Because we simply needed like to be two levels above in terms of the performance and operation to actually, you know, be able to process one terabyte of data because with 150 megabytes, you can do anything, but like with compute, you know, for big companies, it becomes already difficult. And I think one of the reasons why Neo4j is one of the rare companies that managed to solve the graph database problem is because it works at scale. And even Neo4j has issues at certain point. So you need to be careful on how you deal with that. So that led us to kind of think about the product a bit more. And we thought that we should simplify the whole interface because this now is was getting a bit too complex to a set of abstract components so one of the components and maybe i will go through the through the example now because it's maybe a good time to start introducing these elements So you can run this yourself. We have also here a Colab notebook. I'm not gonna run it now, but I'll kind of show you what we had in mind. And we will share the link. Maybe you can share the link with me afterwards and I can put it in the video description so if you just search for cogni on github you should find us so um effectively we are here introducing a couple of concepts which i now mostly talked about and i think it's already clear but this demo is effectively just talking about the some series and some job descriptions and we are trying to extract entities and match these series to this job description so in this context yeah you can see the strings we just used to kind of illustrate the point and how we try to resolve that is that we effectively have this concept of Cogni.add. So Cogni as a Python library has brought to the simple interface because we didn't want to over-complicate things. Add command takes any type of data, so PDFs, images, audio, and stores that inside of the system for query processing so we saw that a lot of issues that happen with the systems like graph track systems were not enough data engineering right so we use this library called dlt behind that creates does the json normalization standardization creates the merge ids so we have all of that data that we need to actually be able to replicate and rerun the pipeline when we need to because this is one of the these guarantees that we can run something in production and once we do this first step we can do it here as adding a list of all of these job positions, but we can also add paths to any file in the system or S3 buckets or whatever is needed. And then this data is gonna be processed, normalized, and we can actually create a graph out to it. So what we decided to do in terms of the architecture and how to approach the system was to create a set of tasks that are tying in into pipelines. So I worked a lot with Airflow in the past and in this data engineering deck space, so these graphs, the central theory graphs, we saw that this type of a use case is relatively flexible and relatively useful. Haystack is doing something like this similar, but what we try to do with these tasks is we give them a state and then we are allowing the every task to inherit from another task so let's say this classified documents task is going to pass the information about the classification to the permission checking which is going to pass that to the chunks so only the chunks with the permission are going to be extracted and we can pretty much structure and organize the data in this way so this is nothing to you but for us this was one of the key things to unlock further experimentation with this cognitive approach because we could then finally play around with different approaches without having a hard coded static system that you know probably won't work in three months because yeah we just assume something that that might not be true so cogni as such is now effectively a pipeline that is combining a set of tasks that are running in parallel in batch or sequentially depending on how you set it or async and And then what we did also is we recognized that everyone has their own use case. So we created a set of infrastructure adapters that you can just connect to at any given point. So you can use multiple databases, relational ones, vector databases, you know, LensDB, PG vector, Pineclone Quadrant. We have here also Filecore DB adapter. We have Neo4j drivers, NetworkX. So you can run this locally, but you can then also have it running in production with one of these bigger players, and it should be relatively straightforward. Yeah, we also support multiple nlms so once we did this and once we kind of build this type of a system what happened was that we figured that well this is not really working that well not in terms of like technology that partly felt confident more confident about them than just the graph generation. But in this case, we had like, let's say we were trying to do retrieval and this context, we are trying to, we can do the search of the vector engines. We can do search on for maybe summaries and then just search for the notes in the graph. But all of this is still a bit too granular for an average user, right? And it's going to take a lot of engineering to get to the good results. And we spend a lot of time ourselves doing this. So you're thinking like, okay, this is probably not gonna just work like this forever out of the box. So we needed to add again, another layer of automation. So when we talk about the, and I'll switch back to that to the screen when we talked about the approach of uh kind of storing the data in the metadata um that is effectively like so these let's say numerical classifiers or or or metrics we thought about actually a different approach. Why not actually extract all of the relevant things we can optimize for in the pipeline, outside of the pipeline itself? So I'm talking about chunking, I'm talking about which LLM we are using, which vector store, which data model we are generating in the graph and how. And how about we actually create a system that kind of automatically evolves this. So the approach there was, and this is something we'll release relatively soon, is to have this pipeline be an evolutionary pipeline, as we call it. So a dynamic pipeline that just fixes itself based on a test set it is evaluated on so let's say are you following me or like should i maybe clarify a bit no yeah talk a little bit more about that it sounds interesting but um yeah that is an interesting approach here dynamic um you know updating dynamic uh almost like a um you know, living knowledge graph or living knowledge base, and that it's constantly changing, a little bit evolving almost. Yeah, we always talk about a graph is similar to the brain because the neurons and the synapses are like nodes and edges, but we didn't think it as literally as you were thinking. So it's an interesting approach. Yeah. I mean, the nature of systems is to evolve, right? And I think the approach here was like the initial understanding is like that, you know, you come in a workplace, in two years it's a different workplace. Like 30% of people left, 30% of people like you don't know so i think like every type of a system is an evolving system you can try to kind of keep a system from changing but then you know things tend to kind of end up in fortran in a bank for 40 years but do we want to be that so the way we're thinking about this is like, hey, we have static data warehouses. There is 140,000 companies using data warehouses. I did some analysis with Google Cloud and Redshift and whatnot. How about we have a probabilistic data warehouse? How have entities that are evolving in a data warehouse where schemas and relationships continue changing based on how good of the answer they are providing to the problem, right? And in this case, this is the first step for us is like to create these data pipelines in such a way where I add like a test set that, actually a train set in the traditional ML approach with some question-answer pairs. I pull that through the pipeline um evaluate it at the end see what kind of a grab did i get what kind of a combination of search searches did i get them and how they're evaluated then i say okay this is not good enough let's loop it back right and then i just loop it back loop it back or i can launch it powerly i can do many things with that you know i can do all these like packs of computer science you know uh searches you know optimize and start from the middle start from the ends whatever like that's all these interview questions from facebook i never got that so you know i'm not gonna try now to pretend i know too much about it but like let's say that we can we can apply different methods and with these methods what we would get to is effectively a self-improving pipeline. The self-improving pipeline is then gonna stop at a certain stage when it passes the test. Now, maybe it's over-optimized. Maybe we just over-train or over-fit that on the train set. So we then pass it through the test set. If it passes the test set, then we know that this data representation that we made, this, let's say, dynamically improving data representation that's where we are adding new things, interpreting them in different way, deleting things that don't matter, is one that answers the questions from our stakeholders. And this is the way we could avoid tweaking the chunk size, tweaking all these factors tweaking the uh all these like factors in the in the the graph rack pipelines that are really time consuming and ineffective and then in the end we could also create another pipeline which is feeding the feedback of of the stakeholders to kind of create a layer above that is kind of again influencing how we are communicating results maybe you can do an interview with the analysts that are using the outputs of this data maybe you can create a chatbot that's just feeding the data back but you're pretty much again you have a feedback loop and with this feedback loop you have an improvement you have a measurable improvement and then with relatively good evals, you have, let's say, a way to move forward. So this is something that, yeah, for us is, I think, the first step towards, you know, this, if you think about a snapshot of human memory, which a current, let's say, Neo4j graph would be, moving towards this slowly moving dynamic memory, and then we are assuming, like, okay okay cost of tokens are gonna be i don't know a few cents per million tokens in a in a year and they're gonna be doing that in like you know you could produce millions of outputs millions of outputs in like a couple of seconds so we're probably gonna have ever increasing speed of these things and we're gonna move like we did with like data engineering from batch to streaming over time and things are gonna get done magically. But yeah, for us right now, like even if we can prove this as a proof of concept is already like a step forward. Yeah. That's, that's, that's, that's very cool. And you, you always have complete view on, on, on each of these, these, these processes. So that's helps a lot, I guess, to, to, uh, um, understand a little bit better what is actually happening because you are updating the data, you are reconfiguring the data, and then you're outputting the data, and then the data becomes your living and breathing almost knowledge graph or knowledge base. And then you put that data into your chatbot or other, you know, generate AI-driven systems, but you always know, okay, this has happened because of this and this changes happened because of this. And this is the evolve, evolvement. So you, you, you can always follow that. You don't have no, a viewpoint into something which, you know, you can say, yeah, something happened, but I don't know exactly what happened and I cannot explain to you why, why it happened this way. But I just know that this is what I get here as a result and um that's sometimes you know gonna be hard especially for you know more complex data i guess and more more data where it really makes is really important that it is correct and true um you're opening up a good question um what what is you know the day it's harder for the data that needs to be true. I think we've changed the paradigm, right? So we moved from like relational data that I could map one to one, and I know always like what's true or false to probabilistic systems that, you know, spin out some tokens. And that is, I think, gonna lead us to, we can probably only give a certain high percent chance of accuracy but we can never be 100 accurate so most of our job now becomes not only building these things because it's relatively simple to chain them together right and there is all these chain tools out there i think it's about like making sure that they work because when you hit 70 percent and you need to move to 90 and then we have a way to actually avoid you know your chatbot giving free coupons to to to customers the wrong reports this is the difficult yeah yeah that's that's that's good um I I was wondering because I had had a session recently on on agentic AI and I was wondering if this the approach you just outlined and with with this you know more agentic driven systems I think this is the the current trend I see a lot a lot of people talking about this but your approach is is is is to some extent it sounded to me but correct me if I'm wrong sounded as like a similar idea but you're giving it a more you know you you keep the process with you so you don't give it away as much because with agentic air just say okay i have this idea of a process this needs to be done so ai do this then i have another idea of a process i do this and then i do this and then i combine them in the end and then i have something when i have processes in between which i have no control over almost something happens and then something gets gets gets gets back to me and your approach seemed to be a little bit more you are trying to structure it and trying to give it a more of a yeah more of a controllable feature feature set exactly and I think we're trying to reduce the complexity by reducing the number of variables that could change and then in that much more in line with the data engineering and data processing practices versus like hey can this agent do something broadly undefined and solve it for me without specifying the exact needs that I have so i think um for us the major part of the work is on defining the evaluations and on making sure that the system works at the beginning and the end as it's expected where this other parts can be automated whereas i think with these agents a lot of it is um defining the the middle right this set of steps that you expect the agent to do one by one, one by one. For us, that's not as important as having a more reliable data store that the agents can write to, read from, versus just having some data store that won't work even in these use cases. So, yeah, I would say we are more like the accuracy and scalability of this data to a degree we can guarantee for versus just having a complex vertical use case that, with the advancement of these systems might be outplaced or replaced in a couple of, in a year or two or whatever's happened. Yeah, cool. There's a question from Jan, maybe I i read that out this do you have any any slides or diagrams describing which data you actually store in your system or which data you are using with with this dynamic graph rag approach uh and you know how do you come from let's say jobs and cvs or or from something else to to the knowledge. I don't know if you can show a bit of that or explain. Yeah, sure. Let me just check. I think I have another example somewhere or... Just a second. So effectively, I think the example, how we actually do it is the structured outputs. So what we do, we would give it, we use pidentic models everywhere so um you probably all have seen this um we just provide the pidentic model via instructor to open ai's api with the data and then we pretty much try to fit that in so uh let's see where the models are um but i think yeah this you can pretty much, I'll share the link here. So I would say that this is the, yeah, I'm sharing this in the chat. Perfect, yeah. Yeah, so I think for you, Jan, yeah, it's effectively just using the instructor that you see here under the hood with some Pydanti class. And do you share your screen, by the way? Because I... and how do I? Here we go. So yeah, this is an underlying thing, so I don't go through our code because there is a lot of stuff there. Yeah, effectively, let's say you define some class called user, then you pass some data to it, and then effectively you're gonna get generated, you're gonna get the pedantic class back with the data filled in based on the data you've had. So if we, let's say, feed to all these CVs, we defined the graph object, which has some nodes and edges, and that gets filled in. So this is not something that we want to continue doing in this way so we have a new project that we're working on now that's called code graph reg generation so let me see i think i would yeah i'll stop sharing the screen i'll try to find it somewhere and show it where we are kind of combining more deterministic methods and llm methods on top so for example in this case what we're trying to do is help our coding copilot build a graph representation of github repository and then enrich that and then help actually answer any questions on that github repository to this coding compile it this person is interacting with it the current way they do it is like they just create embeddings store something in the vector store and then they hope for the best with this right now well it is true with this we are trying to pretty much narrow it down a bit and give them a space for search that could make sense and then again evaluate that test that adapt that a bit more so um this um so the grab generation uh in itself is let's say a relatively complex problem and for different use cases we are thinking about different uh potential approaches that could work and that could actually make sense. I think for us right now, this is a place where we are kind of experimenting the most and where we are really trying to do as much as we can on the topic of, let's say, of the graph generation and graph retrieval. So I think I found it, so I'll share a bit more on this. So this is code graph pipeline, it's called. And then in this case, we have pretty similar set of tasks to what you saw before. But then here, here for example we have this class called source code class that can effectively represent the information about the graph that we can extract from the LLM but also generate more deterministic methods that is going to contain nodes that can be a class class instance node function function call so this is all the things you find in the typical repo and then we can do this on on different levels so as you see here we have expressions function calls we have class instances operators variables so so all of the parts of the actual code base and then we also have the files and then relationships with these files between themselves so this gets pretty complex the trade-off here is where do you actually make the how granular do you want to go we want to see where is always connecting between itself or do you want to kind of stay on the file level so the answer is be somewhere in the middle uh but that piece you know we don't want to find you we want to automate the tuning of that and then we'll see how that kind of works with these evil sets and then the things we are building right now so is this is this almost like um ontology creation where you define a bit of a structure first and then you said, okay, this is my structure, this is what I want to see, and then you populate the fields accordingly. Yeah, pretty much. We're thinking of ourselves as, you know, DBT of the the future so you want to create like data models but you want them to be ontologies you want them to to change evolve and then you want the system to fill in the blank somewhere but you also want human in the loop yeah that's cool um where did you did you would you say is is a what's a great example uh or a great application i would say even where where your system or the the concept of company where that is particularly successful or particularly good good place to to to try or to use in in a in a real as a use case or as an application setup. So I'll preface this with an explanation of how I'm seeing the space now. So who are the people using these types of tools, right? I think back like seven, eight years ago, data science was an esteemed prestigious thing. Everyone was trying to be a data scientist. Then not many people were and not even less people had like the required mathematical background i'm not a mathematician but like i know when i see one you know like a good one who's a bad one like i do have some some training in these things but i would not call myself one and then um because it was so difficult to productionize these data science systems um it was relatively a niche. And it was these types of systems were used to improve, let's say, a certain part of a company and increase revenue from 3 to 5 percent if they are doing well or decrease it by 5 percent if not. But you wouldn't see data science become a mainstream thing. I feel with yellow lamps that changed because now everyone's a data scientist. You know, you have an API and you can just do magic that you couldn't do like seven, eight years ago. You would require like three, you know, lead engineers and like data engineers, two data scientists, one analyst. So it would be much more computationally resource intensive. So with this, let's say, openness now of the system, we have a lot of people more moving into the LLMs, into graphs, into everything that's related. And they need a bit more hand-holding than people in the past, because they had to kind of go through these, let's say, complex problems and scaling issues themselves and now people still just simply assume things will work with yellow lens because you're going to throw something at the api and work do magic but then the rest of the system is probably going to collapse if you don't really maintain it or build it properly so uh what we are trying to keep to do and how we are looking at this is we are looking at people who are moving into the space. We're trying to give them tooling to find the use cases we can't even imagine. So I know a guy, he built a teleprompter tool. So when you're doing an interview, you have a teleprompter running and then he's going to analyze what both people are saying. So I'm not using that. I would be listening to you and responding to Jan's or other guys' questions. And then it would just suggest responses. I could just read. I would be like, you know, American president. I don't know. I wouldn't have to think for myself. And not saying that they don't, but I'm just saying it might be easy. And effectively uh this teleprompter tool is now making was making 1.5 million in revenue in may this year oh wow two guys you know just building a teleprompter tool and like they did a good job marketing it but i think like ai is opening things i couldn't even imagine before if you could tell me that ai is going to analyze our discussion and give me a prompter to have better responses i would be not expecting that yeah so to answer the question uh now a bit more granularly we're seeing interest from different types of verticals uh construction agriculture a lot of people who have a lot of data and are trying to kind of merge this data and analyze it and create and have better insights into the things they don't have time to read or understand and with the llms it's either unstructured data or it's a combination of structure and unstructured data so text to sql then pretty much code generation is one big use case we've seen and we've seen a couple of others so i think um yeah in in practice practice we're seeing let's say two groups one is the people moving into this space so we want to give them an easy tool to use and that's where i try to simplify it and kind of abstract out the data engineering behind because that's a difficult point and the second group is like these companies that have a lot of data a lot of pdfs or unstructured or semi-structured data they just want to put somewhere and make it useful so kind of unlock this data access yeah yeah yeah i've been talking about it yeah no that's great that was that was very informative yeah and i think that's uh that's a huge space i mean we we talked a lot about but unstructured data these past uh months i would say everybody everybody was talking about, oh yeah, I have my data set of huge PDFs or emails or some other stuff I cannot really access in my dataset, but I think we tend to forget that there's more probably on data that is actually structured, but it's still sitting somewhere in a system and it's been used been used and if you if you then enable people to actually combine the the huge amount of unstructured data which can be messy and the huge amount of structured data which also can be messy in truth so um and if you combine them together and then make something that is actually accessible and that is that is actually giving giving something um to the user to the to the developer that can then be worked with in the chat pod is, as somebody at Node said, so 2023, but it still comes to mind as one of the first things to think about in a, let's say, first stab at it, but more complex applications will come around soon and will be there. Like you said, code generation, there could be something as well that is really, really beneficial. Looking at history, looking at maybe something that has been conversational. Imagine like this teleprompter app, you and I, we talk about an idea we have about something we want to develop, but then we didn't transcribe our conversation but we maybe forgot about it and then this teleprompter app prompts it again and says hey you know you talked about this cool idea and then maybe it spins it first so yeah and my question to you is what have you seen based on all these interactions with people i'm sure you're talking to many, many people in the space recently. Yeah, lots of people, lots of conversations. So more or less these conversations here are back-end driven. So it's lots about GraphRack really. So that's the top content everybody wants to, like I said, everybody wants to put their unstructured or semi-structured or structured and unstructured data in some kind of AI system and do something with it. And then the trends follow along. So that's why I mentioned agentic AI. I think that's currently where I see a lot of conversations. People talk about this a lot and put something in and outsource almost like things you have, you would have to think about yourself or you would have to do them into a AI driven system and then gives you offloads lots of your work. Yeah, okay. That matches pretty much to what I was saying. I think the space is gonna continue evolving pretty rapidly. Exactly. And I'm super excited to see all these graph rags come to play. I've seen another five projects in the last week. I think like when we started, this didn't have a name and you know, it was just like a concept. So I think like, it's super exciting to be able to kind of participate and to do these things and especially do them if we can, you know, at that level and that scale that uh you know it's actual production systems and and we change this dynamic of like snowflakes and redshifts of the world towards something more exciting and evolving it's like i think it's a it would be a good contribution yeah definitely so that that's that's what i what i what i what i was surprised at the beginning or what i was what really thinking is unique about your approach is the dynamic part of it so you you're still even with with most of these agentic ai systems you set something up and it is relatively static at least that what it feels to me to be the case and then you know when but but things are evolving so fast and things are moving um a lot and then if you have something that can change and is is adaptable to certain different kinds of requirements so you could you could have your data in but then one user wants actually code generation because they develop something and then you that helps them do it but another user might have some completely different questions and that it's still um you know in some cases it might need a complete refractor of the of the whole data store because you need you have different questions that ask different things of i lost the audio. Is that just me or? Yeah, I can't hear anything. No, sorry, maybe it's on my side. Should I try and rejoin or? Yeah, Alex, it seems others can't hear either. I'm sure he's going to be back soon. In the meantime, how about the commercial break? We can talk about the sponsor of this today's podcast. I'm joking. I don't know. I'm just trying to, so do you guys have any questions I could answer until he fixes his audio? So, so you guys that are not stuck here. Okay, since I'm fine. Yeah, I think I see this personal knowledge graphs question. Maybe I can answer that in the meantime. So we created this concept of a user, graphs, where you can actually, I've hosted the podcast now, so, you know, I'm the captain. Welcome. So yeah, the personal knowledge graphs, we create the concept of the user, so you can store for a particular user, you can create little graphs or sub graphs that just have their information and then for particular use cases you could also do that on organization entity level so you would avoid having that these types of issues with the the graphs that effectively are too broad and contain maybe some personal data you would not want to share with others in the system. Any articles I would recommend? Yes, many. I think there is a list on our documentation. I'll double check that. If you go to Cogni and then to Cogni's documentation page, you should see a list of articles if not I'll make sure to add that page back again after the call but we had a list of around 20 30 articles that inspired us in the past any other questions or hopefully Alex comes back soon. So yeah, I think like Koala paper and then, I mean, to name a few, there was a recently one about generating graphs with the four and SQL with graphs. That was, I think uh pretty popular in the enterprise settings um so yeah i think um um yeah are you back i think i'm back my internet connection dropped no worries sorry i took the lead just okay great yeah thank you very much sorry for for that everybody sorry for dropping off all of a sudden. It was really weird. I could still hear you, and then I couldn't say anything anymore, and then I don't know what's happened. Yeah, this happens sometimes. It does happen, yeah. Yeah, so it's like I think... Yeah, would you like to go through these questions? How much time do we have? I don't know. Yeah, we have a couple more questions. I don't know. Did you discuss anything interesting? I see. Just let me quickly follow up on the wallet concepts from Don. Yeah, I think you probably talked about this. Alejandro says, I understand that there are certain parallels in cognitive neuroscience. Have researchers evaluated the impact of forgetting on learning and how could this be effectively implemented if useful yeah i think that is an interesting idea because i was wondering about this myself as well as because you mentioned the forgetting part and and i you know i joked about internet never forget so and we all want to you know store as much data as possible everything needs to be saved everything needs to be put in some kind of database, you know, data lakes, data, everything goes in. But, you know, sometimes, you know, it might make sense to get rid of some of the older stuff or some of the irrelevant stuff and get rid of some ballast to streamline the actual, the important bits or the relevant bits or the things people are really interested in so there is so much research on this topic on the forgetting on how fast do we forget what type of information do we forget how much emotional uh attachment to a certain type of information impacts it this is also very connected to the trauma research so for example you know if you see the vietnam war vets from the movies when they remember they're back in saigon in 1968 that's actually happening for them they look back to that point in time and their brain is kind of rotating in one old memory that they couldn't forget that they can't really move forward so what I'm saying is the forgetting and the research there it manifested itself already with like this time decay information, like time decay metrics that you could add to the nodes. And there was some research. Yeah. But there is many more strategies in the ways we remember and the ways we forget. And I think it's a very, very open field to kind of go through and research and bring back the learnings. I'm not too much of an expert here but definitely something that we need to check on our side a bit more and i think um in terms of the yeah um effective implementation i think it's a quantification of the relationships in the graph and then actually having a particular type of of model have something actually work. And I think, yeah, that's what I would say on this side. And I think in the context of summer internships, I'm seeing a question here. No? We are not looking for those because we simply don't have the resources now to also manage interns. But what I would encourage is you can always book a call with us if you want to, like some onboarding session and our rep is open source. So any contributions are welcome. We're going to review them, the merge into the main branch and you can definitely showcase the skills there. And then, yeah, I think that's everything I've seen. Super. Yeah, if people want to reach out, the website is probably the best place to check out. I'll post that in chat again, so you can find that, or, you know, like you said, go to the GitHub repo and put in your comments there. Yeah, what can I say? Thanks a lot for the chance to participate. If anyone's interested, give us a GitHub star if you like what we do. Try the library open. As many back tickets as you can, I think, and if you're busy, so we are still new to this, and we want to have a really good system down the line. So more than welcome to the feedback. We have a Discord channel that's not too active right now, but we'll make change that. And join talk to us ask questions we're happy to add features and take the community feedback we have a community roadmap too so you can just uh you know you're going to be prioritized there and you can see the progress and the things you open cool that's that's great thank you very much for for joining us today Vassili, and for presenting this interesting topic, interesting approach, and interesting graph idea. So I would encourage you all to check it out and have a go at it. Let us know how you come around with it and what you do with it. So that definitely is great to see. In terms of next things here on this channel, I'll be joined by Jesus again on the 3rd of December for the final going meta of this year, going meta episode four of season two. So we will talk about semantics and knowledge graphs, again, similar concept. So on the Tuesday, the 3rd of December. So if you're interested in this, then please join us and let's continue the conversation here. We are also looking at your feedback. So if you are a Neo4j developer and want to give us feedback on how you develop, how you code, how you learn new things, where you're from and all your thought processes in going into developing with a graph database. And please let us know it's a quick survey. Shouldn't take too long of your time. And it would be helpful for us to prioritize on where we should be, what we should develop and what we, what you need and what you want from us. And yeah, in general, what to do next, Graph Academy is a great place to learn. Neo4j Aura 3 is a great place to start as with a graph database as a service for free. And then we have a community page. We also have a Discord server, so you can check that out as well if you want to, while you join Cogniz Discord server, you can also join the neo4j discord server and and say hi and hang out with the community there and yeah with that everybody thank you very much for watching i hope i see you on uh tuesday if you're in the states have a happy thanksgiving enjoy your long weekend don't buy too many things on friday do, I don't know, as you like. Thank you very much again, Vassili, for joining today and for the interesting conversation. | Neo4j Live: Cognitive Sciences and Dynamic GraphRAG | 3,552 | Neo4j | 20241129 | Explore the intersection of cognitive sciences and dynamic graph-based Retrieval Augmented Generation (RAG)!
This livestream delves into how cognitive theories can inspire new approaches in machine learning, optimizing data storage, retrieval, and personalization. Learn about the latest advancements in combining graphs and vector stores to create semantic layers, enhancing the in-context learning pipeline, and shaping the future of AI tooling.
Guest: Vasilije Markovic
https://www.cognee.ai/
Github: https://github.com/topoteretes/cognee
Instructor: https://github.com/instructor-ai/instructor
0:00 - Guest Introduction
2:05 - Overview of Today's Topic
5:04 - Cognitive Sciences and Dynamic GraphRAG
8:45 - Introduction to Dynamic Graph Memory Models
10:01 - Human Memory Models as Inspiration
14:02 - Dynamic Relationships in Graphs
18:29 - Demo: Cognitive Python Library
28:14 - Self-Improving Pipelines in Graph RAG
34:02 - Evolutionary Knowledge Graphs
39:10 - Code Graph Pipeline Example
47:02 - Future Trends and Dynamic Graphs
51:00 - Q&A Session: Personal Knowledge Graphs
57:05 - Closing Remarks and Upcoming Events
#neo4j #graphdatabase #graphrag #rag #genai #llm #science #cognitivescience | 2024-12-06T19:38:54.382447 |
https://www.youtube.com/watch?v=kwRG7cnqSu0 | Hello, my name is Elias Eki. I'm an ML specialist in Google and welcome to this world through where we're going to demo the end-to-end GNI AppStarter Path, a resource we created to accelerate the path to production for building GNI applications in Google Cloud. So let's start with it. Imagine you are in a scenario where like you are you build a collab, a notebook, and you build your own agent. Yeah, in this case, for example, I'm using Langraf and I built my own really complex Langraf agent with a Gentic rug to power our Vertex documentation. In this case, it's the system instruction documentation page. So then the agent works well, it's working well in my Colab, and I'm looking at how I need to professionalize it. So this is where the starter pack can help, essentially. So the idea is I can navigate to the Generative AI repository, and it's under Gemini, Sample Hubs, and to NGNI starter pack. There are also short links you can use to reach the same page. And then essentially the way to consume it is by downloading this zip file here and unzipping it. As part of this demo, just to reduce time for the demo, I already downloaded it, opened it up in my IDE, and I already created a GitHub repo so that we can use it to store the code. And I created a set of three empty GCP projects to deploy the full solution end to end. So the idea is really rich production in minutes here. So I'm going to open my ID and with the starter pack already downloaded. And let's have a look quickly at the architecture you see here. Hopefully you can see. This is a really high-level architecture, but hopefully it gives you the full idea of the end-to-end. First of all, we have an application in Cloud Run. This application in Cloud Run is hosting FastAPI, which is a framework to do low latency serving, and our agent. Our agent can be built with Lanchain, Vertex SDK, Lama Index, and so agent. Our agent can be built with land chain, vertex SDK, Lama index and so on. Coice is yours in the way in the SDK used to be the agent. All the code around the application is stored into this application folder you see here. We also have the infrastructure as a code and CICD. This is installed in the deployment folder, along with a good remit to explain the end-to-end process. We also have like a vector store and data storage analysis. Again, the Terraform stuff will build everything from scratch for you. And the chain or agent you use will use an LLM, and that LLM is defined within your chain or agent again. We also have a notebook folder with a getting started notebook, which hopefully will help you in developing your first agent or chain, should you not have already an agent or chain developed. So this is more like a way for you to start familiarizing with the starter pack. And at the bottom of it, we even showcase how to do evaluation. So this is more like a placeholder for evaluation, but really we showcase how much evaluation is important before you reach production. So in this case here, we can actually start testing the service locally. Yeah, so as I was mentioning, we use FastAPI and as part of the FastAPI application, essentially we offer different endpoints so that the client can reach those endpoints. One is to stream events. This is the endpoint used to power the chat application. And the other one is for feedback collection. The streaming events endpoint, if you see it here, will call a chain. So this chain here is nothing more than a really really really simple lunch in chain in this case here which is essentially a recipe bot planner so it's essentially saying you're a culinary assistant you need to provide recipes for the user yeah and so fast API is invoking this chain here we need to do point reinstall and install the right packages. Okay, now we can run the playground. Okay, so for example, this is like a fully fledged 3-minute application. Of course, it's for prototyping purposes, it's not production already, this part here. But you can do many things. For example, you can ask a question and it is a multi-modal so for example you can upload images so you can ask for example give me a recipe based on this image it supports any kind of modality you can for given conversation, you can provide feedback here and this feedback will go in the server, in the FastAPI server. So you can kind of replicate what the user will do in production. You can also save chats. This is pretty important when you want to build a set of ground truth conversation that went pretty well. You want to use it for evaluation or tuning purposes. And so, yeah, it's essentially a way for you to really test the service before it goes to production. But what I wanted to show you is that as part of the application, right now we are using the chain.py I showed you just now, which was the recipe assistant. We also offer a set of pre-built other kinds of application. One of them is a RAG application to showcase how you can create a RAG application with dummy data. And the other one is a dummy LandGraph agent. So what you can do is to really like just replace the chain you are importing and this will automatically, you will be able to test the other agents. So for example, if you want to use the LandGraph agent, just need to replace the importing line here. And then we go back to the playground, open a new chart and we can ask like, I watch the weather in New York. First time we'll take a little bit more. So you can see actually this because we are using a LandGraph agent, this is performing a tool call. So we use function calling in the scenes with LandGraph to power a tool call. And this is essentially being forwarded to the Streamlit UI and everything is streaming back. So essentially in this case here, we're capturing the event. So imagine you have a real application, you might want to give citations to the user and things like this. Of course, you cannot even like edit the message. Like you can say here like San some front and redo it again. So something similar to like the UI from Bart. Yeah. Okay, so, but in this case, we are just using dummy use cases, but I do have my own chain, right? So I might want to use that. Yeah. So how do I do it? I mean, it's pretty simple. Essentially, you will just need to, in this case here, the code is not in a great state, but the idea is that you will be able to copy paste this code here, go into, let's comment for one second, this line, go into chain.py and paste the code you develop in the notebook, yeah? So this is the land graph we used before. So like server.py, it's importing chain.py, which is still running. You can see the application is restarting from scratch. Yeah, and it's loading up. Okay, so for example, now we can go back to the playground. And if we ask a question like, what models do support system instructions in Vertex? It should be able to respond to it. So you can see it's actually performing one tool call. Yeah, nice. So essentially, this tool call you can see is actually powered by the documentation we provided there. So essentially everything was downloaded on the fly and showcased to the user. So we can say yes, very nice. Okay, perfect. So we can see the application is working locally, fine. Now, as a developer, there might be additional steps to build confidence in the application. Of course, you need to do evaluation. But let's say I already done the evaluation. I already have confidence in my application, and I want to deploy to production. So how do I do it? So let's go back here, and let's close for a second. Streamlit. So I'll need to go to the deployment folder. And now everything I'm going to do is actually pretty well documented, even with videos, into the Regni file to the deployment folder. And now everything I'm going to do is actually pretty well documented, even with videos, into the README file in the deployment folder. So everything I'm doing, you can replicate it yourself by just looking at the README. So we document how the CICD will work, and I'm going to explain it to you as well. So the first thing we'll need to do is to deploy all the infrastructure we need with Terraform. So this is required because we might need to set service accounts, IAM permissioning, artifact registries, enable the right APIs, enable things like log sync so that every login we do lands into BigQuery. So the way to do it is pretty simple. You will need to change this file here. So this file here will ask you for a project ID for production, a project ID for staging, a project ID to run the CICD. Then you will also need to provide a link to your own repo. In this case, we use Cloud Build to do all the CICD. So as part of the, if I show you here, as part of the CICD project, I already have a repository connected to it so that I'm accelerating the demo, but ideally you will need to do this step here and pointing Cloud Build to the right repository you want to use for the process. In this case, I'm using this repository here in GitHub. Okay, so just to accelerate the time, we are going to replace all the variables in one go. So we paste this line here. So you can see I'm using the project ID I just defined earlier, staging project ID I just defined earlier, the staging project ID I defined earlier, and the CICD one. Yeah. That's good. The only thing I need to do is to navigate to the deployment folder, terraform, run terraform init, initialize the environment. And then I will run terraform apply and pass the variable file we have there. So this is going to create 71 different resources for me. So it's a lot of stuff. So it will take 30 seconds roughly. Again, doing this infrastructure, if you want to do it manually, it will take you maybe days and it will be full of errors. So Terraform is a great way for you to accelerate this process and build and use code to define all the infrastructure you need. Okay, you can see cloud service accounts are being created, of course, like users will be able to take the Terraform code and extend it for their needs, right? Imagine a user might need a Cloud Armor or a user might need a load balancer. It's pretty easy to extend this Terraform code for their needs and their system. So you can see we created 71 resources. Perfect. Okay, so now everything is connected together. Cloud build will list them for everything that we do in our repository. So for example, if I'm a developer and I'm going to make a change in my chain, so in this case I have this agent, and I'm going to make some edits, like I'm going to write some edits here. And then I want to make a commit to Git. We can see we just changed the file. What I will typically do is that I will open up a pull request. So when we open up a pull request, we are going to make a commit and push it. Oh, lost. Nice. So we can actually see the pull request here in GitHub. And as you can see, GitHub will trigger Cloud Build. So as a developer, I will be able to know if Cloud Build is having a successful launching test or not. So what's happening in Cloud Build? We can actually see directly in the UI here. You can see that there is a new run happening in Cloud Build. And it is essentially running for me unit test and integration test. So we are checking that the application runs fine before we merge our code into the main branch. Now, let's accelerate the process. Once you deploy it, it will look like this. So unit test and integration test will be successful. And of course, you can run it also locally. But let's accelerate for a second and let's merge directly our PUT request. When we merge our code into the main branch, you will see another process being triggered by Cloud Build here. The pipeline is called CD pipeline. So this pipeline is a little bit more complex cause it will will have the responsibility to deploy our application towards production. So the steps we will do here are the following. First of all, we build a container containing the fast API server, all the right dependencies, and then we push this container to artifact registry. Then we take the container and we push it to Cloud Run. And then finally, we do load tests. So we test the application in the staging environment. Once the load tests are successful, we are able to trigger a production deployment where we ask a user to confirm or not the production deployment. So an example of a successful load test, sorry, an example of a successful CD pipeline will look like this. And as part of it, you can actually, as a user, before you reach production, you will be able to inspect what's happening in your application and see a report of, like for example, the load test of the application. So for example, we're using Locust in this case here, which is a framework to do load testing. And you can see that as a user before reaching production, I will be able to see what's the average latency of my application, what's minimum, maximum milliseconds. You will be able to see like total number of requests over time. And as many as you like total number of requests over time. And as you increase the number of requests, what's the performances of your application. Yeah. So these things can be super helpful before you reach production. Now, the application will run for like three minutes. So let's wait for it because I want to show you like the full end to end, but I want to also unveil a little bit of the magic behind it. Yeah. So essentially if I show you here, this graph here, the idea is like whenever a developer is triggering a pull request, we use Cloud Build and the Terraform code was instrumental to do it. We use Cloud Build to trigger trigger this pipeline you see here, which is in the deployment folder, CI PR checks, which is a really simple pipeline, which is installing the packages, running unit tests and running integration tests. And the tests are defined in the test folder. So we have unit tests, and we have a pretty comprehensive unit test layer, integration tests, same thing here. We test everything end-to-end to check that the service is running fine. And then finally, we also have the load test here, which everything you can run locally, by the way. So you can always test before you deploy. When it lands in main, we do the same process I showed you before for the CD part. So you can see now that the WorldCA CD was completed. And as a user, the last step I will need to do will be to approve or not the deployment to production. So I may say good to go and approve the build. This will only run the last cloud build pipeline, which is pretty simple, is just triggering the deployment to production. Yeah, so nothing really, nothing complex, let's say. So you can see this is deploying the container in Cloud Run for us. We can actually have a look at deploying the container in Cloud Run for us. We can actually have a look at what we use in Cloud Run. So in this case here, and of course, like Cloud Run here, it's the default, but you are free to diverge. In this case here, the deployment is pretty simple. We are just deploying Cloud Run and we are saying it needs to have only a minimum of one instance. So that the service is always up and running for the users. So we have a good latency. And we have a concurrency of 40 requests. So 40 users in parallel can send requests to one service. But of course the service will auto scale over time. So yeah, it's almost finishing. I can also, in the meantime, I can also show you what's happening on the observability side, because as part of the FastAPI server, we are logging every single request that goes to the application, yeah? And so like if I, oh yeah, it just deployed, so I can actually show you in real time. So if I copy the URL of the application we have in production, we can go back and trigger make playground. So in this case here, we have a remote URL. So instead of using localhost, I'm going to paste my remote URL, and I'm going to authenticate the request. So models supporting system instruction in vertex. We will see the production model responding to it. Yeah, perfect. So you can actually see like end-to-end kind of scenario the production model responding to it. Yeah, perfect. So you can actually see like end to end kind of scenario where we deploy the service to production and the production endpoint was able to respond with the right tool call. So the agent is finally deployed to production in around 30 minutes. We can give feedback again and things like this. And the thing I wanted to show you is what happens once we use the model from production. So once the model is in production, we can navigate to the production project. And in there, you will be able to see the Cloud Run endpoint up and running. And you will be able to see something pretty, let's say special, which is all this stuff is happening here. So essentially what's happening is that the model is giving us full transparency of every single step happening in our application. So if I go to Cloud Trace, like for example, we'll be able to see these dots you see here here which are essentially let's reload it, which are essentially the full trace of our agent. So you can see these are actually the reasoning step of our agent in the production endpoint. So you can see the agent first interacted with the task agent and method, then he started the retrieving documents, then he started grading documents, rewriting them and producing the response to the user. And for every single request, I'll be able to see input and output of what happened. Yeah. So I'll be able to see, you know, which document I use and then everything will be logged. Because cloud log in will expire in 30 days, everything is going to BigQuery. So everything is landing in BigQuery. And again, the Terraform here was pretty important to set up this in seconds. So this BigQuery table here will contain everything we are logging from the application. So if I run it, I should be able to see all the logs we just produced by me invoking the endpoint. Yeah. And also same thing for the feedback side. We have a table hosting the customer feedback. So also these will contain everything from that now the final part the final part is that this stuff doesn't make any sense for us right it's pretty ugly to see a table like this i want to have a dashboard so we are also offering a dashboard as a template to view what's happening in their application so for example in, in this case here, I'll be able to see that, what requests are going through the application, what's the feedback over time, how many conversation I have, average feedback score and things like this. And for every conversation, I will be able to see what the human and the AI are responding. So this concludes our world through. Please don't hesitate to reach out for any feedback or question you should have. There is an ideas here that you can use for that. We are actively looking at your feedback to improve the starter pack over time. So stay tuned. | E2E Gen AI App Starter Pack - walkthrough | 1,338 | Elia Secchi | 20241029 | Thanks for watching! You can reach out to [email protected] for any question or feedback. | 2024-12-07T09:10:56.555940 |
https://www.youtube.com/watch?v=r09tJfON6kE | All right, so let's go and get started. My name is Jonathan Larson and today I'm excited to come here today to talk to you about GraphRAG. GraphRAG is an LLM-derived knowledge graph for RAG, and just to give you a small teaser, what you see over here on the right-hand side is actually a picture of Kevin Scott's Behind the Tech podcast as an LLM-derived knowledge graph, an LLM memory representation. We'll be covering this in about the middle of the presentation here today, so I just wanted to give you, though, a teaser of what's to come. So what is GraphRag? GraphRag is really a two-step process. It is first an indexing process that's run over top of private data to create LLM-derived knowledge graphs. These knowledge graphs serve as a form of an LLM memory representation, which can then be used by subsequent steps to then do better retrieval, which leads us to the second step. The second part of it is an LLM orchestration mechanism that utilizes those pre-built indices that I just talked about, and then those indices can be used to construct much, much better, much more empowered RAG operations. And that really leads us to the key differentiators of what GraphRAG allows us to do. The first is it allows us to help enhance search relevancy. This is because it has a holistic view of the semantics across the entire data set. Second, it helps us enable new scenarios that would today require a very large context. For example, doing holistic data set analysis for trends, summarization, aggregation, things like that. So if you haven't had a chance to read it yet in the pre-reads, I'd encourage you to take a look at the blog post, take a look at the archive. They have a lot more technical details and measurements and evaluations in this space. But the one thing I want to ask of you today as we go through this presentation is how should we best drive impact with this technology. Okay, so let's go ahead and explore how GraphRag actually works. The way that we explain GraphRag is first to explain, of course, how baseline RAG works. So in baseline RAG, what you do is you take a private data set, you chunk it up using embeddings, and you start into a vector database. Then you perform nearest neighbor search, and you can use those nearest neighbor searches to augment the context window. GraphRag is a parallel process to the way that baseline rag works. What we do with this is we actually take the same text chunks and then we take those sentences that are being extracted and we ask the LLM to perform reasoning operations over top of each sentence in a single pass through over all of the data. So let's take this example sentence here. We have the POK leader, Sylvia Marek, who took the stage with Lucio Jacob, founder of Sea of Our Wildlands. Now, the major differentiation here is you can see there's some named entity recognition done over top of the sentence. And that's pretty typical for this type of text analysis. However, the major differentiation here is we're not just looking for the named entities, we're looking for the relationships between those entities and the strength of those relationships. And this is where GPT-4 really comes in to play a very strong leading role in the capability of this technology. For example, we can actually see that Sylvia Marek is very strong and really related to the POK because she is the leader. We can see that she is perhaps weakly related to Save Our Wildlands, because she is apparently taking the stage with its leader, but she's not the leader of that. And that's a major differentiation here, is that GPT-4 can understand the semantics of these relationships, and that allows us to create weighted graphs from those relationships that are far richer than just like co-occurrence networks, which is where traditional NER would typically take this type of problem. So once we create these knowledge graphs, let's say, for example, we took all these sentences across this data set, we create a knowledge graph here, what you get are a series of nodes that are connected to each other via these relationships. But that's not all we can do. Once we have the graph, we can utilize graph machine learning to do semantic aggregations and hierarchical agglomerations over top of those structures. So if you take this graph right here that we just created, which has no labeling on it, no colors, if you will, we can then create a labeling on that at one level, and then we can hierarchically create subpartitions and subpartitions until you get down to individual nodes. This allows us effectively a granular filter that allows us to ask questions at any level of granularity across the dataset for a semantic topic. Once that is built, we can then take that into a variety of different end use cases. Just to list out a couple, we could do dataset question generation, we could summarize Q&A, and a variety of other methods that we can talk about here later today. So let's go ahead and jump into some demonstrations here and show you this technology in action. So as I bring up the screen here, what you're gonna see here are three different columns. In addition, I need to explain the dataset that is behind each of these different columns that we're going to be analyzing. So we have three different RAG systems that are implemented over top of one data set. This data set is about 3,000 articles in total. It's articles that originate from both the Russian and the Ukrainian side of the war, of the conflict. And we're going to ask it a question to all three different RAG systems. We're going to ask it a question that to all three different RAG systems, we're going to ask it, what is Novorossiya and what are its targets? So it's actually two questions. Now, if you're not familiar with what Novorossiya is, Novorossiya are the Russian occupied portions of Ukraine. It's also a political movement. And then the what are its targets are looking for what Novorossiya might be looking to destroy is basically. We know that there's some information inside this data set. Let's see how we can actually use RAG based retrieval to find those particular targets. So using traditional lang chain based semantic search, this is just using baseline RAG right here, we can see it actually failed to answer either of the two questions. Sometimes it can answer the first part of the question okay, but today it decided it couldn't come up with anything. The second column here is a much improved version of RAG. This is a, as people who've been using RAG know, it oftentimes requires a lot of tuning, prompt engineering, and other improvements to actually make the RAG work more effectively. And we can see that here. So the left-hand side here is looking at baseline RAG. The middle column here is a much improved version. It's a supercharged version of RAG. Now if you read through this text right here, I'm just going to scroll through here real quick, you will actually see that it actually does an okay job of actually talking about the first part of this question, what is Novarosia? But it actually fails to mention anything about the specific targets that Novorossiya are looking to destroy. In comparison, over here on the left-hand side, I'm going to highlight this first paragraph. This first paragraph here actually addresses the what is Novorossiya part of the question. That's good. But the second paragraph right here gives us a list of very highly specific targets that Novorossiya was looking to destroy. For example, the national television company Ukraine, the radio station, the cannery, looks like also some private bank and Roshan properties, and they're also planning on terror attacks in the city of Odessa. And so this is exactly what we're looking for. These are the specifics that we were seeing recall failures for on the baseline RA reg operations that we performed in the left two columns. Now, when we perform GraphRag, one of the nice features about it is it also allows us to actually look at the underlying provenance. So you'll see inside the text here, it refers to relationships. We can actually take a look at those relationships. I'm going to move to a second tool here, which is this is a purely GraphRag tool. So I have two columns here. One is doing a local search. One is doing a community based search. We'll get into the details of what each of these two are later. Both of these answer the question correctly, but I'm going to focus on that question about Novorossiya has targeted several entities for destruction. Let's go and click into those relationships. And if we open up the raw text text we can get the English translation because I can't read Russian or Ukrainian and actually get to the originating text chunk that was used to make the specific claim made above right here. So this can really help aid in understanding if there are hallucinations being made in the system and detecting them and then also providing grounding and evidence which is critical for analysts trying to use this data for their production purposes. The other thing we can do in here, of course, as well, too, is we can use a second agent to really help reduce hallucinations by providing a verification score. In this process, what we can actually do is we can also use the information that's provided in the context and the answer that was provided and use an independent agent to evaluate the two of those together to ask it if anything was hallucinated in those results. And so the score can really help provide an after-the-fact analysis as to whether the information was correctly grounded or not. Now let's go on to some of the capabilities where GraphRag can show new opportunity spaces where regular RAG struggles. I'm going to ask it the question, what are the top five themes in the data? Now, when you ask this question to a typical baseline RAG system, it's going to take this phrase. It's going to vectorize it using like an ADA embedding. Look for the nearest neighbors. The problem here is there's nothing in this query that would indicate any specific filtering over top of that data set unless someone inside of this data set already wrote what are the top five themes of the data, which is highly unlikely because each of these articles are independent of one another and they don't have any knowledge of one another. So what does baseline right give us in this case? In this case it gives us state of Russian economy. It actually did in this case also come up with one example here about acknowledging that there seems to be a war the national rating the investment climate improving the quality of life and the meaning of Vladimir Putin so I want to just take a step back for a second and kind of emphasize this is a data set that's primarily about a conflict like 80 of the articles about a conflict and you can see about 80 of the bullet points here don't have anything to do with the fact that there's a war going on. In contrast, because we can use the semantic and thematic and agglomerative approaches that we've built over top of the graph machine learning parts of GraphRAG, we actually have a holistic understanding of what's happening in the data set. So on the GraphRAG side of things, you can see front and center, the first thing it's talking about is the conflict and the military activity and how that plays through on each of the major themes that it then gives you in context. Now I do want to draw your attention to one other thing on this. With BaselineRag, it's using about 5,000 tokens of context and it returned in about eight seconds. GraphRag here is a lot more expensive. It took about 50,000 tokens of context and took about 71 seconds to respond. But the important piece here is that though GraphRag is using a lot more resources, it's providing a much, much richer and correct answer, which we've seen for our customers is the crucial piece and what they really care about. And they're willing to actually pay the extra cost to get these much better answers returned. Next, let's take a look at this data set from a network map perspective. So this is actually a visualization of the entire graph network map of this data set, which is called the Vena data set over the Russian-Ukrainian war. Now again, I said about 80% of it's among the war, so the main core of this is going to be about war topics. On the periphery, however, we'll find lots of normal topics, and one of the ones I'd like to look at in particular is this group over here on the left. It turns out a lot of these entities here are about soccer, and you can see that they're semantically grouped next to each other in the embedded space, which is also great because it starts showing us that there are a group of entities that are co-related with one another. And then these colors here serve as a way for us to actually look at the communities that define these semantic boundaries. So for example, we can actually choose one of these colors. I'm going to choose a color for community, it looks like number 450. I can extract the subgraph for it, and we can also extract out pre-generated reports that the LLM has already generated on each of these community structures. So in this case here we can see clearly this is the Nova Rocia community that was being used to answer some of those queries that I made earlier in the demonstration. All right let's go and switch context a little bit to a different data set. So this is a data set where we took in all of the transcripts from the Behind the Tech podcast, which is, of course, hosted by Kevin Scott. So we can actually use the same type of methods here with GraphRag and ask it holistic, thematic, and trend-type questions over top of this data set, where, again, regular RAG would tend to fail. So the first question I'm going to ask it are what are the top 10 technology trends in the podcast. And so you can see here it has some good breadth and diversity. We'll dive more into this here later too when we go to compare this with Gemini. So I'll ask you to maybe comment on some of these later because we'll actually be able to see it in side by side with Gemini. The next question, and this is an important one because we have a lot of other demonstrations tied to this one, or what are the most odd conversations discussed? Again, this is a question where if you imagine baseline RAG being run over the top of this, it would perform very poorly. It would effectively pull back random chunks, and if those chunks had randomly picked up something with some odd conversations, it would comment on them, but it's not going to have the comprehensiveness and diversity that a graph RAG approach will in this case. The last example I'm going to show you here is actually a side-by-side. As I mentioned before, we have orchestration that runs over top of the indices. We have two of those methods that we're highlighting here today. On this orchestration, we have one that does a local search of the knowledge graph. It looks for, in this case, nodes that discuss artificial intelligence and looks at the knowledge graph. So it looks for like in this case, nodes that discuss artificial intelligence and looks at the nearest neighbors. Or we have a global search, which is going to be a more expensive operation that looks at the community summaries. And you can see the comparison of the depth and breadth between these two. So in this case here, you can see that very clearly the global summary here is providing you a much more comprehensive and diverse view than you are in the local search. So let's take a look at Kevin's podcast, Knowledge Graph, an interactive graph visualization tool here. Now, each of the nodes here represents an entity that the LLM extracted. And again, I just want to take you back for a second. This Knowledge Graph did not exist before the LLM was exposed to all the transcripts. It read through all those transcripts and created this from nothing. And that's pretty cool. So if we take a look at the colors here, those colors are semantic partitions that represent a high level topic. Now, one of the things I should mention, because I can only show one level of that hierarchy at a time, is that we're looking at the root hierarchy of this graph right now. So if we zoom into this node right here, let me go ahead and just pull it up here. We can see that this node right here represents Kevin Scott. Zooming out and looking up a little bit, we can see that this node is Microsoft. That's another very highly connected one. And another one right here, which is a major landmark, is Christina Wharton, who helps on the podcast as well too. Now if we zoom into one of these colors in particular let's go ahead and take a look at this green section over here. One of the things we're going to notice is we can see all the entities that are being pulled out. So we can see things like RNA virus, spike glycoprotein, SARS coronavirus 2, some things like synthetic biology, computational biology. And what's interesting here is it's actually grouped two episodes together into the same semantic topic. We have one with Drew Endy in episode 22. We have one with, oh, yeah, one episode 33 of David Baker. I looked these two people up. It looks like they're both in the biology field, so it totally makes sense. And it actually goes to show that GraphRag is actually working in the sense that it pulled out all the entities and then actually grouped them semantically together, which is exactly what we want because that will help us answer those questions. So if we get biology type questions on the orchestration side of the house, we can then come to this portion of the graph to then help better answer and augment those questions. | GraphRAG: LLM-Derived Knowledge Graphs for RAG | 939 | Alex Chao | 20240504 | Watch my colleague Jonathan Larson present on GraphRAG!
GraphRAG is a research project from Microsoft exploring the use of knowledge graphs and large language models for enhanced retrieval augmented generation. It is an end-to-end system for richly understanding text-heavy datasets by combining text extraction, network analysis, LLM prompting, and summarization.
For more details on GraphRAG check out aka.ms/graphrag
Read the blogpost: https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Check out the arxiv paper: https://arxiv.org/abs/2404.16130
And follow me on other platforms so you’ll never miss out on my updates!
💌 Sign up for my free AI newsletter Chaos Theory: https://alexchao.substack.com/subscribe
🐦 Follow me on Twitter https://twitter.com/alexchaomander
📷 And Instagram! https://www.instagram.com/alexchaomander/
🎥 And TikTok! https://www.tiktok.com/@alexchaomander
👥 Connect with me on LinkedIn https://www.linkedin.com/in/alexchao56/ | 2024-12-07T21:00:43.888607 |
https://www.youtube.com/watch?v=iGQLG0bWDxE | Hello community! A very fast video on a beautiful new mechanism to combine the knowledge of LLMs with the knowledge, the structured knowledge in Knowledge Graph. And this new methodology is called GIF. Now, you know LLMs like OpenEI01 are nothing short and transformative, but today we're not just gonna do a simple knowledge retrieval from the knowledge graph. No, we will have new innovative ways how we combine here the structured knowledge and the intelligent reasoning together. Now, both methodology I'm gonna show you, maybe you know Sync On Graph, T-O-G, we have it now for months, and a brand new GIF methodology by UC Berkeley. Remarkable advancement here in the integration of LLMs and Knowledge Graph. So, let's start and let's explain here the simple one, the old one that we already know, Think on Graph. Responsible reasoning of LLMs on Knowledge Graph. Reference work we have here. Idea Research, University of Southern California, Xiamen University, Hong Kong University, Microsoft. Beautiful. Now what they do? Instead of just fetching here some simple facts and data from a knowledge graph, the LLM in this particular case could actually sink its way through the knowledge graph, exploring here different routes to take, which is beautiful, and it works like a detective gathering the clues. This is exactly what Sync On Graph does here. How does it work here? Well, it couples the LLM with a knowledge graph using here a simple beam search algorithm. So this helps you the LLM to explore multiple possible reasoning paths at once and then choosing the best one. If you want a detailed explanation of beam search, I did this here in my video where I showed you Stanford and OpenAI code and intelligence field because there they used a similar methodology. So this means the LLM doesn't just retrieve facts from the knowledge graph, it makes a dynamic decision doing so. I will show you in a minute a beautiful example and the sync on graph supports also multi-hop reasoning which is beautiful because this is exactly what we need for a more complex task. So simple example, we have a question. What is the majority political party now in the country where Canberra is located? Now, if we have only an LLM and even if you say, hey, chain of thought and whatever, let's think step by step, you know, the model will fail. If we then have here an LLM plus a knowledge graph, and we have here a Sparkle query like we do normally for our RDFs, you see here, this is the response that we get. So we retrieve here. Canberra is located in Australia. Majority party in Australia is however not found here in Sparkle. So we do not have the link that Australia has this political party called whatever. So the prompt now responds here, sorry based on my query result from the knowledge base, I cannot answer your question. Because there was no direct link in the knowledge graph available and now this beautiful new thing is you're looking for triplets related to canberra so on the graph you go here canberra is the capital of and whatever you build a sub graph if you want. And then you kind of think. So you see the most relevant one is Canberra, capital of Australia, looking for dribbles related now to Australia. And on the graph, I look here for a subgraph with a high probability. And I see here in Australia, then for example, the prime minister. Then I do another thinking here. The most relevant one is Australia Prime Minister and this is Anthony. Congratulations. So you see I go step by step thinking and building here whatever I need. And then here the system is able to think hey I know that Anthony is from the Labour Party. This is enough information collected to answer here this particular question. And finally, with Think of Graph, Think on Graph, we succeed. Isn't this beautiful? If you would like to see this in a slightly different way here, we have the same question, of course, and then we have different search depths. we have here again Canberra continent territory capital of part of whatever is our subgraph we have here then the second if object if you want Australia all the subgraph and then we have Anthony again with whatever Anthony is connected with in the knowledge graph and then having here this mighty powerful subgraph, we start to prune it. We look at the most important information, particular in order to answer here the user query. So you see, you prune the graph. Now you look only at the most important probability links, for example. If you have not enough information no problem you look now from canberra to australia you see here the network of australia you prune here and give me a probability and here the pacific query not enough information you go on with anthony and exactly like i just showed you this is just another visualization in the official publication if you would like to have this in a little bit mathematically introduced way this here is a simple introduction beam search for multi-hop reasoning path pruning and traceability and the code of course since it is now month in the making here we have a beautiful github repo this is here the link for you and you see you're just a python main free base or if you want you can go with main wiki base whatever you have you define here your query and your llms and whatever you have and it is simple the code is now after month quite stable to use for your example have fun with this but now knowing this rather month-old model let's look at a brand new model GIF and GIF stands here for a graph inspired veracity extrapolation so the truth let's have a look. While the think-on graph excels in Scenario, where a well-structured knowledge graph exists, give now tackles here a different but equally important challenge. How do we reason here in this dual system way when the knowledge graph is sparse or incomplete. I indicated this here that we have here in yellow here a partial knowledge graph, incomplete, sparse information. How can we use for example here the parametric knowledge of the LLM to build here a knowledge graph that will enable us to answer the human user query and the answer is gif we have here a new publication october 11 2024 university of pennsylvania university of california university of california berkeley uc berkeley beautiful let's have a now a more detailed look here exactly at this one so gif starts here by breaking a query into key concepts. So if the question is, is melatonin effective for insomnia, the model identifies, of course, melatonin and insomnia as key entities. And then in the internal reasoning, GIF now builds group of related concepts. And I will show you an example in a minute. So not just using the knowledge graph, but also leveraging here the parametric internal knowledge of the LLMs. So for melatonin, it might put in all the related ideas like sleep, hormones, supplements, and whatever there is in your LLM. But GIF doesn't just look for known facts. GIF can extrapolate potential relationship, what we've more called here as a hypothetical link, that could exist based on the patterns in the data. But careful, this is not a crazy hypothetical link. This is a link that exists here, but with a lower probability density in our distribution. So hypothetically means more or less not as dominant imprinted here in a knowledge graph but maybe it exists or maybe there's an indirect link and I will show you an example in a minute. And the beauty is and we know that we can improve the reasoning capacity of our systems if we generate counterfactual reasoning examples. So the system doesn't know only 100% yes, yes, yes, yes answers, but it also knows, hey, no, this is the wrong answer. This is a false augmentation. This is the wrong path to argue. So if we have the positive augmentations and the negative augmentations together, we know that our reasoning experience improves significantly. So this approach now of a guided extrapolation, and I will show you this in a minute, allows GIF now to tackle here extreme complex multi-hop reasoning tasks, even when the underlying knowledge graph is incomplete. And you might remember just some days ago, I showed you here that Harvard presents a new knowledge graph agent for medical AI systems. Check out this video. You will find a lot of similar ideas, a lot of implementations that are quite following some similar ideas. So you see it all comes down, it all converges now to a new solution, how to combine NLMs with knowledge graphs. Yeah, here you see Berkeley states here the problem here in the publication. So we have now a new question. Traumatic aortic injury, my goodness, does the anatomy of the aortic arch influence an aortic trauma severity? I have no idea, but just go here with the official example, so chain of thought. And we get here, chain of thought, lacks of internal knowledge. Prompt, let's think step by step, yeah, beautiful, but gives us the wrong answer. If we used here reg, a text-based reg with semantic similarity, you know, the cosine similarity here in the vector store, in the mathematical vector spaces here, they are beautifully semantically similar. The traumatic aortic injury and the aortic arch and the aortic trauma. Beautiful semantic correlated but absolute irrelevant information. So therefore we fail with RAG. So if we have now our sync-on- graph methodology, what do we do? Fail to retrieve here on a sparse knowledge graph. It also gives us here the not correct answer because our knowledge graph is not complete. And it's not complete to an extent that think-on graph simply fails because they are missing links the system is not able to build in a multi hop causal reasoning chain but wait we have now the new idea we have now the new methodology and you're not gonna believe it they give us here exactly this answer what a coincidence no so let me make this one here. So we have here terms. In our questions, we have terms like aortic injury. So we take this aortic injury. And then we have what goes with this here in our knowledge. Injury, poisoning, whatever. Or we have here aortic trauma. Okay. We go here. We have here aortic trauma okay we go here we have here a new group aortic trauma injury poisoning clinical attributes beautiful or we go here with anatomy anatomy abnormalities structure tissue we built here this semantic group or the last one what i missed here i missed here the aortic arch yes this one here so you see you take here all more or less all the elements that you have from the user query and the system tries to understand here the context of everything so we have here all the knowledge graph concepts and we have here one, two, three, four different concepts. And now what they do here with give, it first builds here an entity group for each query concept and then induce inner group connection using its internal knowledge. And then we will use cross group connection contained in the knowledge graph so you see what we do we build here our complete understanding and this is what i meant here with hypothetical links because those links are real but they are not in the knowledge graph for this particular domain augmentation chain so you see an autonomy is part of here and influences this and this affects this one and here we have a location of and the self function goes here is an interrelated functions and you see this system tries to build here new stepstones to be able to have a multi-hop reasoning. And if one of the stepstones, like the cell function, is missing, the system tries to find in the vast knowledge array exactly this stepstone that would connect here in a logical causal reasoning way and build this stepstone for multi-hop reasoning yeah that's if you want to have a look without my things great what made this click is here this single sentence by the authors and they say hey we introduce an additional intermediate node group by picking the multi-step reasoning plans of the LLM that are most helpful for the ultimate questions. So whatever they are missing in their sparse graph, intermediate node groups that are related to the user query, they select this from the LLM's multi-step reasoning plans. And those plans have a multitude of possible pathways forward through the knowledge array. So you see, it is not a hypothetical link, it is just a not yet chosen link here to build these new intermediate node groups. Highly sensitive to your domain knowledge, highly sensitive to the parametric knowledge of the LLM, and of course to the structural information, to the structural data representation in the knowledge graph. Yeah, for my green grasshoppers, short explanation. Now this graph-inspired veracity extrapolation, now you understand it a little bit better, is a knowledge extrapolation framework for structured reasoning of LLMs on sparse knowledge graph and give neither focusy on explicit information retrieval like our RAC nor reliesy on improving the internal reasoning ability of LLMs by a pending triggering statement to the query. No, not at all. I just showed you what we do. Knowledge extrapolation. We build new stepping stones for our multi-hop reasoning. Now it is interesting that those two frameworks, Sync on Graph and GIF, they connect somehow, no? no so first both integrate llms with knowledge graph a new innovative phrase to enhance here complex reasoning both tackle the problem of multi-hop reasoning and while sync on graph achieves this by dynamically searching with the beam through the knowledge graph give does so by extrapolating here this new relationship and filling in the gaps of incomplete knowledge in our sparse knowledge graph. Second, they both aim to reduce the hallucination. This is a common flaw still in our LIMs. TOG achieves this by ensuring traceability and correctness in the reasoning path of our knowledge graph, while GIF adds here, as I showed you, an extra layer of counterfactual reasoning, which increases the overall reasoning capacity. And both frameworks are designed with the idea here that we have smaller LLMs, like JET GPT 3.5, as the author shows, and they, with this new methodology of sync on graph and give can outperform larger models like gpt4 in specialized tasks so this is nice that here our smaller models maybe our open source models are able to outperform with this new methodology how to combine in an innovative way the LLM with the structured knowledge of a knowledge graph, this is really nice. So key insight because it shows that by integrating structured knowledge and reasoning, here we have the knowledge graph, here we have the LLM, we can dramatically improve the model performance without needing to scale up the model size. This is a nice result. But of course, you can use still here the Sync-on graph and give here together or in particular user cases. TOG is ideal when you have a rich, well-populated knowledge graph. You don't need anything else. If your knowledge graph is more or less complete for your particular query on your particular domain and fits in your complexity level, you are good to go. However, on the other hand, if you have a sparse knowledge graph, if you say, wow, the complexity of my queries by the human or by the students is so high, I better go with here a more powerful methodology, especially when your structured data are incomplete and gives us here some real beautiful results. So, both methods create flexible, powerful toolkits for enhancing the LLM reasoning. Independent, if the domain is well-defined, then you go with train on graphs, or the domain is still evolving and sparse, then you go with give. Great. This is the official end, but I want to show you something else. I found out something that amazes me. | ADD LLM TO Knowledge-Graph: NEW GIVE Method (Berkeley) | 1,168 | Discover AI | 20241015 | Graph Inspired Veracity Extrapolation (GIVE) is a novel reasoning framework designed to enhance the performance of large language models (LLMs) in knowledge-intensive tasks by integrating sparse external knowledge graphs (KGs) with the LLM's internal knowledge.
The main insight of GIVE is that even when working with incomplete or limited KGs, it's possible to improve the reasoning capabilities of LLMs by using the structure of the KG to inspire the model to infer and extrapolate potential relationships between concepts. This approach facilitates a more logical, step-by-step reasoning process akin to expert problem-solving, rather than relying solely on direct fact retrieval from dense knowledge bases.
The GIVE framework operates in several key steps. First, it prompts the LLM to decompose the query into crucial concepts and attributes, extracting key entities and relations relevant to the question. It then constructs entity groups by retrieving entities from the KG that are semantically similar to these key concepts. Within these groups, GIVE induces intra-group connections using the LLM's internal knowledge to explore relationships among similar entities. For inter-group reasoning, it identifies potential relationships between entities across different groups by considering both the relations mentioned in the query and those present in the KG.
Additionally, GIVE introduces intermediate node groups to facilitate multi-hop reasoning necessary for complex questions, effectively bridging gaps in sparse KGs. By prompting the LLM to assess and reason about these possible relationships—including counterfactual reasoning where the model considers both the presence and absence of certain relations—GIVE builds an augmented reasoning chain. This chain combines factual knowledge from the KG with extrapolated inferences from the LLM, enabling the generation of more accurate and faithful responses even when the available external knowledge is limited.
Great insights by @UCBerkeley and @penn
00:00 Integrate LLM and Knowledge Graphs
01:06 Think on Graph (ToG)
05:58 ToG GitHub code repo
06:30 GIVE Graph Inspired Veracity Extrapolation
09:33 GIVE vs Harvard Knowledge Graph Agent
10:33 Why RAG fails in Knowledge Graphs
11:40 Example of GIVE in detail
16:16 Compare ToG to GIVE
All rights w/ authors:
THINK-ON-GRAPH: DEEP AND RESPONSIBLE REASON-
ING OF LARGE LANGUAGE MODEL ON KNOWLEDGE GRAPH
https://arxiv.org/pdf/2307.07697
GIVE: STRUCTURED REASONING WITH KNOWLEDGE
GRAPH INSPIRED VERACITY EXTRAPOLATION
https://arxiv.org/pdf/2410.08475v1
Towards Trustworthy Knowledge Graph Reasoning:
An Uncertainty Aware Perspective
https://arxiv.org/pdf/2410.08985
#airesearch
#aiagents
#harvarduniversity
#berkeley
#knowledge
#llm | 2024-12-07T21:24:11.636276 |
https://www.youtube.com/watch?v=f6pUqDeMiG0 | Hello everybody, welcome to our Raghack live streams. Raghack is a free global hackathon happening from September 3rd to 16th. RAG is a technique to get a large language model to answer questions based off your data. We have more than 25 live streams teaching you how to build RAG apps with different Microsoft technologies plus 10 cash prizes for the best RAG apps in each category. So we're super excited to have you join us the best RAG apps in each category. So we're super excited to have you join us for this RAG Hack. And now a word from our friends at Reactor. Hi everyone. Thanks for joining us for the next session of our RAG Hack Intro to Graph RAG. My name is Anna. I'm an event planner for Reactor joining you from Redmond, Washington. Before we start, I do have some quick housekeeping. Please take a moment to read our code of conduct. My name is Anna. I'm an event planner for Reactor joining you from Redmond, Washington. Before we start, I do have some quick housekeeping. Please take a moment to read our code of conduct. We seek to provide a respectful environment for both our audience and presenters. While we absolutely encourage engagement in the chat, we ask that you please be mindful of your commentary, remain professional and on topic. Keep an eye on that chat we'll be dropping helpful links and checking for questions for our presenters to answer our session is being recorded it'll be available to view on demand at the youtube channel for the microsoft reactor with that i'd love to turn it over to our presenters thanks so much for joining us today. Thank you, Anna. Hi, I'm John Alexander, and with me is... Orvamody. Sorry about that. That was a little awkward. Today, we are going to talk about intro to GraphRag. And I'm going to take the first part of this and start off with a more traditional way to leverage knowledge-based retrieval augmented generation and we're going to highlight using python with um lang chain and neo4j and then apura what are you going to talk about thanks john i'm going to talk about microsoft's implementation of Graph Rag, the research that we've done within our MSR org, and how you can use that via a Python package or via an Azure Solution Accelerator. Very excited to share. That's great. And so we're going to jump in now and talk about what's next. So I'm going to start with some graph fundamentals, and then we're going to get into how we query the graph and walk along it using a language called Cypher from Neo4j. And that language really is kind of akin to ASCII art, the way it displays the different ways that we relate to entities entities things like that so you might find that interesting then we're going to jump into a little bit more about specifically how lang chain and neo4j work together and some of the extensions that neo4j has created to work with these with laying chain and then aprva is going to take you on a magical ride to Microsoft Graph Rag and how that can just change your life for the better. Is there anything else you want to add to that or we good to go? Let's get started. Let's rock it out. All right. So when we think about, let's go here, let's talk about graph fundamentals. And so, you know, graph retrieval augmented generation is gaining momentum, and it's really becoming a powerful addition to traditional vector search retrieval methods. And you know, this approach leverages the structured nature of graph databases, which organize data as we'll see as nodes and relationships. And this helps to really build in the depth and contextuality of retrieved information. And so let's start off with nodes and talk about those a little bit. And when we're thinking about this, these describe entities, really nouns, person, place, thing, the discrete objects of a domain. we're going to use tom hanks a little known actor and we're our first example is going to be using a graph movie database which is kind of cool now when we're working with nodes they have zero or more labels to define what kind of nodes they are. Tom is a person and an actor. He could be a director, you know, whatever that is, it's what kind of nodes they are. And then they can also have in Neo4j properties, which are key value pairs. In this case, there's a name property. And then when he was born. So in it, so nodes aren't very powerful by themselves. We need relationships, and everybody needs relationships, right? So relationships connect a source node and a target node, and they have a direction, one direction. And they have, actually, the direction is the ingoing, the outgoing, not the type there. But they have one direction, they're ingoing or outgoing. And then they have a type. In this case, the type is Tom Hanks acted in a movie. And they have properties. In this case, the property for the relationship is the role for that. And so the relationship defines, well, this defines how those two nodes work together or how they're related. And so if we combine this together this is a graph database and it uses these graph structures these nodes and relationships and edges and properties to represent and store data and when we think about complex relationships and connections it really gives us a way to express that a little bit better or actually better than traditional relational databases and this is a screenshot of um a tool called bloom in neo4j that allows you to see the graph itself. So really it's a knowledge base that has everything. The model is a graph structured model that represents and operates on data that way. Or we can also think of it as our entities are interconnected and then defined by a schema. And so the power of knowledge graphs gives us the ability to capture those conflicts or relationships and concepts that we've talked about to give us more semantic meaning about what's going on. And it makes it easy-ish to combine multiple data sources because it's schema-less. And then the query language that we have, Cypher, as we'll talk about, is specifically created to help us find the patterns inside the knowledge graph. And really, there's a term that helps us, and it's called ontology, and it provides the foundation, foundational blueprint that guides how everything is represented in the knowledge graph, how the entities and the relationships are pulled together. And so now let's take a look at this. I'm going to grab a quick drink here. And we're going to start with the cipher and kind of look at the graph and kind of see some query types, things like that. So I'm going to drop out of the slides real quick. And so I've put some prereqs in here. There's also a GitHub repo. The link to it is in the resource part of the slides, which will be available. But we're just going to be working with what's called the default movies database in Neo4j. And I'm going to do it locally, but you could use a cloud instance to do that. We've loaded up our libraries here. And let's talk a moment about what Cypher actually is. It's a query language specifically for climbing, walking, traversing the graph, and it really, it's similar to SQL, but it's designed for graph databases, and it, like I said before, it's really designed to express those graph patterns that we can then use to really kind of tease out what we want to find in terms of the answers to the questions about our data. So the first thing we'll do is connect to the graph database, and we do that using the Langchain Neo4 Graph Connector. And now with Cypher, notice that we have a couple of keywords here that we're using. In this case, I want to match all the nodes in the graph. I want to grab those and return those. And so let's run that. And we can see through our query of our graph that we have 169 nodes. Well, that's okay. But now we want to match certain entities using the node label, find all the movies, everything that represents as a movie, not as an actor or anything like that, and return the count of those movies as the number of movies there. So we can see that we have 38 movies in there. And then we can also take a look at people in there. And we have a number of people in there, actors, directors, writers, things like that. And now we can also match specific entities based on their properties. Look for this property inside of an entity and we'll use a friendly name in this case tom to return that and so we can see here we have our entity of person as tom and our properties there So we're walking through that graph. So we can match a single movie using the title property, kind of the same way there. Tom Hanks acted in Apollo 13, but we don't know that yet. So we can also then start doing things with ranges, just like you would in any other data, but it's kind of good to see how we would do that there i'm matching all the movies and i'm calling it 90s where they were released between 1990 and 2000 so let's see that query and notice that we're returning the 90s title there, we could also say, and let's rerun that. And so we can see when things were run there and all the 90s movies, you've probably seen some of them. Now we can also do something interesting and here we're showing actually the relationship where we're wanting to say look for this person with this property and we want to find out all the relationships acted called acted in which is our type and what is um the movie what is the what is the appropriate movie where the person acted in that movie with tom hanks and so we'll see that n Lane, Jim Broadbent, all sorts of stuff there. Julia Roberts, Helen Hunt. Okay. Now here's even something even more that we can use to say traverse how far out in the graph we want to go with this range here. And you may have heard of the game called Six Degrees with Kevin Bacon. And basically the thought is Kevin Bacon has been in so many movies, you can pretty much find anyone within six hops of him. Well, in order to save time, I'm going to just use three hops. And I pulled this from the NeoGraph examples repo, which we've also linked to, which is a good place to get started if you want to know more. So we can see here, and they've basically also got a mechanism in here called Hollywood, that anybody that's within three hops, whether it's movie or actor, that have worked with Kevin Bacon, we can see that, and we can then use that as well. So it brings all of that back okay so what we've seen here is that we can use cipher the query language to be able to traverse the graph and to also use start using lang chain to be able to well actually um to be able to do that okay and this once again this notebook is in in the repo and also there are several links to web pages or web resources including the the repo but then also i've got or we've put in the link to a little bit more about cypher and walking through that and the more about the graph database concepts because we've you know only got a limited amount of time but all sorts of different things here and then we've also got um relevant links for microsoft implementation of graph reg as well, let's keep going. One of the other things that is a great resource is in the Medium blog, in the Neo4j developer blog, Tomas Frontag, I probably said that wrong, has a great, really great series of blog posts and then also some really interesting datasets and notebooks that you can take advantage of and the one that we're gonna be working through I've kind of modified but we're gonna take an example of that and walk through line chain and neo4j to just kind of see what's going on there so let's keep going. But these are some really good resources for further learning that are in the links. I just wanted to touch base with that. Okay. So one thing I do want to show is this is Neo4j desktop real quick. And we've got our example project here and this is our movie dbms and if we want to look at the link itself there are several apps that we can use here including the browser where we can interact just on the web and put cypher in there directly the one i want to take a look at right now though is called neo4j bloom and what that allows you to do is view the graph itself and then you can also work with it and play with it and you can also edit it right online as well and here it comes Let's pull that up. And now I'm going to go ahead and see our graph here. And notice we can see, like, for example, Keanu Reeves, how he's acted in the matrix reloaded, the replacements, all sorts of different things there. We can hop around. We can see when it comes to the matrix, the Wachowskis. I've said that. Oops. There we go. There we go. Okay. Ah. Here we go so we've got a few good men here we've got all sorts of different things in here the matrix there we go and you can see that there are relationships that are going in both directions some are going to movies acted in directed produced and so we've got a lot of different relationships that we can work with and that's a little bit about bloom and the example project in neo4j desktop i also have links for installing this either in the cloud or or on the desktop and then how to get started with it as well okay so let's go back over to the slides and let me start a different database real quick as well. Let's start up the slides. Here we go. And so one of the things to think about, let's go back to the other slide. There we go. All right. Is the knowledge graph, and this is really, I'm just going to go through this quickly. It's for more reading. If you want to come back to it, I thought I'd put it in here. It's a little wordy, but we have we have concepts categories or types of information within the domain you know healthcare if that's the case we might have disease symptom medication patient and specific entities that belong to a class might be diabetes might be an instance of the class disease aspirin might be a class instance of the class medication we have properties and attributes for example in medication has dosage and then we have the relationship and they could be either hierarchical relationships like is a or more specific association like treats headache or symptoms or things like that. And then we also have constraints that can help us bring things back like something distinct or every, and it also, excuse me, it also allows us to set rules. For example, like every medication must have a dosage so we can set that in there. Okay. So when we think about Langchain and Neo4j, we've got some interesting additions to the capabilities of Langchain. And you've probably seen all about Langchain in various sessions that you've gone through during Raghack. And so now we've already seen that we have a database connector that we can work with our graph with. We have a graph document that allows us to have the types and relationships or entities and relationships in there and in the form of a graph. Then we also have the new graph transformer, which allows you to transform documents into graph-based documents using a large language model. And we'll see that as well. And then Neo4j allows you to do graph-based keyword searches and allows you to do vector searches on the same data as well. And so we're going to take a look at the vector store, the Langtang Neo4j vector store, but it also requires the Neo4j Python package, which I've included in the requirements file on the repo as well. Okay. So let's jump into this. Come in here. And notice here we've got our Neo4j database. We've also got a utilities Python file here for configuring And then connect to our Neo4j driver. And then we'll go ahead and load our libraries. And then connect to our Neo4j driver. And then we'll go ahead and load our libraries. And then connect to our Neo4j driver. And then connect to our Neo4j driver. And then connect to our Neo4j driver. So we'll go ahead and load our libraries. And then connect to our Neo4j database. And then we can, the way this works is we can, through an empty database, we can use the Wikipedia loader from Langchain to go out and grab raw documents, much like we would with any other way we're using data within our retrieval augmented generation application. And then we wanted to find our chunking strategy, split the documents into chunks based on that chunking strategy. Now, this is where it changes. We're going to construct a graph based on those retrieved documents. And we're using the LLM graph transformer to do that. There's also an online version of this that you can just drop your documents in on Neo4j as well. But this allows you to use the same type of thing right from within LangChain and so basically we use we set up our LLM here and then we create our LLM graph transformer and then we use it to convert the documents that we just chunked out to graph documents and through this then we then add those documents to the graph through the add graph documents method. And the base entity label assigns an additional entity label to each node. So we have further metadata about it. And then the include source parameter, it links the nodes back to their originating documents. So also right now the LLM graph transformer only supports open AI and Mistra function calling models. So keep that in mind. Now, once we do that, we can then inspect that graph with a library called y files and it's a neat little library you may have used it before we can show the graph there and we are looking for 50 mentions of our relationships between people and movies so let's run that okay it's live radio so that's always good let's make sure this loaded I forgot to actually run that, so that's okay. Let's take a moment to do that and what i used to tell um clients when we were working with things the slower an application is the more powerful it is so now we have our documents defined defined there. Create the graph based on that and take a look at it. Oops, I just ran that again. Okay. Should be okay. And we can see our graph here and we can see all the relationships from Queen Elizabeth we pulled out a Wikipedia and it automatically created the relationships, pulled out the entities on many different events and people that were around Elizabeth. So now that we've created the graph, let's go ahead and work with, see some more sophisticated things with Neo4j. And in this case, we're going to use hybrid retrieval. And so when the user asked a question, our rag retriever is going to use both keyword and vector searches to search unstructured text and combine it with knowledge graph information and then that collected data is given to the llm as the new context and it's going to generate that answer so let's take a look at that and so the way we can create the vector index the vector index we can use the from existing graph method and that's going to add both of our keyword and vector retrieval to our documents and then we can use it for a hybrid retrieval approach run that and then we have a graph retriever here that we're using with a query and we're going to use that to identify relevant entities in the input because it's important to be able to pull those out to then extract that and then create the relationships from and so we're using the lang chain um expression language with the with structured out method to achieve this and so in this case we're creating a class called entities and then a description there and our chat prompt template telling it to extract organization person entities from the text use the given format to extract information from the following question okay let's run that that. And then now we're going to test it. And I know that Amelia Earhart's in there, but really what we're just trying to do is to make sure that in the question, we can use the retriever to detect entities. So let's do that. And we can see that, yes, it detected the entities. So now that we can detect entities in the question, we're going to use a full-text index to map those entities to the knowledge graph. And so we're going to define a full-text index first, and then a function that's going to generate full-text queries that also will allow some misspelling and then once we have that our structured retriever is going to collect the neighborhood of entities mentioned in the question and so we're grabbing our query nodes we're gonna say with node we're gonna match those with the neighbor and then pull that information back we're gonna limit that to 50 so let's go ahead and do that and then we can use that to print out our entities and then retrieve the neighborhood of relevancy there. So we can see Elizabeth had relationships with Lady Jane Grey, James VI of Scotland, Mary I, member of House of Tudor, and then also Queen Elizabeth was a symbolic object, and courtiers commissioned paintings for Queen Elizabeth things like that so we can see those relationships they're all structured there and now we're going to take these these two retrievers and create a final context that we're going to pass to our llm and we can see here here's our search query there and our structured data for our structured retriever our unstructured data and then our document we're just going to go ahead and concatenate that and join it and then now we've we have ourG retrieval component for the hybrid search. And then we're going to go ahead and do a more advanced RAG technique, which is query rewriting. And notice here that in our template, we're saying, given the following conversation and a follow-up question, rephrase the follow-up question to be a standalone in its original language so we have our chat history here our follow-up input and then we're using that as a constant called condense question prompt here we're setting up our chat history and then we're setting up our search quiz our search query here and checking to see if it includes chat history. We're going to condense that with the follow-up question. So now, once we've done that, we want to introduce a prompt that leverages the context provided by that integrated hybrid retriever we just created to produce our response to complete our implementation of our rag chain. So got a quick prompt in here, use natural language and be concise. If you don't know the answer, you can say, I don't know. And we're going to run that through our chain. And so let's go ahead, pull that in, and then which house did Elizabeth belong to? Elizabeth first belonged to the house of Tudor. Now that put some history in our chat history, and let's see if it will rewrite this query and get the context from our chat history and let's see if it will rewrite this query and get the context from our chat history so we did that it did our query rewrite and it shows that elizabeth was born on the 7th of september in 1533. so what we've seen here is a way to use neo4j and lang chain to not only use traditional vector search capabilities within rag but also use full keyword searches um to be able to transverse the graph that way combine that and then send that to the llm and bring back the information, the pertinent information. Okay. So let's go back to the slides real quick. And there we go. And now it's time to take it to the next level with Microsoft's implementation of GraphRag. And so do you want me to run the slides or do you? Yeah, so we can just talk about this real quickly. So and thank you, John, for that awesome background. I think for folks who aren't that familiar with knowledge graphs, that's a great way to get started. And the key that we want to kind of take home here is the relationships matter, right? In life and in data, right? So knowledge graphs gives us this ability to get insights and reasoning through relationships. So when we talk about graph rag and knowledge graphs for rag, there are a few things that we see as distinctions between that and baseline right so if we think about baseline rag it's a great way for an llm to be able to go out and find information typically on specific questions and bring it back to reason over and give you an answer, right? And it's using vector similarity search. So you have your corpus of data. It gets vectorized. The embeddings happen. The chunkings happen. And usually it's just a semantic similarity search that's coming back and giving you some answers. So if you had three pictures of cats and one is a live cat and one is Garfield and one is an abstract Pablo Picasso painting of a cat, before we had semantic similarity search, before we had the ability to have LLMs identify these sorts of things, they might not have come back as if you did a search of tell me all about, or show me different pictures of cats. What we're doing with knowledge graphs is now taking that to another level and basically saying relationships between things. So you can tell me what three different cats look like, but what if I have this enormous corpus of data and there's all these interrelated portions and entities that John talked about and the relationships between them. What if I want to find and have reasoning over higher meaning in that, themes in that data, large topics, narratives in that data. That's something that even semantic similarity can't do. Semantic similarity, if you ask what the top five themes of this data are, it's going to look for something that sits in that vector space, but it's not going to be able to find an answer. That's where GraphRAG and Knowledge Graph or RAG kind of come in. So now I'll switch over to my screen here. Okay. And I'd like to give you a little bit of background. So Microsoft GraphRag is a project that was started out of our Microsoft Research Organization by a group of very smart researchers who deal in knowledge graphs every day. And basically it was to answer this question of, well, if traditional RAG is not giving me the answers that I want, and I have these relationships in my data, how do I get reasoning over that? So unlike traditional RAG, what's happening in graph RAG is this. The corpus that you're interested in reasoning over is pre-processed. And by pre-processing, I mean it is sent to an LLM, and an LLM looks over that entire corpus of data and given some instructions, it's able to create the knowledge graph for that data. Okay, so, and we're going to go through an example of this, but basically what's happening is the LLM is very smart, right? And it's able to look at this entire corpus of data and based on what you as the humans want it to extract it can say i'm going to extract these entities i'm going to extract these relationships i'm going to extract these nodes i'm going to create a knowledge graph all by myself you don't need to do this human let me take care of the hard work and based on that i'm going to be able to give you the ability to query on that knowledge graph and get this higher level reasoning that we talked about, right? And that higher level reasoning can be the things that we discussed in terms of thematic sorts of things, looking at much bigger picture sorts of questions that even semantic similarity can't bring back to us. So you'll, we'll have a can't bring back to us. So you'll have a number of resources that are available. So you'll be able to get to these websites that we show here. Let's switch over and let me run a little bit of a demo here. So everybody can see my screen. What I'm going to do is I'm here on my on my local PC. And nothing special about it. It's it's just running Windows Subsystems for Linux. So this is this is Ubuntu now and there's a couple prerequisites and I hope they're relatively easy ones that everybody has. You need to have Python. 3.1.0 or higher and you need to have some access to an LLM. In my case, I'm using Azure OpenAI. You can also use OpenAI. Once you have those two things and your laptop here, then we're good to go. So the first thing we're going to do is we are going to create a virtual environment just so I can show you like how we would run through this entire thing, right? So I will create this virtual environment. And once it happens, we will activate it. Now we are in our virtual environment and real simply we are going to pip install GraphRack. And you will see it is going up, collecting dependencies, installing all of this. And you will see there is a number of interesting modules that are working on the back end here. Some are data science modules, some are visualization ones, some are customized modules. But GraphRag is an end to end process. So the indexing part of this, the knowledge graph creation and the querying are all included in this one big package. And truthfully, that's a misnomer because it's not that big. It's a very portable sort of package. So we're gonna let all of this install. And once that's complete, give it another minute or two here. We'll get started. So while this is completing, I'll tell you a few things about the models that we use within here. So typically, the types of models that we use are GPT-4 class models, right? We've tried different models and things like that, but the best ones have been the GPT-4 class models and in particular we're going to be using GPT-4-O-Mini, which is just as fast as the big GPT-4 models, but much cheaper, somewhat more efficient, but it reduces your token cost by quite a bit. So we're done with our installation here. We're going to make a directory here, GraphRack, that already exists. So we'll make a directory called GraphRack2. Okay, so let's go into that directory. And the first thing I'm going to do to initialize this is just give it a simple command. And this is what's happening for me to initialize my graph rag installation. Okay. So you see I'm using graph.index in it and my root directory is right here where we are. And then in a minute here, what it will do is tell me that it has initialized this in the local directory. Alright, so initialized project, if I do a quick LS, I see it's created a prompts directory and a settings.yaml. I need to copy a different settings.yaml in here and let's take a look at what's in that file. Okay, so at the very top of this file, I have my pointer to an API key right when I don't expose an API key I use my API key actually my Azure context because I'm logged into Azure with my account and it can pull the API key without me showing anything here I'm using Azure open AI chat and the model I'm using is GPT for a mini if we looked further into this file, you'd see an embeddings model, text embedding small. You'd see some other things as well, but there is definitely some flexibility here. So I have my settings set. Now I need to initialize some environment variables. So these environment variables are telling me, okay, where is your OpenAI base? What API version are you using? Your deployments, et cetera, right? And my key is blank there, which is fine. So now what we want to do is get some data in here. OK, so I'm going to make an input directory, which GraphRack typically looks at to look for files. And we're going to bring in some data in here. And what we're going to do is bring in a book called, A Christmas Carol. So for those of you who are familiar with Charles Dickens, classic English author, he wrote a book called A Christmas Carol. It's about the main character, Ebenezer Scrooge, who's a little bit stingy, gets visited by some ghosts, has a big change of heart, and becomes a very good person at the end. Okay, so there's lots of interesting things going on in here. So we've brought this in here. So if we kind of look at this, we're going to take a look at what our encoding is. So GraphRack is best on narrative data and typically in UTF-8 text format. So if you have narrative data in UTF-8 text format, this is kind of what you want it to look like. And if we were to look at, if we were to kind of cat this whole book out, let's just take a quick peek there. You can see this is the entire, you know, well, we're seeing a lot of Project Gutenberg stuff here, but you can see this is A Christmas Carol, right? The entire book in its entirety because it is in the public domain. All right. So now what we're going to do is we have our data that we're going to look at. Now we want GraphRack to look at that data and to start going over with the LLM. The LLM is going to look at this book and it's going to start extracting stuff. So I'm using this graphrag.promptune and I'm using this this this flag called no entity types. So what I'm doing is letting graphrag do all the work. The LLM is going to do all the work for me here. It's going to extract a whole bunch of stuff through here. So let's let that run. Okay. So we see here it's generating domain. Let's see what it comes back with. Literature and social commentary. Wonderful. So if this came back and it told me the generated domain was finance and accounting, that doesn't really make sense for Christmas Carol, right? But LLMs, like we said, are smart. They're good at looking at text. Our research team did a great job of making this very easy to extract things from there. So now we have written a bunch of this information out into another directory here. So let's take a quick look. We have this prompts directory directory so if we look at prompt we see some interesting things in there right and what we're going to do real quick is look at this these summarized descriptions right and you'll see that like a lot of this is um you know looking at the way the prompt that the LLM use. You're an expert sociologist specializing in literature and social commentary, analyzing community dynamics. This is going to look at this entire book in this sort of way, and it's going to create a whole bunch of, the knowledge graph gets extracted and created, it's going to be based on all this, right? So, so far this has been very, very topical to our domain here, which is literature. So the next thing I'm going to do is let GraphRag run through and index my entire document here, right? And this is, again, another interesting part. We're going to see some cool stuff happen here. So you can see all these chunks that are, as the vectorization and embeddings happen, all these chunks are being extracted. We have these base text units and extracted entities that are happening. This is all happening right now on my laptop. I can't even open 10 tabs on my laptop, and I'm running an entire LLM extraction and knowledge graph creation on here, right? So as we go through, we see that this is very, very, it's starting to summarize descriptions. A lot of this is part of the knowledge graph creation and then taking those chunks and entities and putting them in certain areas. And you can see we have a lot of, again, interesting stuff going on. If I scroll up here, you can see we have some things that are the relationships that are being created here based on the things in the text. And we're going to let this go through, create its community reports. And again, these community reports are artifacts that are created alongside our K files, which the index actually sit in, and different sorts of vector entities and things like that, right? Currently, this is all sitting on, like I said, my laptop, right? All workflows completed successfully. So all of a I said, my laptop, right? All workflows completed successfully. So all of a sudden we're done, right? This entire book, let's ask it a question. So we'll ask it a pretty simple question to start, right? And this is something that semantic similarity might come back with, but let's see what happens, okay? So what's happening here is I'm querying using something called the global scope of query and it's sending this question to the LLM and based on the indexes and the other artifacts that I have right here on my laptop, it's going to answer this question. OK, so let's see what it came back with. So key relationships, the Cratchit family. So Bob Cratchit and his family, Jacob Marley, who was one of the ghosts as deceased business partner, the spirit of Christmas. This seems like a lot more of a in-depth and contextual answer than you would get from traditional rag, right? Let's ask a weirder question, right? Let's ask, what are the most, most odd occurrences in the story. Now there's no way that semantic similarity is going to do that unless there's a paragraph or section that says odd occurrences. Our smart LLM is saying, you want me to bring back odd occurrences. I know what odd is, I know what that means in semantic space. Based on the indexing that I've done and what I've extracted from this graph, this is what I want you to bring back. Right. So let's look at it like odd occurrences in the story. Several odd occurrences that challenge traditional expectations and highlight deeper themes. Okay. Supernatural elements. There's probably nothing that says supernatural in the book. I'm not sure, but I'm sure it's not said that much. Symbolization and characterization. I mean, geez, this thing is practically writing a book report for you here, right? Absolutely. Here we have these the data reports. So this is telling you what chunks it used and the attributions that it used to come back with. So as you can see here in, you know, the period of four or five minutes, we went through an entire exercise of getting, you know, setting up graph reg, getting a data set, an entire book, having the LLM run through that, creating a large graph, and then querying it. So there's some huge opportunities here. I showed you a very small example. There is also a much bigger implementation of this, and we have an Azure Solution Accelerator that's associated with this as well. So if you need to do this at scale, the good folks in our SMT org, which is strategic missions and technologies, have done this and they've worked with a number of scientific and research customers. They've run work with intelligence agencies. Imagine being an intelligence analyst or a researcher and you have all this data out there, right? Like tons of papers, tons of different news articles and things like that. And being able to look at that, consolidate that and bring that back into something. And we can do that through our solution accelerator. And again, you'll get all of the links for this. I'll just show you a little bit of kind of, we have notebooks that are associated with this that you can see, and we're running a little low on time, so I probably won't go through this entire step through here because you've already seen it. But imagine this at scale. If you're part of an enterprise organization, you have pedified worth of data, right? right anybody. Do data discovery on on research or even kind of within your corporate history and things like that. There's huge opportunities here, right? And this is where the solution accelerator comes into play to really make it something that's a little bit bigger. We use things like Azure Kubernetes service on the back end, so it's a little bit more horsepower. We use Azure AI Search as our vector embedding store, so you can use AI Search as a high horsepower embeddings engine. But just like John showed, we also create a knowledge graph that comes out of this. I'm sorry, this is a little bit messy. It's not going to be as cool, but we can kind of see in here, we see some interesting things, right? And it might be a little bit difficult to read, but we see, let me see if I can bump this up a little bit. The green place, mansion house. In the middle, it'll be all the characters like Ebenezer Scrooge and stuff like that. And outside the graph, we see that it's still picked up some other things, right? Project Gutenberg, so all the legal disclaimers, all the other things were picked up. But that's outside of our main graph, right? So we know that this graph construction, it was smart, right? It kind of knew the difference between those two. So it's very interesting in that regard um so i think we have five minutes left here so we can i don't know if we do a do we do do we do live qa here do we answer questions do you want to wrap this up in some sort of nice way john sure um i'm actually if you can talk for just a moment i've created an aka ms link and I'm trying to get it to the proper authorities, which has the repo and the slides in it. Yeah. So let me talk a little bit more and maybe what I can do is show a little bit of what my Azure subscription looks like and some of the things that get spun up. So again, as I said, we have the easy one that we showed, right? The Python implementation that you can run on any random laptop. And then we have our bigger one here, which is my Azure deployment, which has an API front end. It has AKS. It uses Cosmos DB and storage AI search. It uses managed identities. If this is something that you want to play with again all available in our open source repositories um you'll be able to hit the graph rag uh repo and see both the python and the azure solution accelerator portions of this and we welcome you to try those and we would love to get feedback from you you'll see a email alias on when when you get the links for the graph rack site aka ms forward slash graph rag you'll also be able to add issues to the github issues on on each of these repositories right hey um we have a live question if you want it. Would you recommend data from different domains and input during the graph indexing process? Yeah, so actually to get started, we recommend having data in a single domain, right? So when we think about knowledge graph creation and these relationships that get created, what we don't want is a noisy graph. And a noisy graph is one that has information that might not be relevant to this particular domain or contextual to the questions that you want to ask over it. So GraphRank does have this idea of different communities and different layers within the graph. But to start, it's always good to start in a single domain. So let's say if we look at a corporate or enterprise sort of perspective, look at finance, look at marketing, look at those separately. Look at your research separately. Look at those separately to start. Start small. So start with 50 to 100 megabytes create a graph see what that graph looks like you can use the auto prompt tuning you can use manual prompt tuning i would start with auto once you're able to iterate get bigger add more stuff index again but start small see how it works out for you iterate over that and then go from there. And really quickly, Ashish asked, what if there are images in the PDF book? Are we only indexing text? Or can we handle that? Great question. So currently, we have a roadmap item to implement multimodality. So right now, we really work best with narrative sorts of text. JSON is supported as well. CSV for sure. But when we talk about images and video and other things like that, it's possible, but typically, at least for now, we have to kind of transcribe those over. Multimodality is coming on the way. We've seen some great improvements to the llms they're able to do a lot more with with multimodality right gpt4o is a great example of that um so i would say stay tuned right um follow the repo you'll see updates that come there and we'll we'd like to move to more structured learning more structured understanding and getting getting uh more on that roadmap there okay um can we switch over to my screen real fast so this is the repo for intro to graph rag and this was put the there's an aka link in the in the chat and basically you'll have you have the notebooks there um the pdf of the slides and with all the links at the bottom but then also um and a readme i have a little setup here recommended order of notebooks and then all the resources there as well. And so thank you all very much for your questions. And I'll stay on a little bit longer. There are a couple of questions to still answer. So I'll stay on a little bit longer behind the scenes and answer questions. Thank you so much, everyone, for your time today. Yeah, same here. Thanks, everyone. Hey, everyone. Hope you enjoyed today's session. If you aren't registered yet for RAGHACK, you can register now at aka.ms slash raghack slash register, and you can introduce yourself on our forum. You can start hacking either by looking at one of our code samples or just starting from scratch. If you have any questions along the way, feel free to post them in our GitHub discussion forum and you can also attend live office hours in our Discord. Once your project is ready, make sure to submit it before September 16th at 11.59pm PT to be eligible for our cash prizes. And of course, make sure to join our upcoming live streams to learn more about RAG. Thanks for joining. Thank you, everyone, for joining. Thank you to our speakers. To learn more about this series, watch past episodes and register for future sessions. You can follow the link at the bottom of the screen or in the chat. We're always looking to improve our sessions and your experience. If you have any feedback for us, we would love to hear what you have to say. You can also find that link on the screen or in the chat. Today's event code is 23432. And we'll see you at the next one. | Intro to GraphRAG | 3,665 | Microsoft Reactor | 20240907 | In this presentation, you'll gain an understanding of GraphRAG, a technique that enhances document analysis and question-and-answer performance by leveraging large language models (LLMs) to create knowledge graphs. We'll explore how GraphRAG builds upon Retrieval-Augmented Generation (RAG) by using knowledge graphs instead of vector similarity for retrieval. You'll learn how to set up the environment, prepare data, and implement GraphRAG using LangChain, with practical code examples. Additionally, we'll explore some advanced features and customization options available in LangChain to optimize and tailor GraphRAG to your specific needs.
Presented by John Alexander, Content Developer and Apurva Mody, Principal Program Manager
** Part of RAGHack, a free global hackathon to develop RAG applications. Join at https://aka.ms/raghack **
**📌 [Check out the RAGHack 2024 series here!](https://aka.ms/RAGHack2024)**
#MicrosoftReactor #RAGHack
[eventID:23432] | 2024-12-07T21:31:04.519513 |