How AI Avatars Engage & Drive Sales Conversion with getitAI
SPEAKERS

Michael is a product leader with a decade of experience building, designing, and shipping Digital Assistant and Avatar products in Silicon Valley. Having cut his teeth at Google, Meta, and Samsung, he is a Webby Award-winning designer, holds half a dozen patents, and has a successful track record leading teams building products on the cutting edge of innovation. With degrees from UC Berkeley, Michael has a strong connection to the best of California’s innovation.
getitAI is revolutionizing the way brands sell online with AI-powered video sessions that merge TikTok-level engagement with personal shopping magic—converting 10x better than any old-school landing page. Founded by an exited entrepreneur, the engineer behind Django CMS, and ex-FAANG AI vets, getitAI is on a mission to make every DTC brand unstoppable. Their clients include a wide range of innovative brands such as True Classic, obvi, Positive Grid, LYFEfuel and more.



SUMMARY
Discover how to unlock the power of HeyGen’s Interactive Avatars in this eye-opening demo featuring special guest Michael Greenberg from getitAI. Watch and learn how his team is designing immersive, high-converting custom retail journeys with HeyGen. In his live demo, Michael builds a conversational shopping assistant using getitAI’s platform, powered by HeyGen avatars, to simulate real-time product recommendations for brands like True Classic — driving conversions, scaling personalization, and reshaping online sales beyond the limits of traditional marketing.
TRANSCRIPT
Welcome from Wayne Liang (Co-Founder and Chief Innovation Officer, HeyGen) 0:03 [Music] 0:12 we have a packed agenda today um I'll get into that shortly but for now I want to just welcome our co-founder and chief 0:18 innovation officer Wayne Liang to the stage to give you all a warm welcome from 0:26 HeyGen oh no okay great thanks Allyson hi everyone i'm 0:33 Wayne i'm a co-founder of HeyGen it's really great to see so many faces some new some we know from before 0:39 to our uh second community event and this is going to be a great event we 0:44 have Michael from getitAI who has been revolutionizing you know e-commerce 0:51 with our technology we would love to hear him talk about how this has been very impactful to their business 0:57 and we have Eddie who is our tech lead for Interactive Avatar we just came 1:03 back from South by Southwest where we were one of the finalists for the innovation award 1:09 and have been showing our Interactive Avatar to a you know global audience there and it 1:14 was great and this product has actually been worked on for over two years by 1:20 our team and uh we've seen lots of progress we've seen lots of customers 1:26 using it deploying it into production with their users we believe it's one of the futures we are 1:32 bringing to the business world with our technology so very excited to be here 1:37 with everyone and happy to answer any questions to discuss anything but let's 1:43 look forward to the next few um sessions coming up thank you yay Intro & agenda with Allyson Ortega Toy (Community & Education Lead, HeyGen) 1:56 today video is everything you need speed creativity and consistency with HeyGen you can 2:03 create the most lifelike avatar videos 10 times faster than traditional production scale across 175 different 2:11 languages looks and platforms all in 4K your clients want more more videos more 2:18 engagement and more results HeyGen gives you the tools to deliver need personalized
videos that feel custom? 2:24 done launch interactive engaging campaigns that deliver more for your clients and do it all without blowing 2:31 the budget no expensive studios no costly talent just beautiful effective 2:36 videos that drive results and here's a secret we used HeyGen to create me 2:42 join the future of storytelling 2:48 okay oops all right guys so this is what we have on the schedule for today so we 2:54 just heard from Wayne um and next up we're going to hear from my esteemed colleague Eddie Kim HeyGen's 3:00 Interactive Avatar technical lead who's going to give us some product updates as well as the latest and greatest on 3:05 Interactive Avatar and then we will go to the main event which is a demo and conversation with getitAI co-founder 3:12 Michael Greenberg right here um where we'll dive into his unique journey and just show you all how he and the team 3:19 leverage Interactive Avatar to scale personalized sales experiences for clients of all kinds and then afterwards 3:25 we're going to get to your questions i know you guys have a lot of them some of you have already asked them we're going to do our best to answer as many as we 3:31 can um and then at the very end we'll just leave some time for everybody to hang out please eat our food please 3:38 drink the drinks they won't drink themselves and before I pass it to Eddie lastly I just want to say um beyond 3:45 today's event please continue connecting with us in the HeyGen community hub which is 3:51 community.heygen.com keep connecting sharing ideas and learning um you can visit our forums where you can join beta 3:58 testing groups share your work uh post feature requests you can head to the events tab to RSVP for 4:05 events like these join our webinars and whatnot and also don't miss our resources tab um where you can get a 4:11 great look at HeyGen use cases and step-by-step guidance on how to use HeyGen's many features and with that I'll 4:17 hand it over to Eddie HeyGen's Interactive
Avatar technical lead for a run-through go Eddie HeyGen Product Updates & Interactive Avatar Updates with Eddie Kim (Interactive Avatar Technical Lead, HeyGen) 4:23 hello hello cool let's do some general product updates um what's the latest uh we are 4:32 adding motion in Studio so within our Studio if you guys are familiar with our editor um you can now add motion to 4:40 those and preview those very quickly we have voice recommendations 4:45 for our photo avatar uh so as you uh create your avatars you can add a voice 4:51 and it'll kind of recommend uh the best voices for those uh based on your prompt 4:56 and it turns out voice is a critical um component to the visual quality of your 5:03 avatars we all have a bias about uh how someone looks and how they sound and so 5:08 this will kind of make that process a lot easier and we have PDF to video so if 5:16 you guys are tired of making presentations um upload your PDF we'll make it for 5:24 you so um if you're not familiar uh what is Interactive Avatar Interactive Avatar use cases 5:31 uh if you are kind of keeping tabs on the space right now the big rage right 5:37 now is conversational AI so if you've talked to your ChatGPT on voice mode it 5:43 is amazing um it's like the movie Her it's it's already here um and when 5:49 you're calling your customer service you're actually probably talking to an AI 5:54 agent at the end but what's next after 5:59 that digital humans that you can talk to in real time and see at the same time so 6:07 uh like Wayne said we've been working on this for the last two years and uh our interactive avatars can 6:15 be clones of yourself or you can use um our public uh avatars as well and we've 6:21 come across many uh use cases that our customers have been using it in um I've 6:27 highlighted just a few here and uh some of them will be customer support we have a few clients in Japan live using this 6:34 in uh Japan for their customer support uh we have uh AI Alec Alec call out
to 6:41 you uh he's one of the uh top salesmen at our company he sells 6:47 himself it's uh quite interesting and uh some of the other use cases at least 6:53 online you know you can think of uh interviewing job training coaching maybe 6:58 you need to practice for your interview maybe you need to practice for a difficult conversation maybe you want to 7:04 learn a language instead of watching videos on how to learn a language you could talk to someone in real time at your 7:10 pace at your cadence and we were uh recently at South 7:16 by Southwest um yeah we were showcasing this to lots of people and they have really good 7:23 barbecue and some updates on Interactive Avatar so uh what we've been working on Interactive Avatar Updates 7:29 the last few months is um a new architecture behind the scenes so it'll be transparent to most of you but 7:37 in terms of reliability and speed this is where uh you'll see those numbers and 7:43 uh in addition we'll uh be releasing uh our real-time API so for those of 7:49 you who don't want to use our conversation stack but you want to bring your own uh ASR 7:55 LLMs we'll just be the video layer you bring whatever you want and so with our 8:00 real-time API uh you can bring your entire conversation stack you got maybe a secret sauce or secret workflows 8:07 agentic workflows you want to use just plug us in give us the agent audio we'll 8:12 give you the video back and that will be uh coming up and our new public avatars 8:19 uh so this has been a feature uh that people have been requesting avatars that have a green screen so that 8:26 you can place the avatar in front of any background you want so that uh you can 8:31 just kind of chroma key uh the background out um some things on the roadmap um we're going to be building 8:38 towards where the developers are so in terms of some of the most popular conversational AI frameworks we have 8:45 Pipecat uh LiveKit's agent framework and we're in talks
with Agora as well too um 8:51 we are looking to integrate with them to be where our uh developers are OpenAI 8:57 Realtime integration so you'll be able to chat with a demo of this in the back later uh if you've talked to your ChatGPT 9:04 voice it's the same one um and we have uh customers from many 9:11 international audiences so um Gladia is a very popular multilingual um uh 9:19 speech-to-text model uh so we'll be integrating with them uh fairly soon 9:26 and it can be uh kind of difficult to record 2 minutes of video for our 9:32 avatars you have to record in a very certain way we want to make that even easier what if you could do that with a 9:39 photo so we're currently in the works of trying to figure out how to make that much easier for uh our people to 9:45 train and we got a bunch of uh revamps to our website uh we're building brand 9:50 new SDKs we know uh they're not the best right now so we're bringing a 9:55 whole new team in to uh build out all our SDKs there so that's it for my 10:00 updates oh and one more thing we are making the interactive avatar cheaper so 10:13 uh now if you want to create a clone of yourself it used to be $89 to be able to 10:18 create that clone now we're dropping it to 10:28 $29 all right um All right 10:39 everybody give it up for Eddie again 10:45 all right bear with us just a moment as we get set up here 10:54 Oh okay all right while Michael gets set up 11:01 I'm just gonna give you a quick introduction so here you go bear with us 11:07 so Michael is a product leader with a decade of experience building designing and getitAI Interactive Avatar personal shopping demonstration with Michael Greenberg (Co-Founder, getitAI) 11:12 shipping digital assistant and avatar products in Silicon Valley um sharpening his teeth at Google at Meta at Samsung 11:19 quite the logos there my friend he is a Webby award-winning designer uh holds 11:24 about half a dozen patents has a successful track record 11:30 leading teams
in building innovative products that use emerging technologies so with degrees from UC Berkeley my 11:36 friend Michael has a very strong connection to the best of California's innovation so really fun fact Michael's 11:42 also published a book as a photographer so a man of many talents so everybody if you could please 11:47 put your hands together for Michael 11:52 all right hello hello hello i'm sorry get that away from me sorry 12:00 hi there we go hi everybody okay give me a minute to finish getting this set up 12:06 and test the audio because we want to be able to hear what our avatars are going to be saying also I like to listen to 12:12 music when I work and this is going to be like a live building demo so I'm going to try and get a little bit of 12:17 music as well if that's even possible all right that's good just background okay 12:25 so there we go hi I'm Michael uh I am a 12:30 co-founder of getitAI i'm gonna first tell you a little bit about what getitAI is what it does what we build and then 12:37 I'm going to live build one of our experiences with hopefully some participation from the audience so it's 12:43 not you just watching me for half an hour talk about something um and at the end we will get to experience what it's 12:49 like using our studio and some of our technology to build with HeyGen and 12:55 create a consumer experience that uses avatars but still feels really personal and engaging so uh at getitAI we turn this 13:04 kind of social curiosity into conversations with avatars uh some of 13:09 those are structured conversations some of those are dynamic conversations we're going to get into that um and we work uh 13:14 currently with direct to consumer brands so brands like True Classic and LYFEfuel which is a SoCal based company 13:21 um Ember for example so um what we do is storytell we are creating these 13:27 immersive video experiences you're getting AI guidance when you're purchasing something with one of
these direct to consumer brands um and you are 13:35 allowing creators to scale themselves into having an infinite number of touch 13:41 points with their consumers that they would otherwise be more constrained to uh on Instagram or with live shopping so 13:49 super easy integrations that's one of the key points uh HeyGen our consumers basically get that for free with using 13:55 our platform um we pay the bills don't worry not the consumers um all right so setting up 14:03 with our platform it's four steps um first is that you connect knowledge uh we're going to skip that part of the 14:08 demo because that part takes a little bit of time but I'll walk you through it then we personify the brand that's the creation of the avatar we connected with 14:15 HeyGen uh we create the avatar with one of our customers and then we uh connect via API their knowledge 14:23 with the avatar into our platform we then generate experiences that's what we're going to focus the most on today and then last is that you deploy it all 14:30 right so let's I don't like this song cool all 14:39 right so what we have here is our studio and it's really small it's okay we're 14:44 going to zoom in in a minute so the studio is really the core of what getitAI is um it is a dialogue building platform 14:53 that allows you to create these static and dynamic conversations so um first 14:59 we're going to pretend like we're setting this up from scratch first we're going to come into the agent settings we're going to give a 15:04 description of what this uh agent is meant to be like we're going to give it a name we're going to describe the tone 15:10 and the biography and this is where we get to connect the HeyGen avatar so here is 15:18 the avatar uh this is Ryan he's the founder of True Classic who show of hands knows what True Classic is okay 15:24 handful handful of hands they're men's basic wear massively successful hyper-growth company um so they
make 15:31 primarily men's t-shirts very basic um but really comfortable shirts so um I 15:36 went in the studio at their headquarters and filmed with Ryan and created an avatar so this is him we've got his 15:43 streaming avatar ID we've got a uh static avatar ID that will become relevant in a minute um we're using 15:50 OpenAI when there is no video available for the voice and um a couple other 15:56 things in the setup so magic handwave Hollywood magic setup happened great now we're going to create some flows so 16:04 let's start with uh doing something like and I have some stuff on the bottom 16:09 there just to help with my creativity so here in the studio we're starting 16:15 with our begin-the-conversation so the user might say something like hi hello 16:21 reset start from the beginning right and what happens after they say one of those things we're going to create a greeting 16:27 hi there I'm Ryan can we see that okay yeah hi there i'm Ryan uh founder of 16:33 True Classic want to check out some 16:38 shirts okay so this is what Ryan is going to say with the avatar that has 16:44 been created we can test it and there'll be a little bit of latency as it loads so we're going to test that and 16:49 he's going to say the thing that we just typed out oh that part's loud okay give it a minute might pause 16:56 the music so what's happening right now is it's creating a video in the background in HeyGen um hi there i'm Ryan founder of 17:04 True Classic want to check out some shirts so we typed it it triggered the 17:09 event the video has been generated it now exists in our studio so we can play this and it should be much faster every 17:16 time going forward sometimes it takes a couple um events to trigger 17:21 i'm Ryan founder of True Classic right want to check out some shirts so that's what we typed that's what he said let's 17:27 make it a little bit more interesting let's add some options and some pictures so what we're going to do
and I have it 17:33 again down here just to save time we're going to add classic crews and a gift 17:38 finder flow so let's do this classic crew tees will take us to this node 17:45 which we built before um where he describes the t-shirt this is static 17:51 content right we've decided what we want the avatar to say it goes through the flow it is a story that we 17:57 are presenting to the user in a very structured way what we know is that 80 to 90% of people who are coming to these 18:03 kinds of online experiences are doing the same thing as everybody else that's done it there's 10 to 20% that are 18:10 asking questions that we've never seen before that's where the interactive avatars come in and we're going to show that in a minute um so we now have two 18:17 options um we have an image associated with it it's going to share it onto the screen uh it's going to fit the content 18:23 to the screen size and uh those are both going to be buttons that we can click that'll take us into the next parts of the 18:29 flow so he'll say his piece in a moment hi there i'm Ryan founder of True 18:35 Classic want to check out some shirts and I'm going to click on classic crew tees so this will be very fast 18:42 theoretically pretty fast right 18:47 so I'm going to mute him for a second so that was really fast because it was a cached video it existed on the server we 18:53 had generated it beforehand but it's the same avatar ID as if we were to just ask Ryan a question that he's never seen 19:00 before and it's going to dynamically generate a response so um I'm going to click through a couple times he's going 19:06 to you know I want a classic t-shirt that's maybe an active fit uh he's going to look it up and get back to us he 19:12 pulled in some results from Shopify we're skipping a lot here i'm just going to get us to the point where we can start asking him questions right um so 19:17 he's pitching us right now he's like "Which one's catching your eye?"
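The cached-versus-generated behavior Michael describes — a scripted line plays instantly once it has been rendered, while an unseen line falls back to the slow generation path — can be sketched as below. This is a hypothetical illustration, not getitAI's or HeyGen's actual implementation; the class and function names are invented, and the cache is keyed on the avatar ID plus the exact script text.

```python
# Hypothetical sketch: cache rendered avatar clips keyed by
# (avatar_id, script_text). A repeat visit to a scripted node is a
# cache hit and plays instantly; new text takes the slow render path.
import hashlib

class AvatarVideoCache:
    def __init__(self):
        self._store = {}  # cache key -> rendered video reference

    def _key(self, avatar_id, text):
        return hashlib.sha256(f"{avatar_id}:{text}".encode()).hexdigest()

    def get_or_render(self, avatar_id, text, render_fn):
        """Return (video, was_cached). render_fn is only called on a miss."""
        key = self._key(avatar_id, text)
        if key in self._store:
            return self._store[key], True    # cache hit: instant playback
        video = render_fn(avatar_id, text)   # slow path: generate the clip
        self._store[key] = video
        return video, False
```

This also mirrors what Michael notes about the first trigger being slow and "much faster every time going forward": the expensive render happens once per unique line.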
Let's say it's the active crew 19:23 neck all right so now this is going to be a streaming response based on all the 19:28 inputs that I gave it it's going to now trigger a streaming event create an interactive avatar session and start 19:35 pitching me this product based on how quickly I clicked through the options based on if it thinks that I've been 19:40 here before if I've seen this product before it's going to take into account a whole bunch of stuff and generate a 19:46 pitch that it delivers but done so via streaming uh if it wants to do that and 19:52 sometimes it takes a while um let's maybe return to a different part of the conversation let's start over again oh 19:58 oh oh oh here's what we're going to do uh we're going to do a slightly more stable version that 20:07 was dev this is prod okay this is our classic tee too loud soft 20:12 comfy and breathable for all day wear the tailor continue classic active let 20:18 me look that up and get back to you all right here's our collection of active 20:23 there we go that's a great choice so the "that's a great choice" is a filler the active crew neck tee is your go-to for active 20:29 days offering moisture wicking odor control and a quick dry finish to keep you comfortable made from a polyester 20:35 spandex blend it provides a contemporary athletic fit with UPF 30 protection ready to add this versatile tee to your cart 20:42 right so he's pitching based on all the signals that he can gather he being the avatar um he's trying to speak to the 20:49 specific points that we're interested in that's all prompting that's part of the engine that's part of the models that we've developed so and those are 20:56 all things that maybe he's said before based on things that have happened in past um conversations so let's ask 21:02 him some other question i don't know uh I'll think of the first one and then I'll ask you guys to come up with some what about like what uh do you 21:10 suggest for uh like cold weather so
this theoretically should be 21:17 triggering a streaming event it's querying cold weather i'd recommend our active joggers for their premium comfort 21:24 fit and moisture wicking so this is perfect for staying warm and active this is an interactive 21:30 avatar response but you as the consumer wouldn't know the difference between a pre-rendered 21:35 response and a streaming response because as I said 80 to 90% of responses fall into the head of the conversation 21:42 we've seen this before we know the kinds of questions people ask they're clicking on buttons that we're providing them because that's what people like to do so 21:48 uh let's ask him a question and this is a test 21:54 environment maybe one that we haven't seen before somebody give me a question that I can ask him what's the data 22:02 more left field than that what's it made out of is a fair question thread count sure okay let's see what's the thread count 22:09 let's see what he comes up with so he's going to look it up 22:14 The active joggers don't have a specific thread count like sheets do but they're made from a premium blend of 90% 22:20 polyester and 10% spandex offering a soft stretchy feel this combination ensures durability and comfort not a bad 22:27 response for thread count basically him saying "We don't do that."
but still making it feel like a really respectful 22:33 engaging response what about another question anybody does it 22:40 ship to Australia ship to Australia let me look that up and get back to you i'm going to explain those responses in a second yes 22:46 we do ship to Australia we want to make sure everyone can enjoy our comfortable and stylish clothing no matter where 22:52 they are is there a specific product you're interested in having shipped right so what you're noticing is there's 22:58 something that happens right before he responds which is let me look that up or hold on one second we call those filler sentences in the event that 23:04 it triggers a streaming response there might be some latency you have to create the WebRTC connection you have to 23:11 generate the response send the response begin the stream session render it on the device right there's a lot that has to happen 23:17 and what we just heard from Eddie is it's going to get a lot wherever Eddie is it's going to get a lot faster we've been working with HeyGen for long enough 23:22 that it wasn't fast before the latency was really long so we have these filler sentences just like we do in human to human 23:28 conversation if we're having a conversation you ask me a question I say "Let me think about that." Right it 23:34 feels normal you buy a few seconds of grace from the interlocutor so we have those as well 23:41 when it hits a streaming instance it will add a filler sentence that's relevant to what the user asked so if 23:46 they asked "Do you have x product?" It'll say "Let me look it up." If you say like "Oh I have 23:52 a random question." It'll say "Okay one sec what's your question?"
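The filler-sentence pattern Michael explains — immediately speaking a short, topic-relevant stall line while the slower streaming response is prepared — can be sketched roughly as follows. This is a hypothetical illustration of the idea, not getitAI's code; the intent labels, keyword heuristics, and phrases are invented for the example.

```python
# Hypothetical sketch: pick a stall phrase that matches the rough topic
# of the user's question, to be spoken while the WebRTC session and LLM
# response are being set up. A real system would classify intent properly;
# keyword matching keeps the sketch self-contained.
FILLERS = {
    "product_lookup": "Let me look that up and get back to you.",
    "shipping":       "One sec, let me check on that.",
    "default":        "Okay, one sec. What's your question?",
}

def choose_filler(user_utterance: str) -> str:
    """Return an intent-relevant filler sentence for a streaming response."""
    text = user_utterance.lower()
    if any(w in text for w in ("do you have", "made of", "thread count")):
        return FILLERS["product_lookup"]
    if any(w in text for w in ("ship", "deliver")):
        return FILLERS["shipping"]
    return FILLERS["default"]
```

The filler itself can be a pre-rendered cached clip, so it plays with no delay while the dynamic answer is generated behind it.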
So what about another question that has nothing to do with True Classic oh why don't you make 24:00 silk pants sure 24:06 how about silk underpants yeah whatever i have no idea what he's going to say to this no clue let me look that up and get 24:13 back to you right let's see silk underpants while luxurious might not 24:19 align with our focus on creating durable everyday essentials that are both comfortable and affordable we prioritize 24:25 materials that offer stretch breathability and ease of care which are key so who's like thinking about buying 24:31 a True Classic shirt after this demo right so um this is what it's like to interact with the avatar um 24:38 using the studio that we've built so let's add some more nodes and we can try it out we can see 24:44 what it's like um these are all structured dialogues you can see 24:49 what he says is right here then we get to a point in the conversation where it's powered by an LLM this is the 24:58 handoff to the streaming the interactive avatars we get through the part of the flow that we know you know the user's 25:04 making selections and then they get here search for a crew neck based on the search results the preferences 25:10 that the user has given the lifestyle fit show some products just say "All right here's our lineup."
And then pitch 25:16 the product which do you want to take a closer look at so it really does start talking through the product um searching 25:21 through the products um we can even go through a gift-finder flow where we ask 25:26 questions about what the user might want to buy for somebody with a ton of logic like if they've said short sleeve then 25:32 move them in this direction of the conversation if they said long sleeve move them in that direction of the conversation so the studio is really the 25:38 brain and HeyGen is our face the studio is allowing HeyGen to send us results 25:44 and videos that feel like you're having a real natural conversation while still giving control to the creative behind 25:51 the scenes this isn't a fully streaming say-whatever-you-want avatar um it'll ad-lib 25:57 the analogy that I like to give is it's like with comedy there's different types of comedy there's stand-up comedy 26:03 which is a little bit more scripted they wrote the punchline before they got on stage and then there's improv so 26:09 streaming avatars is improv but there's a really important part of the conversation that happens with the 26:14 scripted storytelling so our platform kind of blends those two and then uses it to sell products um so we 26:22 can create some more nodes here let's do new drop right 26:27 so let's say that there's a new product um I had started working on this earlier today so if the user says let's see um 26:36 what's new at True Classic um 26:42 or description oh yeah yeah yeah uh new product from 26:47 TC we'll add examples what's new at True Classic what is your new drop and then 26:54 we'll generate a bunch of other examples which are used as what we call golden utterances to train the model to trigger 27:02 based on intents that are similar to that "has True Classic released any new items" "tell me about new products" right so this intent 27:08 now exists the user can say
something at any part of the conversation whether you're talking about shipping policy returns product browsing if you just ask 27:15 at any point do you have any new products this intent will trigger based on the intent the golden utterances that 27:21 we just trained the model with um it'll then say "So glad you're interested in our new drops highly anticipated True Classic." Blah blah blah i don't know 27:27 True Classic what's a product that we should have them as a new drop anybody give me something silk 27:33 underwear silk underwear the True Classic silk undergarments um they're soft is silk 27:41 breathable no soft durable but not so breathable um 27:49 so uh let's pretend like they have that um and we're going to choose a product 27:55 like these are all of True Classic's products um we're connected to their Shopify in the 28:01 back end so all this is populated from Shopify um let's say that it's the I don't know short sleeve polo just 28:08 suspend disbelief that that's our silk underpants um now what we can do is I'm 28:13 going to get rid of this node and say here at this button which 28:18 is going to precede the conversation uh tell me more and connect it to this LLM over 28:27 here and we're going to tell the LLM uh take the product the user has been shown and 28:38 pitch it to them do it in just a few sentences um and pretend like True 28:47 Classic actually offers it okay and in here we're going to make 28:56 sure that it's sharing screen it's going to continue to share as it moves on to the next node so a lot of this is configurable um I'll show you in a 29:02 minute how it can be done automatically um and like this i'm going to refresh so 29:09 that we can see all right so there's our pretend 29:14 silk undergarments we're going to play this node which I think should automatically go to the next one 29:20 yep so again it's a structured dialogue but it's the first time that it's ever been
said so it's triggering a VidGen 29:26 event in the background but then also initiating a streaming event so that we can see it right now so you guys 29:32 are getting uh double drops the highly anticipated True Classic silk 29:37 undergarments they're soft durable but not so breathable okay so tell me more let me look that up 29:44 and get back to you streaming event now activating the LLM response looking for the ultimate wardrobe upgrade check out 29:50 our staple crew bottom n pack okay so everything you need to mix and match for any occasion you refused to talk about 29:56 the silk tees and chinos that fit just right all 30:01 this for $199.99 making it a steal for anyone wanting to elevate their everyday style 30:06 without breaking the bank so if this was an actual product then it would be shown on the screen here he would be talking you through it he would be you know 30:12 talking to specific product details you could ask questions about it but I'm demonstrating how as new 30:18 information becomes available in the back end in the knowledge it's a new product it's I don't know it's a 30:24 new feature uh it's really easy using this kind of dialogue building to speak to the points that you're trying 30:30 to sell if there are any sales people you know it's like you've got your points that you got to stick to that you know work really well while also 30:36 allowing for some dynamic content to come in via LLM um so I'm going to put a 30:41 pin in that and I'm going to talk about what happens maybe sometimes at the higher end of the funnel so if we 30:48 think about sales we think about Instagram and the use of avatars the potential use of avatars is for ads 30:55 you're scrolling Instagram and you see an ad for a product or you see an ad for a service or something like that um I 31:01 know that HeyGen's been exploring some of these use cases we've thought about that as well so what we have here um rather 31:07 than me
building it from scratch, I'm going to connect some of these nodes that I made earlier. Let's assume you 31:13 were on Instagram and you saw... let's see, where is it... here. I think 31:21 this is an actual ad from True Classic. Let's see if it'll play. "Oh, it's going 31:27 to be fine, babe." "If we go in there, you're just going to buy a bunch of makeup and it's just going to be boring." "No, no, you've 31:33 got to trust me. Go on, for today's all about you. We're dressing for you. We're going to find you some basic everyday..." 31:40 Okay, so this is an ad on Instagram, an actual bona fide True Classic ad that either they sent us or we 31:46 scraped, I'm not sure; it doesn't matter. There's always a CTA on Instagram, like "click here to 31:53 learn more." So you click on this ad, and you would then be taken to this experience. 32:05 Okay, I thought it was preloaded. There we go. "Over 20,000 positive reviews can't be wrong. Plus, if it's not the best tee 32:12 you've ever worn, you've got 30 days to return it without any hassle, guaranteed." 32:18 And now we're pitching, in our platform. And if I was on a phone 32:24 you'd see I'm swiping, but you swipe through these short-form videos that are actual UGC from True 32:32 Classic, content they have in reviews on their website, on their page. They should 32:38 be automatically playing; in this dev environment it might not be working perfectly. So it's 32:44 this social ad funnel that allows you, in the platform, using HeyGen 32:49 avatars as kind of the bait or the hook, to come onto the platform, get some ads 32:55 and social content, short-form video content, and then get taken to a part of the flow that's like, okay, great, 33:02 you've seen some of these videos, now let's talk about the product. It starts pitching you the product, 33:09 and then you end up in a conversation with the avatar. So the platform is 33:14 really flexible and allows you to configure
it and modify it in an essentially unlimited number of 33:19 ways. And yeah, as a 33:24 company we're really exploring the different ways that you can think creatively, using 33:29 LLMs and avatars, to help innovate ad funnels, to create really engaging 33:35 content on the website, to have the brands own their own traffic rather than punting it out to some other website. 33:40 Don't buy it on Amazon; buy it on True Classic, it's a fun way to buy it on True Classic. Don't buy it from an 33:46 aggregator; buy it directly from the brand itself. You can work with some fun 33:51 technology in order to create those experiences. So that is an overview. If 33:58 there are any specific questions about Studio... I mean, there's a ton of functionality in Studio that I've 34:03 not even begun to scratch the surface of: creating an intent, or adding all of 34:09 these different elements. You can add text, images, create AI responses, embeds if you have HTML, 34:15 you can listen to what the user wants to say, you can add image buttons, product selections. So this 34:22 allows a ton of functionality, all of which is facilitated by the fact that you're having a conversation 34:28 with an avatar. So yeah, questions? Yes. "So Studio, is this for brands to actually 34:36 manage themselves, or is it..." Yep, yep, we work in partnership with them. 34:43 Yeah, there's a lot of functionality in here, and it's funny: we saw earlier the HeyGen studio, 34:51 the creator... what was it called? The studio? Yeah, the editor. We don't need to use that, because we 35:00 kind of backdoor in via the API to trigger these video generation events. So if you go 35:08 into our account on HeyGen, you'll see all of the different videos that we've generated, and you can also go in and manipulate those, which we then 35:14 pull from the server. As for our Studio, we work in partnership with the brands; we 35:19 give them
access, we show them how to adjust things, but we work with them to set it up originally. And this 35:24 reminds me, that takes me here: we do have a way 35:29 to create flows based on prompting. This is a video; let's see, 35:37 I'm going to show you from the very beginning. So we're here in Studio, we 35:42 add an intent, we give it a name, we describe it, we give it a couple of examples, kind of like I did earlier, and then with 35:49 time, this is mostly asynchronous, it will automatically 35:54 create the flow based on what you've defined. The nodes just pop up. 36:00 This is what we're moving towards for most customers: for most folks, you want to be able to say, this is what I 36:06 want, I want it to pitch my t-shirt for me, and here are some product specifications, and then the system will automatically, 36:12 dynamically create these nodes for you to refine, rather than doing it manually. This is a little bit harder; you kind of 36:18 relinquish some control, but the brands that we work with have had 36:24 enough time using Studio that they can go in here, this is kind of the starting point for them, and then they fine-tune and manipulate. 36:31 Yes, this is next. So the question was whether it can take 36:38 a voice prompt or a text prompt, and the answer is yes, you can. You see here in the UI that we've built, you can 36:45 type a message or you can record a message, which, I don't know if it's listening to me now, is going 36:51 to be very confused if I stop talking. Let's see. 36:59 Yeah, so yes, you can use both; it's multimodal. "Okay, I have one more question, is that okay? 37:06 All right, so when I'm scrolling the videos, I can probably go in and 37:11 watch this video, but do I also have the option of watching a video and getting just the text, or having just 37:17 voice inputs and outputs?" There you go. All right, yep, you're not obligated to use 37:24 the video. It's our
perspective that video is the most engaging form of content, so that's what we lead with, but 37:31 there is an option to have a text-only interface, a more traditional chatbot experience, which we're not as big fans 37:36 of, as HeyGen I'm sure is very happy to hear. "Yeah. Mike, when you're working 37:43 with these brands, how are they responding or feeling about leaving it 37:49 all up to an LLM to respond, versus having a defined talk track and 37:54 story?" Yeah, there's a relief that they don't have to think of every single 38:00 thing, every single objection the user might have. And what we have here... actually, I can't show it to you on this 38:05 one. I'm going to unplug this for one second and then go somewhere to answer your 38:11 question... here, and here, because I don't want to 38:17 show too much of the other stuff. Okay, wait for it... there we go, I think. 38:26 Yeah, there we go. Okay, so this is a live customer. This is the 38:32 supervisor view, where they can look into conversations that people are having, look at actual real-life 38:38 transcripts, and see places where the agent failed to answer a question, which we capture and save here for them 38:44 to review later, so they can update the knowledge base with the kinds of questions people are asking that the 38:50 agent didn't feel confident enough to answer. So there's a relief component: 38:55 we don't have to think about every single thing. We give you the 80% of the content that we know people 39:01 ask us. And if you talk to brand owners, they know their customers really well. If you ask them what the top 10 39:06 objections are, or what the five most popular products are, they have those answers off the top of their head. Those 39:12 are the things that are hardcoded into the system, and the rest is left up to the LLM, which then, with the feedback mechanism that we have here in the 39:18 supervisor view, you can see that
there's a... let's see: "Response 39:27 excellently describes the Imperial fragrance with detailed notes and includes a call to action, effectively addressing the user." Okay, so that was a 39:32 good one; yeah, that was a highlight. "Needs action: failed to address the user's likely intent to 39:37 continue shopping, providing a generic and irrelevant reply. Agent should clarify the user intent and further 39:42 assist." Suggestions, right? So this is where the brand owner goes, looks, and adds information to the knowledge base, which 39:48 is a super easy thing to do, and then the agent has no issue with that going forward. Was there a question on this 39:54 side? No? Yes, another one? I love it, ask away. [audience Q&A] 40:00 "All right, first question: is it integrated with GHL, for example, like if 40:05 I want to set up meetings and everything downstream?" Yes, we have, I think, a Calendly 40:12 and a Reclaim meeting link that you can use to schedule a meeting with a real 40:17 person. We don't yet have meetings with the avatars. And you can 40:22 do custom integrations really easily: with a custom embed you can put whatever code you want 40:28 into a field in one of the nodes, and it'll render that based on however you've created it. It's actually 40:34 quite clever. "Okay, second part of the question." That's awesome, everything 40:39 is positive so far. "Come to this question: say 40:45 I have a conversation for, like, 20 minutes with the avatar. Can it summarize 40:54 the whole thing and give me advice on the conversation that we had? For example, in education it might be very 41:00 useful, right? As a teacher, you're having a conversation with a student." 41:05 Yeah, that's totally possible. We don't right now have conversation summaries, conversation by conversation, 41:12 but you can export it and then just have an LLM summarize it for you. But yes,
these are the kinds of 41:18 insights that brands are looking for: the conversations that people are having with their brand. They want to know the 41:25 takeaways, what went well, what didn't go well, did it convert a sale. The real purpose of the agent is to 41:31 increase conversion rates and increase average order values. It's not so much there to explore all of the 41:37 products; that's what the website's for, and we're not trying to get rid of the website. We're just trying to make it easier for people to actually have their 41:43 questions answered and convert as a purchasing user on the website. 41:50 "Really quick, I just want to remind you guys, you are also welcome to ask questions of the HeyGen team too. Wayne 41:55 and Eddie are here for Q&A. I feel like there's great Q&A happening here, so I'm 42:02 going to let you rock, but as a reminder, folks, HeyGen questions are also welcome. And I know you had one." Awesome. 42:09 "In regards to the LLM, is there a backup, just in case one goes down?" 42:16 Yeah, so we use OpenRouter as our LLM infrastructure, so there are, I guess, 42:22 theoretically a bajillion... hold on, I'm going to unplug this for one second again. 42:29 There are a lot of LLMs that we have access to that we can use, and the way 42:37 the system is built is a very traditional framework for 42:42 conversational AI: you have the intents that you've established, things that you know you want answered in a very 42:49 specific way, but then you have a fallback intent for something that we never thought somebody was 42:54 going to ask. "Ryan, do you believe in God?" or "What's your position on 43:00 mustard on hot dogs?" Something you never would have expected. So there's a fallback intent that queries all of 43:06 the knowledge available. It takes longer, and it uses a different LLM, because we don't need the speed of OpenAI 43:13 or the
accuracy of OpenAI; we can use something a little bit cheaper that has a larger context window, 43:18 which allows us to answer that question more effectively. Yeah. 43:31 "So, can you train these avatars with personalities? If you want 43:37 to make something more fun and entertaining, can I train those avatars with a lot of 43:44 nuances in the face and the expressions? Sometimes you might be 43:49 angry a little bit, but it's all for the fun part." So there are two ways to answer that question. There's the 43:54 HeyGen answer, which I'll let the HeyGen folks give, around what the face can look like and the expressions 44:00 it can provide, and then there's the personality, what's behind the face: the voice, the mind, the brain. That's 44:06 us. So in combination you have a fully formed persona. On our side, yeah, 44:12 absolutely. I often will tell the system: speak with speech disfluencies, say "um" and "uh." Ryan, 44:20 this guy from True Classic, he's got a very spunky, silly, funny attitude; it's clear in the brand itself. So we 44:26 tell it, respond as Ryan would respond, and this is a description of Ryan, and the system does that quite well. 44:32 We're not pretending that it's a person; it's very clear, it 44:38 says "AI Ryan" on screen, it's not trying to hide that it's an AI. But 44:45 it's also not trying to be inhuman either. 44:50 After you talk to it for a few minutes, you kind of forget that it's an AI, and we like that. But every time 44:58 you hit that uncanny valley moment where it says something really robotic, you're brought back to the realization: oh, this is actually an AI. So we try to 45:04 minimize that as much as possible. "How long a process is it going to train the AI that's going to be like you, 45:12 or for HeyGen, how long does it take to train?" Okay, 45:19 I'll go. No, I can just stand up here. 45:26 Great,
great question. Yes, so very quickly, on the second question: training an interactive avatar 45:33 usually takes close to 24 hours, and we're going to make that much faster. 45:39 For the expression of the avatar: as Michael mentioned, 45:46 the knowledge, the words, whatever it says, is actually up to our 45:51 customers, in this case Michael. They have their own LLM behind it to 45:59 tell the avatar what to say, based on the specific brand knowledge and the 46:04 specific customer interaction. So that part of the personality is totally up to you guys to 46:11 customize. The look of the avatar, in terms of expression and emotion, 46:17 actually comes from the footage. So if you want to create a very happy avatar, you 46:24 definitely want to smile more in the footage you submit to make your avatar; if you want it to be angrier 46:31 or more serious, you do that too. We are working on the next-generation 46:36 generative tech for avatars, which is now available in our video avatars, which are different from 46:42 interactive avatars. If you check out the core HeyGen product, we have this expressive video avatar where the 46:50 facial expressions have a much wider, more expressive range and can 46:56 potentially be based on the semantic meaning of the script, and similar 47:01 technology will be available for interactive avatars soon, maybe later this year, in the longer term. That's 47:08 where you'll have an even more diverse result for your avatar, combined with a product like this, to give your end 47:15 user a very, very engaging, very proper experience. Hope that answers your 47:22 question. 47:27 [inaudible] 47:33 Jack, Jack... I'm going to go back here because I haven't answered a question from my friend Julian yet. Here you go. "Hi, thank 47:41 you so much, this is really wonderful. So, talking about these mistakes human beings make, the 'uh's and 'ah's, like,
would you put 47:48 that into ElevenLabs and then API it into HeyGen, or how would you actually make 47:54 that happen?" Okay, yeah, sure. So if we take a March 48:03 2025 architecture for how the different GenAI technologies fit together: HeyGen 48:09 creates the face, then there's an LLM that creates the script, and there's some voice in the middle. Maybe HeyGen 48:15 decides to make their own voices in the future, maybe the LLM comes stock, whatever it is. So if we assume these three GenAI 48:22 technologies, you can do it in either the first or the second one. You can do it at the LLM level, so you say, create 48:29 me something that I can send to ElevenLabs that has speech disfluencies, that's what we would call it in the 48:35 world of linguistics, things like "um" and "uh," things like mispronouncing things on purpose, because it sounds more 48:41 human when you do that. You can do that at the LLM level, or theoretically ElevenLabs could also make that an option 48:48 and say, you know, make this sound extra human by taking longer pauses, 48:54 stuttering, and using vocal fry at the ends of sentences. But I don't think 48:59 it's currently handled at the HeyGen level; it'd be via API from ElevenLabs or OpenAI's voice API. 49:07 "Okay, and then I have a follow-up question, please. Thank you, that's really helpful." Sorry, okay, yes. So 49:13 actually, to that question: we aim to make our framework very flexible for you 49:20 to plug in whatever level of in-house system you've already produced. Maybe Eddie 49:25 can expand on what kinds of options will be available for the voice. 49:31 Yeah, we have three main levels. One is you can 49:37 use our out-of-the-box stack; in this case we'll be using ElevenLabs, and in your 49:42 system prompt, if you ask it to add those "um"s and disfluencies, it'll add those and make the conversation more natural. 49:50 It really is at the LLM
level that you could do most of that; it's the easiest way. And then some of the more 49:56 advanced models that are coming out will add those naturally for you. I'm not sure off the top of my head, but some TTS 50:03 providers will add those for you at that level too. And then for us, we 50:08 don't care what you use; just bring whatever conversation stack you want. 50:13 Have you guys seen Sesame, the research on that? It looks crazy. We're keeping our eye on that one. So 50:21 yeah. "Question: do you have any interest in going 50:27 into filmmaking, like what Curious Refuge is doing? I mean filmmaking as in, 50:34 you know, a Hollywood movie, telling a story, a short film. Is that 50:39 anything you're interested in?" Let me make sure I understand: your question is, is HeyGen interested 50:45 in going into filmmaking? "Yes, content, like you would prompt a story and then, you 50:50 know, maybe the background, like text-to-video in a sense, or text to a 50:55 whole story." Okay, yeah. Well, we do plan to help users create 51:01 better B-roll content in their videos. HeyGen is here to help all kinds of businesses 51:07 make videos for their businesses. It's not just for entertainment or non-business 51:14 film or visual storytelling, but when businesses want to convey their ideas 51:20 visually, it's not only a talking avatar. The avatar is a critical part, but 51:26 sometimes they also have some B-roll content, like maybe a factory tour 51:31 or some product info, or maybe just something general, like a cool 51:37 model walking down, I don't know, a street. So we do have plans to help people 51:45 generate that B-roll content better, based on the goal and intent of 51:51 their avatar script, or the goal of the video, for their purpose, 51:57 for their case. Yes. Yeah, the distinction between A-roll and B-roll is really important. In
52:05 our example, the A-roll is the avatar talking to you, selling you a product; that's the core experience. The 52:11 B-roll, as Wayne was just talking about, is this ad funnel, for example: you get an ad on Instagram, which can be its 52:18 own creative, and then on our platform the very first thing you see is B-roll footage, because it's engaging, it's fun, 52:23 it's short-form, it's real, it's authentic. And then after you get hooked by the B-roll, which 52:30 eventually, I have no doubt, will be created by HeyGen, you move into the A-roll, where you're actually 52:35 engaging with the avatar directly. "Hi, I have a quick question. I've 52:42 been using HeyGen for almost two years, maybe a little bit more, and one of the main things 52:49 I'm using it for is content creation, literally mimicking myself. But I always have a problem with 52:56 my accent, because it's not a regular English accent, so I always use ElevenLabs... 53:02 I tried to use ElevenLabs, to be honest it was complicated, so I just record myself. But now we're 53:10 building an academy, and we have artists with millions of 53:15 followers and everything. We want to do their avatars in our academy, and we want to do it in batch, in big chunks of 53:23 videos. Is that something that you offer? Can we do it with multiple people at the same time? Is that 53:30 possible?" What's the last part of the question? 53:35 "Multiple people..." Sorry, what's the last part of your question? "The question is, we want to mimic 53:43 a bunch of people. Is it doable with the platform itself? Do I have to connect it 53:48 to ElevenLabs for different accents? Because they have different accents from all over the world, and 53:53 the question is, do I need to record each and every one of them, like I do for myself 53:59 when I record my videos, or is there some way
to mimic their accents exactly? 54:05 They're from different countries. Do you have an accent 54:12 tool or something, or do I have to use an outside source, or do I have to record them? I don't know." Got it, got it. Yeah, so to make, say, 54:19 avatars or voices for a group of people, you do need to record, to submit 54:25 their footage separately, either audio or video. We have our own voice 54:30 technology, like you've experienced with your avatars, and we also support you using 54:35 any other voice provider. If you can get a more accurate representation 54:41 of a specific voice you want, we support you importing that into HeyGen to make videos. 54:46 And this particular accent element in voice cloning is actually a common issue 54:54 we're seeing from users in the market. The AI can 54:59 learn someone's voice and speak in whatever language pretty easily, but usually, when some people have 55:05 a very unique accent in a certain language, the AI actually needs to 55:12 be better at picking that up. I think many people are working on that, and it will get better pretty soon. And also, 55:19 it's kind of interesting: we were basically breaking the 55:24 product at SXSW by asking the avatar to speak in a Texas accent, which it did, and it 55:31 surprised everyone. And we happened to have a Japanese visitor there, so I was like, okay, how about you speak Japanese 55:37 back to this customer, but in a Texas accent? It was like, "Yeah, sure."
55:43 And then it stayed quiet, because it doesn't make sense, right? There's no Texas accent in Japanese. Similarly, 55:52 everyone has their own unique accent, and sometimes people intend 55:58 their avatar to be multilingual, which is a different scenario. There's a part of the problem space 56:07 where accent becomes a very interesting topic for the technology. I think part of 56:13 that is still being figured out; I have strong confidence it'll be solved much better. As of right now, on one hand, 56:20 yes, we do need to record everyone to make each of them a good avatar and a good voice. "Yeah, I knew we were 56:26 troublemakers." No, no, no, it's great. It's great feedback, a great statement of demand 56:31 for the whole industry, to know what it should be working towards, and I 56:36 believe the accent issue will be solved much better very soon. Do we support professional voice? That's an option 56:44 too, maybe. Yeah. "So that's an option. In the future... like ElevenLabs, 56:50 they have their instant voice clone and then they have their professional voice. We tried it, you know; we use Arabic and Hebrew 56:58 and some other languages that have all these sounds that are, you know, hard to 57:03 grasp, for ElevenLabs and also for HeyGen. So we were looking for ways we could do it with all the 57:09 speakers that we have." Yeah, reach out; I think there's cool stuff we can do on 57:16 the side channels. "Yeah, sounds good." Awesome. Do we have any other questions? Okay, I'm going to go here and 57:23 then I'm going to go here; I'm trying to give folks who haven't asked a question yet a chance to do so. Here you go. "This 57:29 is not my question, but a quick follow-up: we've been working with HeyGen also for about two years, and we have an 57:36 Australian accent, and an Australian accent is really difficult, especially for Americans, because they don't quite 57:44 hear it when they hear it. But the guys at HeyGen
have been amazing, and we've 57:49 done a professional voice clone, and now when our avatar speaks in 57:58 English he sounds Australian; when he speaks in Chinese, Japanese, Korean, or 58:04 Hindi, which we do for our international markets, he still sounds like himself, but 58:09 he's speaking the language. So we had the option of him sounding like he was Indian, but it didn't feel authentic, 58:16 because when you're an Australian speaking Hindi, you don't sound like a Hindi speaker speaking Hindi. So we've worked with you guys, and that's perfect. 58:23 But my question actually was for you: in the back end of your Studio, let's say 58:28 for Ryan, he's setting up a campaign, maybe a six-week campaign, he's got product drops. How much time 58:36 is he going to spend in the Studio doing those setups? Does he do it once and 58:42 then the LLM picks it up and takes it from there, or does he have to do, say, six different scenarios 58:48 at the beginning for it to then carry it off?" It's a little bit of both, right? So there's a new campaign; 58:55 let's think about it full end-to-end. There's going to be a social B-roll component of that. Most of those 59:01 videos already exist, so it's 15 minutes to pull them into Studio, link 59:06 them together, and allow the user to scroll through them until they're done scrolling and hit continue. So 15 minutes for that 59:12 part. Setting up the flow is maybe two hours for the rest of that 59:17 conversation: thinking about what people might ask, or what the specific talking points are; they've already thought about this campaign, to be sure, at the 59:24 brand. So it's an hour or two creating those talking points, giving them to the 59:29 avatar, and then running through it and experiencing it, saying, yeah, if this was a salesperson, or if this was an ad 59:34 creative that we were putting together, what feedback would I give to the system? Tweak it, let it
run through it; tweak it, let it run through it. 59:40 And then once it's live, it'll self-heal a bit. It'll evaluate where there's drop-off, 59:46 where it's converting very well, and bias towards that behavior, versus 59:52 "well, this is my script, so I must stick to it." If it's not converting, it'll either give feedback to the 59:58 console and say, you know, we're not converting anything, or, if a version of that exists in a 1:00:04 different part of the flow that is working really well, it'll promote that part of the flow instead. "So that's 1:00:09 my back-end question. I have a front-end question, and I'm really interested, because we're now looking at implementing Algolia. In your system, 1:00:18 if I'm a customer and I come in and I'm there for five or ten minutes, I go away, and the next day I come back, do you remember 1:00:24 me, or do I still need Algolia?" Yeah, we fingerprint, slash, cookie for seven 1:00:30 days, so yeah, it'll remember you when you come back. It'll bring you right back to the conversation, the part of the conversation you were in. Sometimes 1:00:37 it's even too much, frankly. It's like, "Hey, so do you want to keep looking at that shirt?" and you're like, "Whoa, 1:00:43 that was five days ago; it was quite a while."
But yeah, I'll be honest, 1:00:50 striking that balance, how you greet somebody back without it being too much, is not something we've perfectly nailed yet. But yeah, it 1:00:57 remembers you for seven days, and after that we don't track anymore. "Awesome, great question." 1:01:05 "Hi, thank you so much. So we're developing a new service that is a 1:01:10 live video AMA platform, ask-me-anything, which works pretty much like a video podcast, you can imagine. 1:01:17 Right now it's a human-to-human interaction: a human host talks to a human guest. Our goal is to integrate an AI 1:01:23 interviewer, so the question is how we integrate it. We tested with a live interactive demo on Zoom, 1:01:31 but we're using Amazon IVS. So the question is, how do we bring the interactive avatar into our video stream 1:01:37 using Amazon IVS?" Yeah, for our video delivery we're 1:01:44 currently using LiveKit for our distribution, so you can connect the 1:01:51 LiveKit output into any video stream that you want. They 1:01:57 have lots of SDKs that you can use, so it's just a matter of plumbing for the video part. 1:02:08 More questions? "All right, can a picture avatar be 1:02:14 turned into an interactive avatar, or does it have to be footage? And if yes, 1:02:20 how would you manage the personality and the movements of it?" 1:02:28 Yeah, so personality, we actually covered that a little bit. Personality 1:02:33 comes mostly from the words the avatar says, which is totally in 1:02:38 your control, depending on how you set up your LLM and the prompt you give it. And in terms 1:02:44 of turning a photo into an interactive avatar, we're actually working on that; Eddie shared the update too, it's on our 1:02:50 roadmap. We just want to make it better and higher quality, and we'll try to make it ready for users pretty 1:02:58 soon, in the next few weeks or months.
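Since, as just noted, the personality lives in the words the LLM is prompted to produce, here is a minimal sketch of composing such a persona prompt, including the deliberate disfluencies mentioned earlier; the function, fields, and wording are illustrative assumptions, not getitAI's actual prompt:

```python
def build_persona_prompt(name: str, brand: str, traits: list[str], disfluencies: bool = True) -> str:
    """Compose a system prompt that injects a brand persona ahead of the dialogue."""
    lines = [
        f"You are AI {name}, a shopping assistant for {brand}.",
        # A concrete backstory keeps replies from sounding like generic LLM output.
        f"Personality: {', '.join(traits)}. Respond as {name} would, in the brand's voice.",
        "Never pretend to be a person; you are clearly labeled as an AI.",
    ]
    if disfluencies:
        # Speech disfluencies make the downstream TTS sound more natural.
        lines.append('Occasionally use natural fillers like "um" and "uh."')
    return "\n".join(lines)

prompt = build_persona_prompt("Ryan", "True Classic", ["spunky", "silly", "funny"])
```

The resulting string would be passed as the system message to whichever LLM drives the avatar, upstream of the voice and face layers.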
Right, I'll add: any LLM 1:03:06 blog post that talks about prompt engineering will tell you the same thing: it's the information you give to the 1:03:14 system that creates the desired output. If you infuse the personality 1:03:22 that you want the result to have, and, if you follow it through the pipeline, that you want the 1:03:27 avatar to have, that's when it really feels like it's 1:03:33 been crafted, like it's an actual, developed, mature personality. If you don't inject that into the LLM, which 1:03:41 then gets given to the voice, which gets given to HeyGen, it just feels like any other LLM-generated personality, and 1:03:49 that's a least common denominator across all the training data the LLM has ingested. So it's 1:03:54 always best practice to give it a backstory, to be like, you know, this is who you are as an 1:04:01 LLM mind, and you'll see a much more realistic, human result on the other side 1:04:07 of that as well. Any more questions? 1:04:16 "First question: how much does one minute of interaction cost? 1:04:22 Do we know, for HeyGen?" For HeyGen, or for... "Oh, for you, forget it." 1:04:28 Yeah, for our customers in the end, we don't charge per minute 1:04:33 of interaction. What is the cost? Cost, right. Should I hand it over to 1:04:39 my team member, our head of growth? Cost depends on how many 1:04:44 sessions, how many sales are made, and how many SKUs there are, so it's a bit of a matrix. For example, if 1:04:52 there are 10,000 products, that's a lot of videos for us to have 1:04:58 generated to talk about products, so it costs a bit differently for that customer than it does for somebody who sells one 1:05:03 product, and there are plenty of single-product companies out there. Or, you know, go to Abbot Kinney and you'll see a 1:05:08 bunch of different brands; some of them have a thousand versions of shoes and some of
them have two shoes. So the 1:05:14 price structure looks a little bit different depending on how much avatar generation, because that's the most expensive part of our stack, is going to 1:05:20 be required in order to create that customer's experience. "Okay, are you going to open this up for consumers sometime?" 1:05:29 Maybe. Not yet. All right, everybody, more questions? 1:05:37 Anyone? Anyone? Oh, okay, I'm sorry I missed you. Here you go. 1:05:43 "So I'm wondering how this group can help you all." 1:05:53 Like, how can we... "Michael, how can we help you? Let's start with you, but then let's talk about HeyGen as 1:06:00 well." Yeah, there are a couple of ways I could answer that. The first 1:06:06 is, and this is kind of a meta point: if you see AI avatar technology in the 1:06:13 wild, use it, try it. Don't be afraid of it. It's going to suck at first, of course; 1:06:20 like any paradigm-shifting technology, it's a little rough around the edges at first. So, as people, it's 1:06:28 easy for us to be skeptical about these kinds of emerging technologies, but we will get them to a point where they're 1:06:34 really enjoyable and seamless and fit into our lives in a more natural and organic way; you've just got to 1:06:40 use them a bit. Either we have analytics on our side to watch when it doesn't go well and improve it, or, 1:06:46 as was the case when we were working with HeyGen over a year ago on their interactive avatars, we gave them direct 1:06:51 feedback. I was in Slack with Wayne, being like, "Wayne, this sucks right now, please," or calling Alec from 1:06:59 the lobby of MIT, being like, "I'm giving a live demo at MIT in 45 minutes and the latency is killing me right now." So 1:07:06 that's how people can help: by giving direct feedback to the source. So 1:07:12 if you see a getitAI avatar out there in the wild, take my LinkedIn, take my number, and text me, like, "I just tried it 1:07:18 and it wasn't working," so that
we can then go in and bug-smash it, and then eventually, before we realize it, it'll be 1:07:23 seamless enough that you don't even notice the rough edges. And on the other side 1:07:29 of that, if you know other brands, direct-to-consumer brands, that want to use this kind of tech, connect with me afterwards; we're still constantly onboarding people, 1:07:36 and that's the most tangible way you could help us. Yeah, I think Michael 1:07:42 gave... hello... I think Michael gave a really good answer, and as you've all seen, their team uses HeyGen better than 1:07:49 we do. We don't combine video and streaming all around like that, and they did a really amazing job. And actually, I 1:07:57 think from HeyGen's side, similarly, the best way to help us is to give us your feedback, anytime, anywhere, especially if you 1:08:05 have a real use case you wish to be further empowered in, or you wish to try 1:08:11 with whatever AI we're building, or there's something you think might work but you're still one or two 1:08:17 steps away and we need to build something further, so you can be unlocked to become 1:08:22 a creator, to promote your business better, to build a product better. All of those insights, those ideas, that 1:08:30 feedback, those complaints, those reports are the most valuable information for us, 1:08:38 to know where to go. My co-founder and I have been talking to users; 1:08:43 some of you mentioned you've been using HeyGen since two years ago, and maybe we've actually talked in one of our 1:08:49 previous user calls, because my co-founder and I have been trying to talk to users 1:08:54 every week, every day, for the past few years. The reason we've been able to 1:09:00 make kind of okay progress that's useful for people is because we keep 1:09:05 learning from all of you, so we'd really appreciate it if 1:09:11 you can all do that. Thank you for that question. Yeah, it's a great
question yeah uh there's a feedback button uh on the 1:09:18 interactive avatar um I read every one of those comments uh right before I 1:09:23 sleep so So you have really sweet dreams sweet dreams uh yeah uh yeah give us feedback 1:09:30 this entire field right now it's in its infancy we're seeing just the beginning 1:09:37 and um I'm we're very privileged to be uh a part of it and we need your 1:09:43 feedback to guide that creation um so this is all just the beginning so we 1:09:49 actually have like 10 plus feedback buttons in Hen if you if you notice yes all right Wayne do you want to close us 1:09:56 out say a few words okay do we do we keep the chair today or Yeah you can you 1:10:01 can keep saying you guys can hang out i I'll stand up i'll stand up yeah thanks everyone for coming this is a great Closing remarks from Wayne Liang (Co-Founder and Chief Innovation Officer, HeyGen) 1:10:07 event and you are all like amazing in terms of audience users and the questions also really appreciate Michael 1:10:14 for coming here to show everyone how amazing their product is for brands for 1:10:20 e-commerce and how they are much better at using HJ than ourselves i think I 1:10:26 said I just I'm shocked like how how how smart those ideas are and also thanks to 1:10:32 Eddie uh is oh Eddie was just here yeah 1:10:37 for walking us through you know like exciting product we've been developing and what's coming that will not only 1:10:44 empower like great product like sketis but also everyone of yours if you have any idea we we totally think um real 1:10:53 time interact communication with an avatar could be the future of interface of so many businesses to their end users 1:11:02 um yeah try it tell us what your use case is and uh let us know what what you need from us we are totally excited to 1:11:09 help you get it up and running thank you for coming and we look forward to see you next time 1:11:15 [Applause] awesome guys so we have this space until 1:11:21 7 p.m so 
please feel free to hang out network i'm going to put my colleagues on the spot really quick if you work at 1:11:27 Hey Jen raise your hand don't be shy come talk to us come talk to us especially if you guys want to learn 1:11:33 more about the product if you're interested in getting involved with the community either as a speaker for events 1:11:38 or joining a webinar things like that please chat with us um and lastly big thank you to the Kin for having us in 1:11:44 this lovely space um thank you guys all so much and enjoy the rest of the evening