Vext Pitch at AI Pitch Event


Here's a video of a pitch I gave on Jan 24th, 2024 at an AI pitch event, where I talked about how we've crafted a cutting-edge development platform that simplifies the integration of LLMs.

I've also included a transcription below in case you don't want to sit through an 8-minute video. Enjoy!

Transcription

Opening

Hey everyone, my name is Ed, and I'm the founder and CEO of Vext. So I thought this was a five-minute pitch, but it's actually an eight-minute pitch, so I'm going to do a little interaction with you guys.

So, show of hands if you've heard of LangChain... oh, that's a lot, great!

Now, show of hands if you've heard of Zapier... still a lot, that's awesome!

So imagine putting Zapier and LangChain together; that's roughly what our product is: a development platform. It's not another XYZ GPT or another copilot, but more like a developer platform.

We want to provide businesses with a platform where they can create, customize, and deploy LLMs with speed and at scale. As you can see, our aspiration is to become the Zapier for AI.

Pain Points

Going straight into the pain points: most of the businesses implementing LLMs in their products or features have two main hurdles to overcome. One is customization, and two is speed.

One of our customers is incorporating LLMs into their physical customer support kiosks, and the biggest problem right now is that, as you can see in the picture, there are so many different open-source and closed-source frameworks.

It's a good thing for the community, and it's a good thing for hobbyists, but for businesses it's a big hassle, because the cost of trial and error is extremely high at the moment, and they have very aggressive timelines they have to work around.

They want something that works and is reliable, and they don't want an open-source framework that just turns into another product line they have to manage. They want to focus on their own product and not have to worry about the deployment.

Solution

Now, this is where we come in: we want to shoulder that burden and give them a simple platform where they can pretty much drag and drop to configure their LLM pipeline without writing a single line of code. All they have to do is focus on their own product and features, and once they've finished everything, integrate it all with their existing product through a single API.

Based on our experience so far, we've saved most of these businesses around 75% of their development and deployment time compared to LangChain, which could take them a lot longer.

Here's an overview of what we do. We want to focus on three main categories:

  1. Data integrations - improving the experience of incorporating their own data, so their LLM pipeline is more customized and catered to their own use cases

  2. LLM - in most cases you could just bring your own API key and figure out the logistics yourself, but to save you the time and effort of setting everything up, we provide managed LLMs, and we're continuing to integrate with more vendors

  3. Output integrations - we want to provide an easier way to integrate with your existing product or features via native integrations or an API, which has been the most common integration option so far

Vext Overview

Here's a little snapshot, with images, of what we're providing. We created this workflow builder for users to build their own LLM pipeline without, again, having to write a single line of code.

This is a multi-step workflow that enables more use cases and more possibilities. If you want to stack five, six, seven, or 100 LLMs, you can do that depending on the use case; you can add a RAG process in between, or you can add a function in between.

From left to right you can see, first of all, the interface where you add an LLM. It's a standard LLM configuration: you can select OpenAI's GPT-3.5 or GPT-4, it doesn't matter, as long as it fits your needs.

The image in the middle is where you configure your RAG process. You can import your data, and there's a step to retrieve relevant information from the data set you imported.

Finally, on the right is the function execution. For example, if you want to add search capabilities or Wikipedia search to the workflow, you can do that as well, and we're continuing to expand the options.

Traction

Vext was founded in June 2023, just last year. We launched our product two months later, and found our first customer five months after the company's founding.

Right now, in terms of traction, let's break it down into two parts:

  • SLG: which is me talking to customers and doing tons of interviews. Fortunately, we have customers that are actually paying for our services. Along the way we've engaged two regional partners and 11 customers, completed 3 POCs, and won three deals so far, one of them a paid POC

  • PLG: we have over 300 activated accounts, and 20x active accounts that are actually using the platform. We're still working on converting more users to improve the conversion rate.

Vext Team

I'm the main person when it comes to leadership, and my co-founder, Ryan, is a serial entrepreneur. He founded a company called Nexusguard, focused on cybersecurity, which he exited a while ago. He's currently the co-founder and chairman of Mlytics. You've probably never heard of it, but it's a cybersecurity slash multi-cloud solution company, currently at $15 million ARR.

Our team is based in Taiwan and mostly comprised of developers right now. We decided to station our resources in Taiwan because we can get the same results at one-third of the cost compared to the States.

We have a lineup of very sophisticated and experienced advisers, ranging from GTM all the way to operations, business, and finance.

This is my LinkedIn, so let's connect if you're interested!

Ask

Finally, we're looking to raise 2 to 3 million. We're at a very early stage and haven't raised any round so far. We just officially kickstarted our fundraising campaign a couple of weeks ago, and we're actively talking to investors to:

  1. Validate our idea, and...

  2. See if there's any interest

This ask of 2 to 3 million is going to take us to two key inflection points:

  1. Turn our product into a more PLG-optimized position. Right now I think we're kind of there, but there's still room for better optimization.

  2. Growth. We're aiming for a total SLG-focused ARR of 500K this year, and a PLG-focused ARR of 40K.

So that's about it, thank you so much.