diff --git a/src/routes/generative-ai/+page.svelte b/src/routes/generative-ai/+page.svelte
index beb6588e8b9a2..6092ea797a266 100644
--- a/src/routes/generative-ai/+page.svelte
+++ b/src/routes/generative-ai/+page.svelte
@@ -1,86 +1,261 @@
+
Use ONNX Runtime to accelerate this popular image generation model.
-
+ Generative AI refers to a type of artificial intelligence that creates new content—such as
+ text, images, audio, or code—based on patterns learned from existing data. Unlike
+ traditional AI models, which primarily classify or predict based on given inputs, generative
+ AI models produce entirely new outputs.
+
+ They accomplish this through advanced techniques like deep learning, often using models such
+ as Transformers and Generative Adversarial Networks (GANs). Examples include AI that generates
+ human-like text responses, creates realistic images from descriptions, or composes music. Generative
+ AI is driving innovation across industries by enabling personalized experiences, automating creative
+ processes, and opening new possibilities for content generation!
+
+ Text generation models are AI systems designed to generate human-like text based on
+ prompts. They're used in chatbots, content creation, summarization, and creative
+ writing. Check out our Llama 3 and Phi-3 demos below:
+
+
+ Other generative models create diverse outputs like code, video, or 3D designs. These
+ models expand creative possibilities, enabling automation and innovation in fields
+ ranging from software development to digital art.
+
+ Use ONNX Runtime Gen AI for its high performance, scalability, and flexibility in deploying
+ generative AI models. With support for diverse frameworks and hardware acceleration, it
+ ensures efficient, cost-effective model inference across various environments.
+
+Whether it be Desktop, Mobile, or Browser, run ONNX Runtime on the platform of your choosing!
+Run ORT GenAI locally, keeping your data private, and run inference however you desire.
+You aren't limited to just LLMs with ORT GenAI - you can use your favourite vision or (soon) omni models too.
+Getting ramped up is super easy! Get started with any of the examples below!
+The average latency in seconds on Stable Diffusion v1.5 and v2.1 models:
-
+ Raring to go? Bring your models to all platforms and get started with any of the following
+ tutorials and demos:
+
+ A desktop app demo to interact with text and images simultaneously.
+
+
+ An LLM chat app with UI. Pick your favourite model and get chatting!
+
+
- ONNX Runtime supports many popular large language model (LLM) families in the Hugging Face Model
- Hub. These, along with thousands of other models, are easily convertible to ONNX using the
- Optimum API.
-
-