Unleashing AI with PostgreSQL: Customizing Generative AI

August 30, 2024

EDB is committed to sharing our Postgres knowledge and expertise with the vibrant open source community and unlocking the potential of Postgres for the AI workloads of the future. In our recent video series Unleashing AI with PostgreSQL, EDB Chief Architect for Analytics and AI Torsten Steinbach explored critical AI concepts like lakehouses, embeddings, vector databases, and feature engineering. One topic that’s worth a closer look is generative AI and how to customize it with your own private data.

EDB Chief Architect for Analytics and AI Torsten Steinbach walks through the process of customizing generative AI. Watch the video.

How public data makes its way to generative AI

Generative AI applications are built on top of large language models (LLMs). Public data, which is basically the vast amount of information available on the internet, is used to train these large language models. Before this training can occur, however, the data needs to be cleansed, formatted, and preprocessed. 
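As a simple illustration of that preparation step, the sketch below (a hypothetical, pure-Python example, not a production pipeline) strips leftover markup, normalizes whitespace, and splits text into fixed-size chunks ready for training or embedding:

```python
import re

def clean_text(raw: str) -> str:
    """Minimal cleansing pass: remove HTML tags and collapse whitespace."""
    no_tags = re.sub(r"<[^>]+>", " ", raw)         # strip markup remnants
    return re.sub(r"\s+", " ", no_tags).strip()    # normalize whitespace

def chunk_text(text: str, max_words: int = 50) -> list[str]:
    """Split cleaned text into fixed-size word chunks for downstream processing."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

raw = "<p>PostgreSQL  is an open source database.</p>"
cleaned = clean_text(raw)
print(cleaned)  # -> PostgreSQL is an open source database.
```

Real pipelines add steps such as deduplication, language filtering, and tokenization, but the shape is the same: raw documents in, uniform training-ready records out.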

Because training these massive language models is extremely costly and computationally intensive, many commercial vendors offer generic pre-trained models that reduce the barriers to entry. There are also open initiatives and ecosystems, such as Hugging Face, that provide services at a low cost or for free.

How to train generative AI with your own private data

Customization refers to tailoring a generative AI model to your own private or domain-specific data not available in the public training datasets used by commercial vendors. Just like public data, proprietary data that's unique to your organization or industry needs to be prepared before it can be incorporated into a generative AI solution.

Once your data has been prepared, there are two fundamental paths for customizing a model with it:

Fine-tuning approach:

• An existing generic LLM is fine-tuned (further trained) on the prepared private data.

• This produces a custom LLM that can be prompted for custom outputs.

• Retraining is required for new data, which is time-consuming and computationally expensive.

Retrieval-augmented generation (RAG) approach:

• Prepared private data is converted to vector embeddings and stored in a vector store.

• During prompting, relevant embeddings are retrieved from the vector store and used to augment the prompt to the generic LLM.

• This allows incorporating new data in real time without retraining the LLM.
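The retrieval step above can be sketched in a few lines. This is a minimal, self-contained illustration (not EDB's implementation): it stands in a toy bag-of-words vector for a real embedding model, and a Python list for a real vector store such as Postgres with pgvector.

```python
import math
import re
from collections import Counter

STOPWORDS = {"what", "is", "the", "a", "an", "in", "our", "within"}

def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words count vector.
    A real system would call an embedding model here."""
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(w for w in words if w not in STOPWORDS)

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# "Vector store": private documents with precomputed embeddings.
docs = [
    "Our refund policy allows returns within 30 days",
    "The headquarters is located in Boston",
]
store = [(doc, embed(doc)) for doc in docs]

def retrieve(question: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the question."""
    q = embed(question)
    ranked = sorted(store, key=lambda item: cosine(q, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

question = "What is the refund policy?"
context = retrieve(question)[0]
augmented_prompt = f"Context: {context}\n\nQuestion: {question}"
print(augmented_prompt)
```

The augmented prompt, rather than the bare question, is what gets sent to the generic LLM, which is why new documents become usable the moment they are embedded and stored.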

These two approaches can be combined, with fine-tuning producing a custom base model and RAG incorporating the latest data. Given RAG's flexibility, it's not surprising that it has become the dominant approach for generative AI solutions. We'll cover RAG in more detail in our next Unleashing AI blog article.

Do more with your data with EDB Postgres AI

EDB Postgres AI brings analytical and AI systems closer to your organization's core operational and transactional data. With EDB Postgres AI, you can seamlessly store, query, and analyze vector embeddings (text, video, or images) with the trusted Postgres open source database without adding operational overhead. Want to learn more? Just ask us.

Watch the video for more tips on customizing generative AI.

Read the white paper: Intelligent Data: Unleashing AI with PostgreSQL
