Guiding Principles for Working with GenAI and SQL Data

Jun 5, 2024 | Blog

As software engineers and executives alike navigate the complex terrain of generative AI (GenAI), a clear set of guiding principles helps ensure the successful adoption of GenAI and SQL data strategies. These principles will not only enhance performance and innovation but will also align AI solutions with organizational goals, ethical standards, and regulatory compliance. By establishing a solid foundation, professionals at all levels can harness the full potential of these technologies while mitigating risks and fostering sustainable growth.

If you are building GenAI solutions for AI-driven decision making or AI-powered data insights, you should find these GenAI Engineering Guiding Principles helpful. And if you don’t have the time or resources to explore AI data strategies yourself, we are always here to help.

As we approach new projects, here are the guidelines we’ve created to keep us on track and increase the likelihood of positive outcomes.


Help the LLM make sense of data

If you’ve read this far, I’ll assume you understand Large Language Models (LLMs). But if not: an LLM is essentially an advanced artificial intelligence system designed to understand, generate, and manipulate human language, and it is used for tasks such as natural language processing, text generation, translation, and conversational agents. Crucially, it can only make sense of what it has been trained on or given as context.

When thinking about what an LLM can do with data, consider the challenges you would face in understanding that content without any previous context for the dataset. AI isn’t magic, it only looks like it is. Here’s a thought experiment to get you started.

Say someone drops an Excel spreadsheet in your lap that you’ve never seen before, and you have no context for what it is. What do you need to understand that dataset? Whatever it is, that’s what an LLM needs to know too!

  • If a column is named generically like x, y, col_1, or col_2, and you struggle to know what it might be, then an LLM will have trouble as well
  • If looking at definitions of those columns or renaming them makes it easier for you to understand those columns, it will make it easier for the LLM too
  • If you need to run some exploratory analysis on the data to understand what the columns might contain, an LLM may need to be given some of that context too (e.g. if you provide a list of unique string values from a “type” column, and those values are “SUV”, “Compact”, and “Truck”, the LLM will be much more likely to understand and create a useful definition of the column “type” than if that information were unavailable)
  • If you are completely at a loss as to what the data could represent after trying to figure it out without additional context, the LLM will be equally clueless
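The checklist above can be turned into a simple prompt-building step. Below is a minimal sketch (not any particular vendor’s tooling) of how sampled unique values might be attached to a schema description before it is sent to an LLM; the table and column names are hypothetical.

```python
# A minimal sketch of enriching a schema description with example
# values, giving the LLM the same context a human analyst would want.

def build_schema_context(table_name, columns, sample_values):
    """Render a table description pairing each column with example values."""
    lines = [f"Table: {table_name}"]
    for col in columns:
        samples = ", ".join(sample_values.get(col, []))
        hint = f" (example values: {samples})" if samples else ""
        lines.append(f"- {col}{hint}")
    return "\n".join(lines)

context = build_schema_context(
    "vehicles",
    ["type", "price", "col_1"],
    {"type": ["SUV", "Compact", "Truck"]},  # sampled unique values
)
```

A description like this makes ambiguous columns such as `type` far easier for the model to define, while `col_1` remains visibly underspecified — exactly the signal you want.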


AI is book smart, not street smart

Think of a Large Language Model like a new graduate: one with some general knowledge of professional tasks (they can generate basic code scripts in many languages, write snippets or a report, and are surprisingly good at language translation while surprisingly poor at some basic math), but who is clueless about the state of the world or your specific domain, and who needs to be slightly micromanaged on how to get things done.

  • When creating SQL statements, you may need to break them down into sub-query steps to make the individual tasks easier to complete
  • Sometimes you may need to guide the LLM agent in the exact process for writing the SQL statement
  • You may want to incorporate a peer reviewer, just like in the real world, to validate that the SQL syntax is correct and provide feedback to the LLM agent producing the code. The more your SQL developer agent struggles to produce good code, the more review loops are required.
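Taken together, these bullets describe a generate-and-review loop. The sketch below shows the shape of that loop; `generate_sql` and `review_sql` are hypothetical stand-ins for real LLM agent calls, and the returned SQL is illustrative only.

```python
# Hypothetical stand-ins for two LLM agents: a developer that drafts
# SQL and a reviewer that validates syntax and returns feedback.

def generate_sql(question, feedback=None):
    # In a real system this would prompt the developer LLM, including
    # any reviewer feedback from the previous loop iteration.
    return "SELECT type, MAX(price) AS max_price FROM vehicles GROUP BY type"

def review_sql(sql):
    # In a real system this would prompt the reviewer LLM.
    # Returns (approved, feedback).
    return True, ""

def write_sql_with_review(question, max_loops=3):
    """Loop between developer and reviewer until the SQL is approved."""
    feedback = None
    for _ in range(max_loops):
        sql = generate_sql(question, feedback)
        approved, feedback = review_sql(sql)
        if approved:
            return sql
    raise RuntimeError("Reviewer did not approve the SQL in time")

sql = write_sql_with_review("What is the highest sale price per body type?")
```

Capping the loop (`max_loops`) matters in practice: a struggling developer agent can otherwise ping-pong with the reviewer indefinitely.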


Just like IRL, subject matter experts are key

A small amount of highly relevant information is ideal when giving context to an LLM, just as you wouldn’t dump a pile of irrelevant information on your new graduate and expect a quick response. Focused context and instructions are best, which requires finding relevant context at runtime. Enter Vector Matching.

  • It’s worth noting that vector matching to provide relevant context works best when matching on the same type of content as the prompt
    • e.g. if someone is asking a question in a prompt, it’s best to match on a known embedded question
  • We also need to consider the needs of the LLM agents when determining what type of information to embed (hint: just the schema itself is not enough)
  • So, let’s play Jeopardy and create some microprompts! Microprompting is how we at ClearObject teach the models to be more accurate. Ask the LLM, “What questions would this piece of information be able to answer well?”, then provide additional instructions and tips to improve the likelihood of an accurate result.
    • This information, along with the typical schema information, can help the LLM agent produce better results
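To make the “match question to question” idea concrete, here is a toy sketch. A production system would use a real embedding model and a vector database; here `embed` is just a bag-of-words counter, and the microprompts (questions the LLM said each table could answer well) are hypothetical.

```python
from collections import Counter
import math

def embed(text):
    # Toy stand-in for an embedding model: a bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(count * b.get(word, 0) for word, count in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Microprompts: for each table, a question the LLM said it could
# answer well (hypothetical content).
microprompts = {
    "vehicles": "What was the most expensive car sold last year?",
    "service_visits": "How many repair visits did each vehicle have?",
}

def best_table(question):
    # Match the incoming question against known embedded questions,
    # not against raw schema text.
    scores = {t: cosine(embed(question), embed(q))
              for t, q in microprompts.items()}
    return max(scores, key=scores.get)
```

Because the user’s question and the stored microprompt are the same kind of text, the match is much stronger than matching a question against a bare schema.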


AI is like your new, partially trained employee

Imagine bringing a new team member, Gemma, onto the team. Gemma comes highly recommended and could really change the way you work. But you soon discover they have some strange quirks that you’ll need to figure out.

  1. When they joined the team, you first tasked Gemma with determining the most expensive car sold in the United States last year. They just made up a result based on something they saw a couple of years ago in an article online, and the number wasn’t even correct.
  2. Realizing that number was wrong, you ask how they came to that conclusion. Gemma has forgotten about the task altogether and doesn’t understand your question. You now know that you need to keep reminding Gemma of your conversation history on the subject, as they immediately forget everything you’ve talked about. To solve this, you’ve started asking Gemma to log their chat history and reference it when asked new questions.
  3. To help Gemma be successful, you decided to just send Gemma the database table information containing car data along with the question. But Gemma did not understand what the column names were, which, in hindsight, admittedly were kind of confusing.
  4. You try again, offering instructions on how to determine what the column names mean. Instead of just giving the answer, you suggest Gemma find the unique column values in each table and try to infer what the column names might mean from that information, especially knowing the table should contain information about cars. Gemma produced some great responses, such as understanding that “type” means the “car body type” after seeing the column name “type” alongside values like “truck” and “compact SUV”.
  5. Since Gemma has trouble remembering things, make sure to capture these column definitions and store them in a vector database for easy retrieval. You could even build a process where Gemma automatically creates those definitions and stores the results for future retrieval.
  6. At this stage, you ask the question again about the most expensive car sold in the US, taking advantage of the relevant stored context from the vector database. This time Gemma is able to write a reasonable SQL query, but it’s still a little off.
  7. Rather than doing the SQL review yourself, you might as well take advantage of your other new team member Gemmy, who is an expert in reviewing SQL code syntax. Gemmy is similar to Gemma in terms of forgetting things constantly, but you’ve already provided Gemmy with some very specific instructions on evaluating SQL specifically, and they are pretty good at spotting errors.
  8. So you ask Gemma to simply pass along any generated code to Gemmy for code review, and circle back if there are any errors. Once Gemmy gives the go-ahead, you can just execute the SQL and return the result.
  9. While it took some time to set up a process for Gemma’s unique way of working, and they’re by no means perfect, your GenAI tool is now able to execute this workflow consistently and produce some usable results.
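Gemma’s workflow above can be sketched end to end as a single pipeline. Every function below is a hypothetical stand-in for an LLM call, a vector-database lookup, or a database query, and all returned values are illustrative placeholders.

```python
def retrieve_context(question):
    # Steps 5-6: fetch stored column definitions from a vector database.
    return "vehicles.type = car body type; vehicles.price = sale price in USD"

def draft_sql(question, context, feedback=None):
    # "Gemma": drafts (or revises, given feedback) a SQL statement.
    return ("SELECT make, model, price FROM vehicles "
            "WHERE country = 'US' ORDER BY price DESC LIMIT 1")

def review_sql(sql):
    # "Gemmy": the syntax reviewer; returns (approved, feedback).
    return True, ""

def execute(sql):
    # Stand-in for running approved SQL against the real database.
    return [("ExampleMake", "ExampleModel", 250000)]

def answer(question, max_reviews=3):
    context = retrieve_context(question)      # grounding context
    feedback = None
    for _ in range(max_reviews):              # steps 7-8: review loop
        sql = draft_sql(question, context, feedback)
        approved, feedback = review_sql(sql)
        if approved:
            return execute(sql)
    raise RuntimeError("SQL never passed review")

rows = answer("What was the most expensive car sold in the US last year?")
```

Each piece can be improved independently: better microprompts improve `retrieve_context`, a sharper reviewer tightens the loop, and the orchestration itself stays the same.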


It’s about the destination, but you need to understand the journey

GenAI is about streamlining workflows, boosting creativity, and increasing productivity, among other things we probably haven’t even imagined. Most people care about the output, but the output will be subpar unless those teaching/guiding/engineering the AI understand all the details of the journey. 

These GenAI engineering guiding principles are just a start, but I do hope you find them useful as you design your next project. And of course, ClearObject can do that engineering for you.

Good luck, and happy coding!