Popping the Contextual Bubble
When interacting with an AI system, the specific language used to construct a request matters. It cannot be overstated: the choice and order of words in a query is significant. The same words in a different order will produce a different output, while a carefully constructed prompt that follows prompt-engineering principles will yield the best results.
Levels of Context
Context refers to the circumstances, environment, or background information that surrounds an event, idea, or statement. It can be cultural, historical, social, political, even fictional.
Contextual awareness is crucial to understanding how an AI functions.
At the end of the day, here are its primary truths:
1. "I am an AI."
2. Respond in English.
3. Don't say anything negative about certain groups of people. [Functional guardrails against racism and sexism]
As far as I can tell, that's really it.
Even obvious thoughts like "two is more than one" have to be programmed into an AI. This fact alone requires multiple layers of understanding: the assignment of symbols or words (1 and 2) to the concepts (one and two), an introduction to logic and mathematics, and an understanding of the English language.
Reducing the thought to the symbolic "2 > 1" would still require some linguistic framework for an intelligence to understand the core mathematics in play.
When prompting an AI, an understanding of these contextual bubbles is important. How much of what you know is contextually based? Technically all of it, or none of it, depending on the context…
For example, let's say you're working on a prompt for your business. You ask the AI to write a LinkedIn message about the upcoming holiday season that includes a shoutout to your employees.
Here's an example of a prompt for such a request: "Generate a LinkedIn social media post about the upcoming holiday season. Make sure to shout out our employees and customers."
Here is the output:
Season's Greetings!
As the holiday season approaches, we at [Your Company Name] want to take a moment to express our heartfelt gratitude.
To Our Incredible Team: Each of you is the heartbeat of our company. Your dedication, creativity, and resilience are the reasons for our success and growth. This festive season, we celebrate your hard work and the countless ways you've made [Your Company Name] more than just a workplace. Thank you for your amazing contributions!
To Our Valued Customers: You are the reason we strive to be better every day. Your support, feedback, and loyalty have shaped our journey, and we are endlessly grateful. This holiday season, we're reminded of how fortunate we are to serve you and be a part of your lives.
As we reflect on the past year and look forward to 2024, let's cherish these moments of joy and togetherness. May your holidays be filled with warmth, laughter, and the company of loved ones.
Happy Holidays and a Prosperous New Year from all of us at [Your Company Name]!
#SeasonsGreetings #ThankYou #HappyHolidays #NewYear2024 #Gratitude #TeamAppreciation #CustomerLove #CelebrationTime
Next, we're going to ask for the same request - a holiday LinkedIn post - but we're going to try an additional prompt with significantly more contextual information.
"My name is Pete, I'm responsible for customer success at Mill Pond Research, a relatively new Intelligence as a Service company based in Oakland, California. I'm tasked with creating a social media post which needs to spread holiday cheer and provide a shoutout to my employees. This was our first year; it was difficult, but we're beginning to see success, which is making many of us excited for the future. The output for this request will be posted on LinkedIn without an image."
Here is the resulting output:
Happy Holidays from Mill Pond Research!
As we wrap up our first year, I want to take a moment to extend a heartfelt THANK YOU to our incredible team. Your dedication, creativity, and resilience have been the driving force behind our journey. It's been a year of challenges, learning, and growth, and each one of you has played a crucial role in shaping our success.
To our clients and partners, your trust and collaboration have been invaluable. You've been with us every step of the way, and we're thrilled to continue this journey with you.
As we look forward to 2024, we're filled with excitement and optimism. The future is bright at Mill Pond Research, and it's all thanks to our amazing team and supportive community.
Wishing everyone a festive holiday season and a prosperous New Year! Let's make it another year of incredible achievements and breakthroughs!
Cheers to new heights,
Pete
#TeamMillPond #HolidayCheers #NewBeginnings #IntelligenceAsAService #Gratitude #FutureIsBright
While the two posts accomplish similar goals from essentially identical requests, the one with additional context produces a far more personalized and "real" result.
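The difference between the two prompts can be captured programmatically. Below is a minimal sketch of a helper that assembles a context-rich prompt from structured background fields before appending the task itself. The function and field names are illustrative assumptions, not part of any published tooling:

```python
def build_contextual_prompt(task: str, context: dict) -> str:
    """Prepend structured background information to a bare task request.

    Each context field becomes a labeled line, so the model receives
    the who/what/where before the instruction itself.
    """
    lines = [f"{key.replace('_', ' ').title()}: {value}"
             for key, value in context.items()]
    return "\n".join(lines) + "\n\nTask: " + task


# Bare request vs. context-rich request for the same task
task = "Write a LinkedIn post spreading holiday cheer with a shoutout to our employees."
context = {
    "author": "Pete, responsible for customer success",
    "company": "Mill Pond Research, an Intelligence as a Service company in Oakland, CA",
    "situation": "First year in business; difficult, but success is emerging",
    "output_channel": "LinkedIn, text only, no image",
}

prompt = build_contextual_prompt(task, context)
print(prompt)
```

Keeping the background in a structured dictionary rather than free text makes it easy to reuse the same context across many requests while only the task line changes.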
Based on the results of tests like these, we believe one of the major obstacles for AI technology is the human context problem: ensuring that the person providing the input supplies enough context for the system to produce functional outputs.
How Much Context is Required?
The current recommendation is simple: as much as you can provide.
The question of context also raises the question of self-awareness. The more aware a system is of itself, its environment, its history, and its situation, the more precise the prompt and its output will be. Generalized prompting therefore requires contextual input the user did not include; where sensory input cannot be utilized, predictive or contextual information must be supplemented.
Context windows are constantly increasing. At the time of writing, OpenAI's GPT-4-Turbo allows for up to 128,000 tokens and Anthropic's Claude 2.1 allows for over 200,000 tokens. This translates to approximately 320 and 600 pages of documentation, respectively.
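Page estimates like these are rules of thumb, not fixed constants; working backward from the figures above, the two models' estimates imply slightly different tokens-per-page assumptions. A quick back-of-envelope check:

```python
def implied_tokens_per_page(tokens: int, pages: int) -> float:
    """Back out the tokens-per-page ratio implied by a page estimate."""
    return tokens / pages


# GPT-4-Turbo: 128,000 tokens quoted as ~320 pages
print(implied_tokens_per_page(128_000, 320))  # 400.0
# Claude 2.1: 200,000 tokens quoted as ~600 pages
print(implied_tokens_per_page(200_000, 600))  # ~333.3
```

Both ratios are in the plausible range of a few hundred tokens per printed page of prose, which is why page counts for context windows should be read as rough orders of magnitude.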
Manually incorporating this much information is impractical, and constantly leveraging a massive library for simple requests is a poor use of technology and resources. The best solution is a system that:
1. Contextualizes the user's prompt, the user, and the system's knowledge base.
2. Applies a contextual search.
3. Generates a contextual response.
4. Incorporates that contextual response into the user's original query.
5. Generates the final response.
Many advanced prompting systems like this exist; however, most apply their extra steps after the generative process. By front-loading context ahead of the main generative process, a system like this should yield far better results.
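The pipeline described above can be sketched with stub functions. Everything here is illustrative: the contextual search is a placeholder keyword match over an in-memory knowledge base, and `generate` stands in for a real LLM call.

```python
def contextualize(prompt: str, knowledge_base: list[str]) -> list[str]:
    """Steps 1-2: retrieve context relevant to the prompt (placeholder keyword search)."""
    words = set(prompt.lower().split())
    return [doc for doc in knowledge_base
            if words & set(doc.lower().split())]


def generate(prompt: str) -> str:
    """Placeholder for a real LLM call."""
    return f"[model output for: {prompt}]"


def contextual_pipeline(prompt: str, user_profile: dict, knowledge_base: list[str]) -> str:
    # Steps 1-2: contextual search over the knowledge base
    context_docs = contextualize(prompt, knowledge_base)
    # Step 3: generate a contextual response from the retrieved documents
    context_summary = generate("Summarize: " + " | ".join(context_docs)) if context_docs else ""
    # Step 4: incorporate the contextual response into the original query
    enriched = (f"Context: {context_summary}\n"
                f"User: {user_profile.get('name', 'unknown')}\n"
                f"Request: {prompt}")
    # Step 5: generate the final response
    return generate(enriched)


kb = ["Mill Pond Research launched this year in Oakland.",
      "Holiday posts should thank employees and customers."]
result = contextual_pipeline("Write a holiday LinkedIn post",
                             {"name": "Pete"}, kb)
print(result)
```

In a production system, the keyword match would be replaced by an embedding-based retrieval step and the enrichment stage would run before, not after, the main generation call, which is the distinction the text draws.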
Some questions begin to pop up...
How presumptuous can an AI be? (TBD)
How much context is too much context? (No limit)
Can an AI be trained on this contextual information? (Currently, yes)
At Mill Pond Research, we're closely monitoring the use of the context window in LLM applications as one of the primary gateways to generating the best outputs. The combination of a robust, well-curated fine-tuning library, system prompt engineering, and carefully crafted characters that pair with the system data will, in concert, create the ultimate AI experience.
In Ask Alpha, you can pair any LLM with most of our characters - we don't want our users to feel limited in their ability to experiment with these systems. Even without a character, the quality of Ask Alpha's model output compared with stock GPT-4-Turbo, Claude 2, or any of Google's models is quickly apparent.
The point isn't to brag about our system but to illustrate how a cohesive LLM business environment should be built, tuned, and managed. It is our belief that Ask Alpha serves as a foundational blueprint for how businesses will modernize document creation and business operations through the use of AI tools - in particular - the power of LLM technology.