Introducing AI21 Labs Jamba-Instruct in Amazon Bedrock
We are thrilled to announce the availability of the Jamba-Instruct large language model (LLM) in Amazon Bedrock. Built by AI21 Labs, Jamba-Instruct supports a 256,000-token context window, making it especially valuable for processing large documents and for complex Retrieval Augmented Generation (RAG) applications.
What is Jamba-Instruct?
Jamba-Instruct is an instruction-tuned version of the Jamba base model, a production-grade model that combines Structured State Space model (SSM) technology with the Transformer architecture. This hybrid approach allows Jamba-Instruct to achieve the largest context window in its model size class while delivering strong performance. Compared to AI21’s previous generation of models, the Jurassic-2 family, Jamba-Instruct provides a significant performance boost. For more information on the SSM/Transformer architecture, refer to the Jamba: A Hybrid Transformer-Mamba Language Model whitepaper.
Getting Started
To begin using Jamba-Instruct models in Amazon Bedrock, follow these steps:
- Visit the Amazon Bedrock console and navigate to Model access in the navigation pane.
- Choose Modify model access.
- Select the AI21 Labs models you want to use and choose Next.
- Submit your request for model access.
For further details, refer to the Model access documentation. Once you have access, you can test the model in the Amazon Bedrock Text or Chat playground.
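If you prefer to confirm programmatically which AI21 Labs models are available to your account, the following is a minimal sketch using the Amazon Bedrock control-plane client in boto3. The Region and the byProvider filter string ("AI21 Labs") are assumptions and may need adjusting for your account.

```python
import boto3

# Control-plane client for Amazon Bedrock (note: "bedrock", not "bedrock-runtime")
bedrock = boto3.client(service_name="bedrock", region_name="us-east-1")

# List AI21 Labs foundation models; the provider filter string is an assumption
response = bedrock.list_foundation_models(byProvider="AI21 Labs")

for model in response["modelSummaries"]:
    print(model["modelId"])
```

If `ai21.jamba-instruct-v1:0` appears in the output, the model is visible in your chosen Region and you can proceed to request or use access as described above.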
Example Use Cases
Jamba-Instruct’s long context length is ideal for complex RAG workloads and document analysis. It can be used to detect contradictions between documents, analyze documents in context, and perform query augmentation. Additionally, Jamba-Instruct supports standard LLM operations like summarization and entity extraction.
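As an illustration of the document-analysis use case, the following is a minimal sketch that sends a long document for summarization through the Amazon Bedrock Converse API. The document text, prompt wording, and inference settings are placeholders rather than recommendations, and the example assumes you have already been granted access to the model.

```python
import boto3

# Runtime client for model invocation
bedrock_runtime = boto3.client(service_name="bedrock-runtime", region_name="us-east-1")

# Placeholder: in practice this would be the full text of a large document,
# taking advantage of the 256,000-token context window
document_text = "INSERT LONG DOCUMENT TEXT HERE"

response = bedrock_runtime.converse(
    modelId="ai21.jamba-instruct-v1:0",
    messages=[
        {
            "role": "user",
            "content": [
                {"text": f"Summarize the key points of the following document:\n\n{document_text}"}
            ],
        }
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.3},
)

# The Converse API returns the assistant message under output.message.content
print(response["output"]["message"]["content"][0]["text"])
```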
For detailed guidance on prompts and use cases, consult the AI21 model documentation and the Built for the Enterprise: Introducing AI21’s Jamba-Instruct Model documentation.
Programmatic Access
Access Jamba-Instruct via an API using Amazon Bedrock and the AWS SDK for Python (Boto3). Below is a sample code snippet to get you started:
```python
import boto3
import json

# Insert your prompt here
prompt = "INSERT YOUR PROMPT HERE"

# Model ID for Jamba-Instruct
modelId = "ai21.jamba-instruct-v1:0"

# Create a Boto3 client for the Bedrock runtime
bedrock = boto3.client(service_name="bedrock-runtime")

# Define the request body
body = json.dumps({
    "messages": [{"role": "user", "content": prompt}],
    "max_tokens": 256,
    "top_p": 0.8,
    "temperature": 0.7,
})

# Invoke the model
response = bedrock.invoke_model(
    body=body,
    modelId=modelId,
    accept="application/json",
    contentType="application/json",
)

# Parse and print the response
result = json.loads(response.get("body").read())
print(result["choices"][0]["message"]["content"])
```
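For long outputs, you may prefer to stream tokens as they are generated rather than wait for the full completion. The following is a minimal sketch that reuses the `bedrock` client and `body` defined above with `invoke_model_with_response_stream`. The chunk structure shown (`choices[0].delta.content`) is an assumption based on the non-streaming response shape, so verify the exact streaming payload against the AI21 model documentation.

```python
# Stream the response instead of waiting for the full completion
stream_response = bedrock.invoke_model_with_response_stream(
    body=body,
    modelId=modelId,
    accept="application/json",
    contentType="application/json",
)

# Each event carries a JSON chunk; the delta path below is an assumption
# based on the non-streaming response format shown above
for event in stream_response["body"]:
    chunk = json.loads(event["chunk"]["bytes"])
    delta = chunk.get("choices", [{}])[0].get("delta", {})
    print(delta.get("content", ""), end="", flush=True)
```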
Conclusion
AI21 Labs Jamba-Instruct in Amazon Bedrock is a powerful tool for applications requiring a long context window. The innovative SSM/Transformer hybrid architecture offers enhanced model throughput and performance. With Jamba-Instruct, you can tackle tasks like summarization, document analysis, and query augmentation with ease.
If you’re interested in exploring the capabilities of AI21 Labs Jamba-Instruct in Amazon Bedrock, visit the Amazon Bedrock console in the US East (N. Virginia) AWS Region. For detailed information and guidance, refer to the Supported foundation models in Amazon Bedrock documentation.
About the Authors
Joshua Broyde, PhD, and Fernando Espigares Caballero are experts in generative AI solutions and cloud technologies. Together, they bring a wealth of experience and knowledge to the development and deployment of AI21 Labs Jamba-Instruct in Amazon Bedrock.