Model caches
This notebook covers how to cache results of individual LLM calls using different caches.
First, let's install some dependencies:
%pip install -qU langchain-openai langchain-community
import os
from getpass import getpass
if "OPENAI_API_KEY" not in os.environ:
os.environ["OPENAI_API_KEY"] = getpass()
from langchain.globals import set_llm_cache
from langchain_openai import OpenAI
# To make the caching really obvious, let's use a slower and older model.
# Caching supports newer chat models as well.
llm = OpenAI(model="gpt-3.5-turbo-instruct", n=2, best_of=2)
In-Memory Cache
from langchain_community.cache import InMemoryCache
set_llm_cache(InMemoryCache())
%%time
# The first time, it is not yet in cache, so it should take longer
llm.invoke("Tell me a joke")
CPU times: user 7.57 ms, sys: 8.22 ms, total: 15.8 ms
Wall time: 649 ms
"\n\nWhy couldn't the bicycle stand up by itself? Because it was two-tired!"
%%time
# The second time it is, so it goes faster
llm.invoke("Tell me a joke")
CPU times: user 551 µs, sys: 221 µs, total: 772 µs
Wall time: 1.23 ms
"\n\nWhy couldn't the bicycle stand up by itself? Because it was two-tired!"
SQLite Cache
# Remove any existing SQLite cache file so this example starts fresh
!rm -f .langchain.db
# We can do the same thing with a SQLite cache
from langchain_community.cache import SQLiteCache
set_llm_cache(SQLiteCache(database_path=".langchain.db"))
%%time
# The first time, it is not yet in cache, so it should take longer
llm.invoke("Tell me a joke")
CPU times: user 12.6 ms, sys: 3.51 ms, total: 16.1 ms
Wall time: 486 ms
"\n\nWhy couldn't the bicycle stand up by itself? Because it was two-tired!"
%%time
# The second time it is, so it goes faster
llm.invoke("Tell me a joke")
CPU times: user 52.6 ms, sys: 57.7 ms, total: 110 ms
Wall time: 113 ms
"\n\nWhy couldn't the bicycle stand up by itself? Because it was two-tired!"
Upstash Redis Cache
Standard Cache
Use Upstash Redis to cache prompts and responses with a serverless HTTP API.
%pip install -qU upstash_redis
from langchain_community.cache import UpstashRedisCache
from upstash_redis import Redis

URL = "<UPSTASH_REDIS_REST_URL>"
TOKEN = "<UPSTASH_REDIS_REST_TOKEN>"

set_llm_cache(UpstashRedisCache(redis_=Redis(url=URL, token=TOKEN)))
%%time
# The first time, it is not yet in cache, so it should take longer
llm.invoke("Tell me a joke")
CPU times: user 7.56 ms, sys: 2.98 ms, total: 10.5 ms
Wall time: 1.14 s
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'
%%time
# The second time it is, so it goes faster
llm.invoke("Tell me a joke")
CPU times: user 2.78 ms, sys: 1.95 ms, total: 4.73 ms
Wall time: 82.9 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'
Semantic Cache
Use Upstash Vector to do a semantic similarity search and cache the most similar response in the database. Vectorization is handled automatically by the embedding model you select when creating the Upstash Vector database.
%pip install -qU upstash-semantic-cache
from langchain.globals import set_llm_cache
from upstash_semantic_cache import SemanticCache
UPSTASH_VECTOR_REST_URL = "<UPSTASH_VECTOR_REST_URL>"
UPSTASH_VECTOR_REST_TOKEN = "<UPSTASH_VECTOR_REST_TOKEN>"
cache = SemanticCache(
    url=UPSTASH_VECTOR_REST_URL, token=UPSTASH_VECTOR_REST_TOKEN, min_proximity=0.7
)
set_llm_cache(cache)
%%time
llm.invoke("Which city is the most crowded city in the USA?")
CPU times: user 28.4 ms, sys: 3.93 ms, total: 32.3 ms
Wall time: 1.89 s
'\n\nNew York City is the most crowded city in the USA.'
%%time
llm.invoke("Which city has the highest population in the USA?")
CPU times: user 3.22 ms, sys: 940 μs, total: 4.16 ms
Wall time: 97.7 ms
'\n\nNew York City is the most crowded city in the USA.'
Redis Cache
See the main Redis cache docs for more detail.
Standard Cache
Use Redis to cache prompts and responses.
%pip install -qU redis
# We can do the same thing with a Redis cache
# (make sure your local Redis instance is running before you run this example)
from langchain_community.cache import RedisCache
from redis import Redis
set_llm_cache(RedisCache(redis_=Redis()))
%%time
# The first time, it is not yet in cache, so it should take longer
llm.invoke("Tell me a joke")
CPU times: user 6.88 ms, sys: 8.75 ms, total: 15.6 ms
Wall time: 1.04 s
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'
%%time
# The second time it is, so it goes faster
llm.invoke("Tell me a joke")
CPU times: user 1.59 ms, sys: 610 µs, total: 2.2 ms
Wall time: 5.58 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'
Semantic Cache
Use Redis to cache prompts and responses and evaluate hits based on semantic similarity.
%pip install -qU redis
from langchain_community.cache import RedisSemanticCache
from langchain_openai import OpenAIEmbeddings
set_llm_cache(
    RedisSemanticCache(redis_url="redis://localhost:6379", embedding=OpenAIEmbeddings())
)
%%time
# The first time, it is not yet in cache, so it should take longer
llm.invoke("Tell me a joke")
CPU times: user 351 ms, sys: 156 ms, total: 507 ms
Wall time: 3.37 s
"\n\nWhy don't scientists trust atoms?\nBecause they make up everything."
%%time
# The second time, while not a direct hit, the question is semantically similar to the original question,
# so it uses the cached result!
llm.invoke("Tell me one joke")
CPU times: user 6.25 ms, sys: 2.72 ms, total: 8.97 ms
Wall time: 262 ms
"\n\nWhy don't scientists trust atoms?\nBecause they make up everything."
GPTCache
We can use GPTCache for exact-match caching, or to cache results based on semantic similarity.
Let's start with an example of exact-match caching.
%pip install -qU gptcache
import hashlib
from gptcache import Cache
from gptcache.manager.factory import manager_factory
from gptcache.processor.pre import get_prompt
from langchain_community.cache import GPTCache
def get_hashed_name(name):
    return hashlib.sha256(name.encode()).hexdigest()


def init_gptcache(cache_obj: Cache, llm: str):
    hashed_llm = get_hashed_name(llm)
    cache_obj.init(
        pre_embedding_func=get_prompt,
        data_manager=manager_factory(manager="map", data_dir=f"map_cache_{hashed_llm}"),
    )
set_llm_cache(GPTCache(init_gptcache))
%%time
# The first time, it is not yet in cache, so it should take longer
llm.invoke("Tell me a joke")
CPU times: user 21.5 ms, sys: 21.3 ms, total: 42.8 ms
Wall time: 6.2 s
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'
%%time
# The second time it is, so it goes faster
llm.invoke("Tell me a joke")
CPU times: user 571 µs, sys: 43 µs, total: 614 µs
Wall time: 635 µs
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'
Let's now show an example of similarity caching.
import hashlib
from gptcache import Cache
from gptcache.adapter.api import init_similar_cache
from langchain_community.cache import GPTCache
def get_hashed_name(name):
    return hashlib.sha256(name.encode()).hexdigest()


def init_gptcache(cache_obj: Cache, llm: str):
    hashed_llm = get_hashed_name(llm)
    init_similar_cache(cache_obj=cache_obj, data_dir=f"similar_cache_{hashed_llm}")
set_llm_cache(GPTCache(init_gptcache))
%%time
# The first time, it is not yet in cache, so it should take longer
llm.invoke("Tell me a joke")
CPU times: user 1.42 s, sys: 279 ms, total: 1.7 s
Wall time: 8.44 s
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
%%time
# This is an exact match, so it finds it in the cache
llm.invoke("Tell me a joke")
CPU times: user 866 ms, sys: 20 ms, total: 886 ms
Wall time: 226 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
%%time
# This is not an exact match, but it is within the similarity distance threshold, so it still hits the cache!
llm.invoke("Tell me joke")
CPU times: user 853 ms, sys: 14.8 ms, total: 868 ms
Wall time: 224 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'
MongoDB Atlas Cache
MongoDB Atlas is a fully managed cloud database available in AWS, Azure, and GCP. It has native support for Vector Search across your MongoDB document data. Use MongoDB Atlas Vector Search to semantically cache prompts and responses.
MongoDBCache
MongoDBCache is an abstraction for storing a simple cache in MongoDB. It does not use semantic caching, and it does not require an index to be created on the collection before generation.
To import this cache, first install the required dependency:
%pip install -qU langchain-mongodb
from langchain_mongodb.cache import MongoDBCache
To use this cache with your LLMs:
from langchain_core.globals import set_llm_cache
mongodb_atlas_uri = "<YOUR_CONNECTION_STRING>"
COLLECTION_NAME = "<YOUR_CACHE_COLLECTION_NAME>"
DATABASE_NAME = "<YOUR_DATABASE_NAME>"

set_llm_cache(MongoDBCache(
    connection_string=mongodb_atlas_uri,
    collection_name=COLLECTION_NAME,
    database_name=DATABASE_NAME,
))
MongoDBAtlasSemanticCache
Semantic caching allows users to retrieve cached prompts based on the semantic similarity between the user input and previously cached results. Under the hood, it uses MongoDB Atlas as both a cache and a vector store.
The MongoDBAtlasSemanticCache inherits from MongoDBAtlasVectorSearch and needs an Atlas Vector Search Index defined to work. See the usage example for how to set up the index.
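For reference, the snippet below is a minimal sketch of creating such an index with pymongo. The vector field name ("embedding"), the dimension count (1536), and the index name ("default") are assumptions that depend on your embedding model and on how the cache collection stores documents, so treat the usage example referenced above as authoritative.
from pymongo import MongoClient
from pymongo.operations import SearchIndexModel

client = MongoClient("<YOUR_CONNECTION_STRING>")
collection = client["<YOUR_DATABASE_NAME>"]["<YOUR_CACHE_COLLECTION_NAME>"]

# Hypothetical vector index: "embedding" field, 1536 dimensions, cosine similarity.
# Adjust the path and numDimensions to match the embedding model you actually use.
index_model = SearchIndexModel(
    definition={
        "fields": [
            {
                "type": "vector",
                "path": "embedding",
                "numDimensions": 1536,
                "similarity": "cosine",
            }
        ]
    },
    name="default",
    type="vectorSearch",
)
collection.create_search_index(model=index_model)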
To import this cache:
from langchain_mongodb.cache import MongoDBAtlasSemanticCache
To use this cache with your LLMs:
from langchain_core.globals import set_llm_cache

# use any embedding provider; FakeEmbeddings is only a placeholder for testing
from langchain_core.embeddings import FakeEmbeddings

mongodb_atlas_uri = "<YOUR_CONNECTION_STRING>"
COLLECTION_NAME = "<YOUR_CACHE_COLLECTION_NAME>"
DATABASE_NAME = "<YOUR_DATABASE_NAME>"

set_llm_cache(MongoDBAtlasSemanticCache(
    embedding=FakeEmbeddings(size=1536),
    connection_string=mongodb_atlas_uri,
    collection_name=COLLECTION_NAME,
    database_name=DATABASE_NAME,
))
To find more resources about using MongoDBAtlasSemanticCache, see the langchain-mongodb documentation.