Showing results for chatgpt - Surface Duo Blog

Oct 12, 2023

“Search the web” for up-to-date OpenAI chat responses

Craig Dunn

Hello prompt engineers, Over the course of this blog series, we have investigated different ways of augmenting the information available to an LLM when answering user queries, such as: However, there is still a challenge getting the model to answer with up-to-date “general information” (for example, if the ques...

openai, chatgpt
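The "search the web" approach above can be sketched with the OpenAI chat functions format: the model is offered a search function it can call when a question needs current information, and the app runs the search and feeds the result back. This is an illustrative sketch, not the post's actual implementation; the function name `search_web` and its handler are assumptions.

```python
import json

# Hypothetical function schema in the OpenAI chat "functions" format:
# the model can request a web search when it lacks up-to-date knowledge.
SEARCH_FUNCTION = {
    "name": "search_web",
    "description": "Search the web for current information the model does not know.",
    "parameters": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Search terms"}
        },
        "required": ["query"],
    },
}

def handle_response(message, run_search):
    """If the model asked to call the function, run the search and return
    its result as a 'function' role message to send back to the API;
    otherwise pass the assistant reply through unchanged."""
    call = message.get("function_call")
    if call and call["name"] == "search_web":
        args = json.loads(call["arguments"])
        return {"role": "function", "name": "search_web",
                "content": run_search(args["query"])}
    return message
```

The returned `function` message would be appended to the conversation and the chat API called again so the model can compose its final answer from the search results.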
Oct 5, 2023

Android tokenizer for OpenAI

Craig Dunn

Hello prompt engineers, The past few weeks we’ve been extending JetchatAI’s sliding window which manages the size of the chat API calls to stay under the model’s token limit. The code we’ve written so far has used a VERY rough estimate for determining the number of tokens being used in our LLM requests: This very simple approxim...

openai, chatgpt
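The "VERY rough estimate" the post mentions replacing is typically based on OpenAI's guidance that one token is roughly four characters of English text. A minimal sketch of that heuristic (a proper tokenizer such as a tiktoken BPE port gives exact per-model counts):

```python
# Crude token estimate: ~4 characters per token for English text.
# Good enough for a conservative budget check, but not exact --
# a real BPE tokenizer is needed for precise counts.
def rough_token_count(text: str) -> int:
    return max(1, len(text) // 4)
```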
Sep 28, 2023

Speech-to-speech conversing with OpenAI on Android

Craig Dunn, Kristen Halper

Hello prompt engineers, Just this week, OpenAI announced that their chat app and website can now ‘hear and speak’. In a huge coincidence (originally inspired by this Azure OpenAI speech to speech doc), we’ve added similar functionality to our Jetpack Compose LLM chat sample based on Jetchat. The screenshot below shows the two new ...

openai, chatgpt
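The speech-to-speech flow described above is a three-stage pipeline: recognize speech, send the transcript to the chat model, then speak the reply. A minimal sketch, where the three callables stand in for Android's SpeechRecognizer, the OpenAI chat API, and TextToSpeech (all names here are illustrative, not the sample's actual classes):

```python
# One conversational turn of a speech-to-speech loop.
# listen() -> user's words as text; chat(text) -> model reply;
# speak(text) -> plays the reply aloud.
def voice_turn(listen, chat, speak):
    user_text = listen()      # speech -> text
    reply = chat(user_text)   # text -> LLM reply
    speak(reply)              # text -> speech
    return reply
```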
Sep 21, 2023

Infinite chat with history embeddings

Craig Dunn

Hello prompt engineers, The last few posts have been about the different ways to create an ‘infinite chat’, where the conversation between the user and an LLM is not limited by the token size limit and as much historical context as possible can be used to answer future queries. We previously covered: These are te...

openai, chatgpt
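History-embedding retrieval as described above can be sketched as: store each past message with its embedding vector, then for a new query rank stored messages by cosine similarity and pull the best matches back into the prompt. The toy vectors below stand in for real embeddings-API output:

```python
import math

def cosine(a, b):
    # cosine similarity between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def most_relevant(history, query_vec, top_n=2):
    """history: list of (message_text, embedding) pairs.
    Returns the top_n message texts most similar to the query."""
    ranked = sorted(history, key=lambda item: cosine(item[1], query_vec),
                    reverse=True)
    return [text for text, _ in ranked[:top_n]]
```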
Sep 14, 2023

“Infinite” chat with history summarization

Craig Dunn

Hello prompt engineers, A few weeks ago we talked about token limits on LLM chat APIs and how this prevents an infinite amount of history being remembered as context. A sliding window can limit the overall context size, and making the sliding window more efficient can help maximize the amount of context sent with each new chat query. ...

openai, chatgpt
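History summarization can be sketched as: when the conversation no longer fits the token budget, collapse the oldest messages into a single summary message (produced by an extra LLM call, stubbed here as a callable) so recent turns stay verbatim. Token counts use the crude 4-characters-per-token estimate; the shape of the loop, not the numbers, is the point:

```python
def estimate_tokens(text):
    # rough ~4 characters per token heuristic
    return max(1, len(text) // 4)

def compress_history(messages, budget, summarize):
    """Repeatedly merge the two oldest messages into one summary
    (via the summarize callable) until the history fits the budget."""
    while sum(estimate_tokens(m) for m in messages) > budget and len(messages) > 1:
        messages = [summarize(messages[0], messages[1])] + messages[2:]
    return messages
```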
Sep 7, 2023

De-duplicating context in the chat sliding window

Craig Dunn

Hello prompt engineers, Last week’s post discussed the concept of a sliding window to keep recent context while preventing LLM chat prompts from exceeding the model’s token limit. The approach involved adding context to the prompt until we've reached the maximum number of tokens the model can accept, then ignoring any remaining older mes...

openai, chatgpt
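De-duplication can be sketched as: before a message is added to the prompt context, skip it if an effectively identical message is already in the window, freeing tokens for older unique context. Crude string normalization stands in here for whatever similarity test the post actually uses:

```python
def dedupe_window(messages):
    """Keep only the first occurrence of each (normalized) message."""
    seen = set()
    unique = []
    for msg in messages:
        key = msg.strip().lower()   # crude normalization, illustrative only
        if key not in seen:
            seen.add(key)
            unique.append(msg)
    return unique
```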
Aug 31, 2023

Infinite chat using a sliding window

Craig Dunn

Hello prompt engineers, There are a number of different strategies to support an ‘infinite chat’ using an LLM, required because large language models do not store ‘state’ across API requests and there is a limit to how large a single request can be. In this OpenAI community question on token limit differences in API vs Chat, user ...

openai, chatgpt
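The sliding-window idea can be sketched as: walk the conversation newest-first, adding messages until the token budget is spent, then drop everything older; the kept messages are returned in chronological order for the API call. Token counts use the rough 4-characters-per-token estimate:

```python
def estimate_tokens(text):
    # rough ~4 characters per token heuristic
    return max(1, len(text) // 4)

def sliding_window(messages, budget):
    """Keep the newest messages that fit within the token budget."""
    window, used = [], 0
    for msg in reversed(messages):
        cost = estimate_tokens(msg)
        if used + cost > budget:
            break   # older messages no longer fit; stop here
        window.append(msg)
        used += cost
    return list(reversed(window))
```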
Aug 24, 2023

OpenAI tokens and limits

Craig Dunn

Hello prompt engineers, The Jetchat demo that we’ve been covering in this blog series uses the OpenAI Chat API, and in each blog post where we add new features, it supports conversations with a reasonable number of replies. However, just like any LLM request API, there are limits to the number of tokens that can be processed, and the API...

openai, chatgpt
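A key detail of those limits: the model's context window covers the prompt *and* the completion together, so the usable prompt budget is the window minus whatever is reserved for the reply. A sketch of that arithmetic (4,096 was gpt-3.5-turbo's window at the time; treat the numbers as illustrative):

```python
def prompt_budget(context_window=4096, reserved_for_reply=1024):
    # tokens left for the prompt after reserving room for the completion
    return context_window - reserved_for_reply

def fits(prompt_tokens, context_window=4096, reserved_for_reply=1024):
    # would this prompt still leave room for the reply?
    return prompt_tokens <= prompt_budget(context_window, reserved_for_reply)
```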
Aug 17, 2023

Prompt engineering tips

Craig Dunn

Hello prompt engineers, We’ve been sharing a lot of OpenAI content the last few months, and because each blog post typically focuses on a specific feature or API, there’s often smaller learnings or discoveries that don’t get mentioned or highlighted. In this blog we’re sharing a few little tweaks that we discovered when creating ...

openai, chatgpt
Aug 10, 2023

Dynamic SQLite queries with OpenAI chat functions

Craig Dunn

Hello prompt engineers, Previous blogs explained how to add droidcon session favorites to a database and also cache the embedding vectors in a database – but what if we stored everything in a database and then let the model query it directly? The OpenAI Cookbook examples repo includes a section on how to call functions with model ...

openai, chatgpt
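"Letting the model query it directly" can be sketched as: expose one chat function whose argument is a SQL string the model writes, and execute it read-only against the database. The `ask_database` name and the `sessions` schema below are illustrative, not the post's actual code:

```python
import sqlite3

def ask_database(conn, sql):
    """Execute model-generated SQL against the database.
    Refuse anything other than a simple read, since the SQL
    comes from the model rather than the app."""
    if not sql.lstrip().lower().startswith("select"):
        raise ValueError("only SELECT queries are allowed")
    return conn.execute(sql).fetchall()

# toy in-memory database standing in for the droidcon sessions store
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sessions (title TEXT, speaker TEXT)")
conn.execute("INSERT INTO sessions VALUES ('Jetpack Compose', 'Craig')")
rows = ask_database(conn, "SELECT title FROM sessions WHERE speaker = 'Craig'")
```

The guard clause matters: because the query text is produced by the model, the app should constrain what it can execute rather than run arbitrary statements.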