**Infinite chat using a sliding window** — Craig Dunn, Surface Duo Blog
https://devblogs.microsoft.com/surface-duo/android-openai-chatgpt-16/

> Hello prompt engineers, There are a number of different strategies to support an 'infinite chat' using an LLM, required because large language models do not store 'state' across API requests and there is a limit to how large a single request can be. In this OpenAI community question on token limit differences in API vs […]
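Since the model keeps no state between API calls, the client must resend enough history with each request to fit the context limit. A minimal sketch of the sliding-window idea the post's title refers to (all names here are illustrative, not from the linked post): keep only the most recent messages whose combined token cost fits a budget.

```python
def sliding_window(messages, max_tokens, count_tokens):
    """Return the most recent messages whose combined token count fits max_tokens.

    messages      -- chat history, oldest first
    max_tokens    -- token budget for the request
    count_tokens  -- callable estimating the token cost of one message
    """
    window = []
    total = 0
    # Walk backwards from the newest message; stop once the budget is exceeded,
    # dropping the oldest messages first.
    for msg in reversed(messages):
        cost = count_tokens(msg)
        if total + cost > max_tokens:
            break
        window.insert(0, msg)
        total += cost
    return window
```

On each turn the client would append the new user message to the full history, then send only `sliding_window(history, budget, counter)` to the API; in a real app the token counter would be the model's actual tokenizer rather than a rough estimate.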