{"version":"1.0","provider_name":"Surface Duo Blog","provider_url":"https:\/\/devblogs.microsoft.com\/surface-duo","author_name":"Craig Dunn","author_url":"https:\/\/devblogs.microsoft.com\/surface-duo\/author\/craigdunn\/","title":"Android tokenizer for OpenAI - Surface Duo Blog","type":"rich","width":600,"height":338,"html":"<blockquote class=\"wp-embedded-content\"><a href=\"https:\/\/devblogs.microsoft.com\/surface-duo\/android-openai-chatgpt-21\/\">Android tokenizer for OpenAI<\/a><\/blockquote>","thumbnail_url":"https:\/\/devblogs.microsoft.com\/surface-duo\/wp-content\/uploads\/sites\/53\/2023\/10\/a-screenshot-of-a-chat-description-automatically.png","thumbnail_width":1000,"thumbnail_height":755,"description":"Hello prompt engineers, The past few weeks we\u2019ve been extending JetchatAI\u2019s sliding window which manages the size of the chat API calls to stay under the model\u2019s token limit. The code we\u2019ve written so far has used a VERY rough estimate for determining the number of tokens being used in our LLM requests: val tokens [&hellip;]"}