{"id":30885,"date":"2022-09-21T15:04:55","date_gmt":"2022-09-21T15:04:55","guid":{"rendered":"https:\/\/devblogs.microsoft.com\/cppblog\/?p=30885"},"modified":"2022-09-27T16:37:55","modified_gmt":"2022-09-27T16:37:55","slug":"a-visual-studio-debugger-extension-for-the-raspberry-pi-camera","status":"publish","type":"post","link":"https:\/\/devblogs.microsoft.com\/cppblog\/a-visual-studio-debugger-extension-for-the-raspberry-pi-camera\/","title":{"rendered":"A Visual Studio Debugger Extension for the Raspberry Pi Camera"},"content":{"rendered":"<p>While developing a sample application for a Raspberry Pi with a camera using the <a href=\"https:\/\/github.com\/rmsalinas\/raspicam\">RaspiCam<\/a> library, it occurred to me that it would be convenient and fun to be able to see the current camera input while debugging the application. \u00a0The Visual Studio debugger supports type-specific custom visualizers and end-user extensions that implement UI for these visualizers. I decided to make one for the RaspiCam camera types that would display the current image from the camera. 
The image below is the end result, showing Visual Studio debugging a program running on the Raspberry Pi and displaying the content of a Raspberry Pi camera object in a pop-up debugger visualizer.<\/p>\n<p><img decoding=\"async\" class=\"aligncenter\" src=\"https:\/\/devblogs.microsoft.com\/cppblog\/wp-content\/uploads\/sites\/9\/2022\/08\/Picture1.png\" alt=\"Raspberry Pi displaying camera output\" \/><\/p>\n<h5>Build a RaspiCam Application<\/h5>\n<p>I needed a sample Raspberry Pi camera application. I downloaded the <a href=\"https:\/\/gnutoolchains.com\/raspberry\/\">Raspberry Pi SDK<\/a> and a CMake toolchain to my Windows machine. I downloaded the <a href=\"https:\/\/github.com\/rmsalinas\/raspicam\">RaspiCam camera library<\/a> and <a href=\"https:\/\/github.com\/opencv\/opencv\">OpenCV<\/a>, and used the Raspberry Pi toolchain to build both of these repositories. I then created a new CMake project in Visual Studio, wrote a simple camera application, and linked it against these two libraries. My <code>CMakePresets.json<\/code> file contains this configuration:<\/p>\n<pre class=\"prettyprint\">  \"configurePresets\": [\r\n    {\r\n      \"name\": \"raspberrypi-Debug\",\r\n      \"generator\": \"Ninja\",\r\n      \"binaryDir\": \"${sourceDir}\/out\/build\/${presetName}\",\r\n      \"installDir\": \"${sourceDir}\/out\/install\/${presetName}\",\r\n      \"cacheVariables\": {\r\n        \"CMAKE_BUILD_TYPE\": \"Debug\",\r\n        \"CMAKE_TOOLCHAIN_FILE\": \"C:\/temp\/raspberry\/toolchain-rpi.cmake\",\r\n        \"OpenCV_DIR\": \"D:\\\\opencv-4.5.5\\\\opencv-4.5.5\\\\out\\\\install\\\\RaspberryPi-Debug\\\\lib\\\\cmake\\\\opencv4\",\r\n        \"raspicam_DIR\": \"D:\\\\raspicam\\\\out\\\\install\\\\raspberrypi-Debug\\\\lib\\\\cmake\"\r\n      },\r\n      \"environment\": {\r\n        \"RASPBIAN_ROOTFS\": \"c:\/temp\/raspberry\/arm-linux-gnueabihf\/sysroot\",\r\n        \"PATH\": \"c:\/temp\/raspberry\/bin;$env{Path}\"\r\n      }\r\n    }\r\n  ],\r\n  \"buildPresets\": [\r\n    {\r\n      \"name\": \"cross-build\",\r\n      \"environment\": {\r\n        \"PATH\": \"c:\/temp\/raspberry\/bin;$penv{PATH}\"\r\n      },\r\n      \"configurePreset\": \"raspberrypi-Debug\"\r\n    }\r\n  ]\r\n<\/pre>\n<p>My Raspberry Pi application uses the OpenCV library and the <code>RaspiCam_Still_Cv<\/code> camera type for capturing and manipulating the camera 
image.<\/p>\n<pre>raspicam::RaspiCam_Still_Cv Camera;\r\ncv::Mat image;\r\n\r\nCamera.open();\r\nCamera.grab();\r\nCamera.retrieve(image);\r\ncv::imwrite(\"raspicam_cv_image.jpg\", image);<\/pre>\n<p>I wanted to debug this application on the Pi, so I added the launch configuration below. It instructs the debugger to first deploy the application to my Raspberry Pi device in the directory <code>~\/camera<\/code>. I copied all the shared OpenCV and RaspiCam libraries I needed to this directory too.<\/p>\n<pre>{\r\n  \"type\": \"cppgdb\",\r\n  \"name\": \"DebugOnPi\",\r\n  \"project\": \"CMakeLists.txt\",\r\n  \"projectTarget\": \"simpletest_raspicam\",\r\n  \"debuggerConfiguration\": \"gdb\",\r\n  \"MIMode\": \"gdb\",\r\n  \"args\": [],\r\n  \"env\": {},\r\n  \"deployDirectory\": \"~\/camera\",\r\n  \"remoteMachineName\": \"<em>&lt;your-connection-name-here&gt;<\/em>\",\r\n  \"preDebugCommand\": \"export LD_LIBRARY_PATH=~\/camera\"\r\n}<\/pre>\n<p>I could now debug on the Raspberry Pi.<\/p>\n<h5>Add Visualizers to the Linux Debugger<\/h5>\n<p>Although the Visual Studio Windows debugger supports UI visualizers, it turned out that the Visual Studio Linux debugger did not have this feature implemented. Since the Raspberry Pi runs Raspbian, a Debian Linux variant, the first thing I had to do was fix that. The Linux debugger is an open source project called <a href=\"https:\/\/github.com\/Microsoft\/MIEngine\">MIEngine<\/a>. The MIEngine runs as a Visual Studio debugger \u201cengine\u201d that controls a gdb process running remotely on a Linux host. 
The MIEngine already supported custom variable visualization using <a href=\"https:\/\/docs.microsoft.com\/en-us\/visualstudio\/debugger\/create-custom-views-of-native-objects?view=vs-2022\">natvis<\/a> files (example below). However, it did not support the <code>UIVisualizer<\/code> tag in natvis files, which is necessary to open a new visualization window containing, for example, a camera image. The work described here is now part of the MIEngine and in Visual Studio. The completed PR with the changes is <a href=\"https:\/\/github.com\/microsoft\/MIEngine\/pull\/1281\">here<\/a>. Remember, it\u2019s open source, so you too can contribute in this way!<\/p>\n<pre>&lt;?xml version=\"1.0\" encoding=\"utf-8\"?&gt;\r\n&lt;AutoVisualizer xmlns=\"http:\/\/schemas.microsoft.com\/vstudio\/debugger\/natvis\/2010\"&gt;\r\n  &lt;UIVisualizer ServiceId=\"{0A73397B-D550-4BFE-94C9-C0E5122DC06F}\" Id=\"1\"\r\n                MenuName=\"Raspberry Still Camera (OpenCV) Visualizer\"\/&gt;\r\n\r\n  &lt;Type Name=\"raspicam::RaspiCam_Still_Cv\"&gt;\r\n    &lt;UIVisualizer ServiceId=\"{0A73397B-D550-4BFE-94C9-C0E5122DC06F}\" Id=\"1\" \/&gt;\r\n  &lt;\/Type&gt;\r\n&lt;\/AutoVisualizer&gt;<\/pre>\n<p>There were two changes I needed to make to the MIEngine: first, to recognize and process the <code>UIVisualizer<\/code> element in natvis files; second, to automatically load natvis files for registered Visual Studio extensions.<\/p>\n<ul>\n<li>Recognizing the element was simple. The MIEngine already parsed natvis files; it was just ignoring the <code>UIVisualizer<\/code> element. 
I simply changed the natvis lookup code to return not just the visualized string value for an expression, but also a list of any <code>UIVisualizers<\/code> present for the expression type. The MIEngine returns an expression value to the Visual Studio debugger as an <code><a href=\"https:\/\/github.com\/microsoft\/MIEngine\/blob\/main\/src\/MIDebugEngine\/AD7.Impl\/AD7Property.cs\">AD7Property<\/a><\/code>, so I updated the <code>AD7Property<\/code> class to also return the list of <code>UIVisualizers<\/code> found.<\/li>\n<li>The MIEngine makes requests of Visual Studio via its <code><a href=\"https:\/\/github.com\/microsoft\/MIEngine\/tree\/main\/src\/DebugEngineHost\">DebugEngineHost<\/a><\/code>. To support visualization, I needed to call into Visual Studio from the <code>DebugEngineHost<\/code> and ask it for an <code>IVsExtensionManagerPrivate<\/code> service reference. I then asked the resulting service for a list of all resources tagged as \u201c<code>NativeCrossPlatformVisualizer<\/code>\u201d. It returned a list of file names for resources that have this tag. These are handed to the MIEngine natvis processor for parsing.<\/li>\n<\/ul>\n<p>Finally, there was an additional feature missing from the MIEngine. It didn\u2019t have a convenient way for a caller holding an <code>AD7Property<\/code> object to determine the execution context (that is, the stack frame) where the expression that generated that property was evaluated. So, I extended the <code>AD7Property<\/code> class to implement a new interface, <code><a href=\"https:\/\/github.com\/microsoft\/MIEngine\/blob\/main\/src\/DebugEngineHost.Stub\/Shared\/Microsoft.VisualStudio.Debugger.Interop.MI.cs\">IDebugMIEngineProperty<\/a><\/code>, which contains a method for returning the property\u2019s <code>IDebugExpressionContext2<\/code>. 
Now all the parts are in place for developing a <code>UIVisualizer<\/code> for Linux applications.<\/p>\n<h5>Create a Visualizer VSIX Project<\/h5>\n<p>The <code>UIVisualizer<\/code> for a data type is identified in natvis by a GUID <em>ServiceId<\/em> and an integer <em>Id<\/em>. These are the values the debugger uses to find and open the associated visualizer. The <em>ServiceId<\/em> identifies a type that implements <a href=\"https:\/\/docs.microsoft.com\/en-us\/dotnet\/api\/microsoft.visualstudio.debugger.interop.ivscppdebuguivisualizer?view=visualstudiosdk-2022\"><code>IVsCppDebugUIVisualizer<\/code><\/a>. The <em>Id<\/em> identifies which of the service\u2019s visualizers to open, in case it supports more than one. I needed to author a service for my camera picture viewer as a Visual Studio extension.<\/p>\n<p>To start, I created a VSIX (Visual Studio extension) project called PictureViewer (you need to have the \u201cVisual Studio extension development\u201d workload installed). Then I defined an interface for my visualizer, tagged it with my viewer\u2019s <em>ServiceId<\/em>, and attributed the VSIX package object with this interface. I now had to define the relationship between the <code>UIVisualizer<\/code> element in the natvis file and this VSIX package.<\/p>\n<pre>[Guid(\"0A73397B-D550-4BFE-94C9-C0E5122DC06F\")]\r\npublic interface IPictureViewerService\r\n{\r\n}\r\n\u2026\r\n[PackageRegistration(UseManagedResourcesOnly = true, AllowsBackgroundLoading = true)]\r\n[ProvideService(typeof(IPictureViewerService), ServiceName = \"PictureViewerService\", IsAsyncQueryable = true)]\r\n[Guid(PictureViewerPackage.PackageGuidString)]\r\npublic sealed class PictureViewerPackage : AsyncPackage\r\n{\r\n\u2026\r\n}<\/pre>\n<p>I then added a new <code>PictureViewerService<\/code> that implements both <code>IPictureViewerService<\/code> and <code>IVsCppDebugUIVisualizer<\/code>. 
The only method it implements is <code>IVsCppDebugUIVisualizer.DisplayValue<\/code>.<\/p>\n<pre>int DisplayValue(uint ownerHwnd, uint visualizerId, IDebugProperty3 debugProperty)<\/pre>\n<p>This is the API called by the debugger to display a value using a custom <code>UIVisualizer<\/code>. The already evaluated expression value is passed to the visualizer in the <code>debugProperty<\/code> object.<\/p>\n<p>I implemented my UI in the class <code>PictureViewerViewModel<\/code>. The UI in this example is very simple: just an Image and a Button in a pop-up window. When the service activates the UI, the view model object fires off a task to load the image from the application being debugged and then invokes <code><a href=\"https:\/\/docs.microsoft.com\/en-us\/dotnet\/api\/system.windows.window.showdialog?view=netframework-4.7.2\">ShowDialog<\/a><\/code>. The XAML snippet below defines the content of the dialog.<\/p>\n<pre>&lt;Grid Margin=\"4\"&gt;\r\n  &lt;Grid.RowDefinitions&gt;\r\n    &lt;RowDefinition Height=\"*\" \/&gt;\r\n    &lt;RowDefinition Height=\"Auto\" \/&gt;\r\n  &lt;\/Grid.RowDefinitions&gt;\r\n  &lt;Image Width=\"{Binding Image.Width}\" Height=\"{Binding Image.Height}\"\r\n         Source=\"{Binding Image}\"\/&gt;\r\n  &lt;Button Grid.Row=\"1\" Margin=\"4,0\" x:Uid=\"LoadNext_Button\"\r\n          Content=\"Next\"\r\n          Command=\"{Binding ClickCommand, Mode=OneTime}\" \/&gt;\r\n&lt;\/Grid&gt;<\/pre>\n<p>The view model is where the work is done to populate the UI from the camera content. 
In this case I was interested in retrieving JPEG content from the camera and using it to create a <a href=\"https:\/\/docs.microsoft.com\/en-us\/dotnet\/api\/system.windows.media.imaging.bitmapimage?view=netframework-4.7.2\">BitmapImage<\/a>. To get an image from a camera object, the debugger must perform the same operations the application itself would. Or, more accurately, the debugger must instruct the debuggee to perform these steps using expression evaluation. The code below uses Visual Studio debugger APIs to evaluate a string expression given an <code>IDebugMIEngineProperty<\/code> <em>context<\/em>. It returns both a string with the debugger\u2019s result of the evaluation and a memory context object that can be used to read the raw bytes of that result.<\/p>\n<pre>private DEBUG_PROPERTY_INFO EvalExpression(string expr, out IDebugMemoryContext2 ppMemory)\r\n{\r\n   \/\/ \"context\" is the IDebugExpressionContext2 obtained from the visualized property\r\n   if (context.ParseText(expr, enum_PARSEFLAGS.PARSE_EXPRESSION, 10, out IDebugExpression2 ppExpr, out string error, out uint errorCode) != VSConstants.S_OK)\r\n   {\r\n      throw new ApplicationException($\"Failed to parse expression '{expr}'.\");\r\n   }\r\n\r\n   if (ppExpr.EvaluateSync(0, 0, null, out IDebugProperty2 prop) != VSConstants.S_OK)\r\n   {\r\n      throw new ApplicationException($\"Failed to evaluate expression '{expr}'.\");\r\n   }\r\n\r\n   DEBUG_PROPERTY_INFO[] value = new DEBUG_PROPERTY_INFO[1];\r\n   if (prop.GetPropertyInfo(enum_DEBUGPROP_INFO_FLAGS.DEBUGPROP_INFO_VALUE, 10, 0, null, 0, value) != VSConstants.S_OK)\r\n   {\r\n      throw new ApplicationException($\"Failed to get expression value for '{expr}'.\");\r\n   }\r\n\r\n   if (prop.GetMemoryContext(out ppMemory) != VSConstants.S_OK)\r\n   {\r\n      ppMemory = null;\r\n   }\r\n\r\n   return value[0];\r\n}<\/pre>\n<p>Now I 
needed to implement the right sequence of expression evaluations. I based the sequence on the sample C++ code below, which retrieves an in-memory jpg image from a camera object.<\/p>\n<pre>raspicam::RaspiCam_Still_Cv Camera;\r\n...\r\ncv::Mat image;\r\nCamera.grab();\r\nCamera.retrieve(image);\r\n\r\nvector&lt;uchar&gt; jpgImage;\r\nvector&lt;int&gt; jpgParams;\r\ncv::InputArray ia = cv::_InputArray(image);\r\nstring jpg = \".jpg\";\r\n\r\ncv::imencode(jpg, ia, jpgImage, jpgParams);<\/pre>\n<p>Since I have registered the visualizer for the camera type, the <code>debugProperty<\/code> object that is passed into the visualizer will contain the result of evaluating a <code>RaspiCam_Still_Cv<\/code> object. It can be cast to an <code>IDebugMIEngineProperty<\/code> and used to get a context for evaluating expressions. The end result is that the debuggee application creates an in-memory object containing a JPEG image. The debugger then reads the bytes from debuggee memory and creates the <code>BitmapImage<\/code>.<\/p>\n<p>The first thing you might notice is that this sequence of evaluations involves creating several objects: <em>image<\/em>, <em>jpgImage<\/em>, <em>jpgParams<\/em>, <em>ia<\/em>, and <em>jpg<\/em>. These objects do not exist in the debuggee, so they must be created. To create an object via expression evaluation, the debugger must first malloc the memory for the object and then invoke the object\u2019s constructor. 
I do this in <code>MakeHeapObject<\/code> below:<\/p>\n<pre>private string MakeHeapObject(string type, string constructor, string cparams = \"\")\r\n{\r\n   var address = this.EvalExpression($\"malloc(sizeof({type}))\");\r\n   if (!address.bstrValue.StartsWith(\"0x\"))\r\n   {\r\n      throw new ApplicationException(address.bstrValue);\r\n   }\r\n\r\n   var init = this.EvalExpression(\r\n               $\"(({type} *){address.bstrValue})-&gt;{constructor}({cparams})\");\r\n   return address.bstrValue;\r\n}<\/pre>\n<p>For example, to make an object of type \u201c<code>vector&lt;int&gt;<\/code>,\u201d the visualizer calls:<\/p>\n<pre>this.MakeHeapObject(\"std::vector&lt;int, std::allocator&lt;int&gt; &gt;\", \"vector\")<\/pre>\n<p>A string containing the new object\u2019s address in the debuggee is returned, for example \u201c<code>0xf7003421<\/code>\u201d. The object is similarly deleted in <code>FreeHeapObject<\/code> when it is no longer needed. Notice that evaluating expressions like this can introduce interactions in the debuggee that would not otherwise have occurred. It may not be safe to create and destroy objects that affect shared resources.<\/p>\n<p>Now all the pieces are in place for evaluating the sequence of expressions necessary to get the jpg image into an in-memory byte array in the debuggee. Once this is done, the <code>IDebugMemoryContext2<\/code> is used to read the bytes from debuggee memory into a buffer in Visual Studio. The <code>BitmapImage<\/code> is created from a <a href=\"https:\/\/docs.microsoft.com\/en-us\/dotnet\/api\/system.io.memorystream?view=netframework-4.7.2\">MemoryStream<\/a> built on that buffer.<\/p>\n<p>The VSIX project also needs to contain the .natvis file with the type-to-visualizer mapping (see the top of this section). The VSIX manifest points to the file and tags it as a \u201c<code>NativeCrossPlatformVisualizer<\/code>\u201d asset.<\/p>\n<pre>&lt;Asset Type=\"NativeCrossPlatformVisualizer\" Path=\"raspicam.natvis\"\/&gt;<\/pre>\n<p>All the details can be found in <a href=\"https:\/\/github.com\/microsoft\/VisualStudioMIEnginePictureViewer\">this git repository<\/a>, including the sample CMake application (above) to test with.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>While developing a sample application for a Raspberry Pi with a camera using the RaspiCam library, it occurred to me that it would be convenient and fun to be able to see the current camera input while debugging the application. 
\u00a0The Visual Studio debugger supports type-specific custom visualizers and end-user extensions that implement UI for [&hellip;]<\/p>\n","protected":false},"author":16035,"featured_media":35994,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1,279],"tags":[],"class_list":["post-30885","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-cplusplus","category-linux"],"acf":[],"blog_post_summary":"<p>While developing a sample application for a Raspberry Pi with a camera using the RaspiCam library, it occurred to me that it would be convenient and fun to be able to see the current camera input while debugging the application. \u00a0The Visual Studio debugger supports type-specific custom visualizers and end-user extensions that implement UI for [&hellip;]<\/p>\n","_links":{"self":[{"href":"https:\/\/devblogs.microsoft.com\/cppblog\/wp-json\/wp\/v2\/posts\/30885","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/devblogs.microsoft.com\/cppblog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/devblogs.microsoft.com\/cppblog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/cppblog\/wp-json\/wp\/v2\/users\/16035"}],"replies":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/cppblog\/wp-json\/wp\/v2\/comments?post=30885"}],"version-history":[{"count":0,"href":"https:\/\/devblogs.microsoft.com\/cppblog\/wp-json\/wp\/v2\/posts\/30885\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/cppblog\/wp-json\/wp\/v2\/media\/35994"}],"wp:attachment":[{"href":"https:\/\/devblogs.microsoft.com\/cppblog\/wp-json\/wp\/v2\/media?parent=30885"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/devblogs.microsoft.com\/cppblog\/wp-json\/wp\/v2\/categories?post=30885"},{"taxonomy":"post_tag","e
mbeddable":true,"href":"https:\/\/devblogs.microsoft.com\/cppblog\/wp-json\/wp\/v2\/tags?post=30885"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}