
Flow AI Documentation

As of 9/7/2023 release

Thank you for jumping in and experimenting with us! We're thrilled to give you access to our new AI features. This first release is a little raw, and things are changing rapidly, but your insights are invaluable to us, so as you explore, we encourage you to share your thoughts and let us know whether we're on the right track.

This is not the full vision of where the app is going, and we'll keep rolling out functionality over the next weeks and months.

Please provide any and all feedback here! Please don't hesitate to let us know how this works or could work better for you.

We have started with these features:

  • Get context from any selected dot
  • Engage in a visual conversation with AI
  • Ask a PDF document for context
  • Command the swarm with your voice

We'll discuss each of these in more detail below.

Enabling AI and the AI buttons

First, enable AI for any particular Flow in the Settings, accessed through the menu in the upper right of the UI:

Within the Settings, find the “Enable AI features” checkbox.


You may also want to check the “Show AI conversation” option in order to see the history of the conversation.

To actually start using AI, create a new button here, using the “+” icon:


A new button editor will appear:

Click the checkbox to “Enable AI.” This will show the options to control the AI and put the button on the “heads-up display” at the bottom of the screen.

There are three options for the Action for response:

  • Ask AI
  • Ask PDF
  • Edit swarm


After you have set up a button with a label, and perhaps with additional context for your prompt (see “GPT prompt” below), it may look like this, with a new button in the center of the 3D canvas:


Also, in this screenshot, you can see the partial AI response above the "Ask AI" button.

On the Desktop, after you have set up a couple of buttons with the labels “Ask AI” and “Change,” here is what it might look like when Presenting:

Here is what it looks like in a VR or AR headset:


GPT prompt

Using LLM prompts is a whole sub-specialty these days.

Variables are used to include user input in the AI prompt. Flow makes these variables available in the GPT prompt (a sketch of how they might be substituted appears after the examples below):

$spokenInput - what the user just said verbally
$visiblePopupData - the text currently visible in a swarm popup

You don’t need to get carried away with prompting, but you might find some inspiration in these examples:

Generic example:

$spokenInput| Tell me more about $visiblePopupData| Do not respond with apologies as an AI large language model. Keep the answers to three short concise paragraphs.

Example from a historical biographical Flow:

$visiblePopupData, $spokenInput.
Respond in the voice of the person referenced, including linguistic characteristics of this person's writings.
Don't apologize about being an AI model, but if there are no extant writings by this person, end your response with a short phrase such as "I've done my best to speak in my voice with limited material."
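To make the variable mechanism concrete, here is a minimal sketch of how a template like the ones above might be expanded before it is sent to the model. The function name and example values are illustrative only, not Flow's actual implementation.

# Minimal sketch of prompt-variable substitution (illustrative only;
# not Flow's actual implementation).

def expand_prompt(template: str, spoken_input: str, visible_popup_data: str) -> str:
    """Replace the Flow prompt variables with the user's current context."""
    return (template
            .replace("$spokenInput", spoken_input)
            .replace("$visiblePopupData", visible_popup_data))

template = ("$spokenInput| Tell me more about $visiblePopupData| "
            "Keep the answers to three short concise paragraphs.")

print(expand_prompt(
    template,
    spoken_input="How has this changed since 2016?",
    visible_popup_data="Vietnam, population 97 million",
))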


AI response window

On the desktop, the AI response is shown in a popup window that is scrollable and movable.


Each time a response is provided, the window will open. It can also be opened by clicking on the short response directly above the AI buttons.

In an XR headset, the window appears a bit to the left of the user:



Cancelling an on-going spoken command

Whenever $spokenInput is present on a visible button, an accompanying “x” button will appear beside it so you can cancel or reset listening. This is great when you misspeak and want to start the query over, or when the AI mis-recognizes your speech.


Edit Swarm

Edit Swarm is used to manipulate the swarm, as shown below where the user asked to filter the data.

The following prompts are currently supported, but remember that simple variations on these phrases work as well because the AI is not expecting an exact match:

  • Filter to a category: “Filter to Vietnam”
  • Filter to a time frame or a numeric range: “Filter from 2016 to 2023”
  • Change swarm color: “Make the color a gradient from green to red”
  • Set swarm column on a specific axis for the scatterplot: “Set the depth axis to population”
  • Toggle axis on and off: “Hide the depth axis”
  • Change dot size: “Make the dots bigger”
  • “Remove filter”

The mechanism currently works with only one filter, so each filter request replaces the prior AI-generated filter with a new one, which rules out complex filters (no AND or OR filtering). In the future, we’ll need a richer UI so the user can indicate whether they want to replace a filter or add another one.
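As a concrete illustration of that replace-only behavior, here is a small sketch of a swarm state holding a single AI-generated filter slot. The class and field names are hypothetical, not Flow's internal API.

# Hypothetical sketch of the single-filter behavior described above;
# the class and field names are not Flow's internal API.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Filter:
    column: str
    low: float
    high: float

@dataclass
class SwarmState:
    ai_filter: Optional[Filter] = None  # only one AI-generated filter at a time

    def apply_filter(self, new_filter: Filter) -> None:
        # Each request overwrites the prior filter; there is no AND/OR combination.
        self.ai_filter = new_filter

    def remove_filter(self) -> None:
        self.ai_filter = None

swarm = SwarmState()
swarm.apply_filter(Filter("year", 2016, 2023))          # "Filter from 2016 to 2023"
swarm.apply_filter(Filter("population", 0, 1_000_000))  # replaces the year filter
swarm.remove_filter()                                   # "Remove filter"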

As of this release, there is NOT good error reporting, so it fails silently if it gets confused. It is also possible for AI to create a filter that shows no dots. In this case, ask AI to “remove filter.”

Ask PDF

Flow lets you upload a PDF document (just one for now) and use it as the source material for ChatGPT to query against.

Upload the PDF in the “Manage Data” popup.
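For the curious, here is a generic sketch of how “ask a PDF” features commonly work: extract the text, pick the chunks most relevant to the question, and include them in the prompt. This is a general illustration, not a description of Flow's implementation; the pypdf package and the file name are assumptions.

# Generic sketch of querying a PDF: extract text, rank chunks by relevance,
# and build a prompt from the best matches. Not Flow's actual implementation.
# Assumes the third-party pypdf package and a local file named "report.pdf".

from pypdf import PdfReader

def load_chunks(path: str, chunk_size: int = 800) -> list[str]:
    """Extract the PDF text and split it into fixed-size chunks."""
    text = " ".join(page.extract_text() or "" for page in PdfReader(path).pages)
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def top_chunks(chunks: list[str], question: str, k: int = 3) -> list[str]:
    """Naive relevance ranking: count words shared with the question."""
    words = set(question.lower().split())
    return sorted(chunks, key=lambda c: -len(words & set(c.lower().split())))[:k]

question = "What were the key findings?"
context = "\n---\n".join(top_chunks(load_chunks("report.pdf"), question))
prompt = f"Answer using only this document excerpt:\n{context}\n\nQuestion: {question}"
print(prompt)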

GPT model

Currently, this release supports OpenAI’s ChatGPT 3.5 and 4.0 models. If you use 4.0, you also have the ability to add system context. We can support additional models upon request for enterprise customers.
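To show what adding system context means in practice, here is a minimal sketch of a chat completion call that includes a system message. It is illustrative only, not Flow's code; the model name, prompt text, and the use of the openai Python package are assumptions.

# Minimal sketch of supplying "system context" alongside a user prompt.
# Illustrative only, not Flow's code. Assumes the openai Python package (v1+)
# and an OPENAI_API_KEY in the environment.

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        # The system message steers tone and constraints for every answer.
        {"role": "system",
         "content": "Respond in three short, concise paragraphs."},
        # The user message carries the expanded prompt (e.g. $spokenInput).
        {"role": "user", "content": "Tell me more about the selected dot."},
    ],
)
print(response.choices[0].message.content)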


Conversational AI experiment

We have been experimenting with Conversational AI. 

There is now a checkbox in the Settings in the UI that enables the “conversation” with the AI to be captured and displayed. The conversation is actually displayed as a swarm, off to the right of the origin by 1.5 meters. Currently, this swarm is not very configurable, but we look forward to getting feedback and will be extending the functionality soon.