AI-Assisted SAMS

Authors: Jesko Lamm, Tim Weilkiens

The SAMS Method: Storyboards for explaining operational scenarios

Storyboard Activity Modeling for Systems (SAMS) is a way to explore the operational scenario of a system by drawing a storyboard of that scenario. The resulting understanding of the functions to be provided by the system in its operational environment can improve systems analysis. In particular, it supports the identification of borderline cases and enables creativity. It has been proposed to derive system use cases from such storyboards of operational scenarios and to link snippets of the storyboard images to the corresponding use cases in a model-based approach [Weilkiens et al., Model-Based System Architecture – Second Edition, Wiley, 2022].

When an analysis team works together with an artist, such storyboards can be created in real time to facilitate the analysis process and to enable fast externalization of thoughts through the evolving visualizations of the storyboard under creation. Such work has been conducted in an experimental setup, resulting in the storyboard of a novel airplane boarding system based on a detachable cabin. The storyboard below was created in real time during that work.

Storyboard of the aircraft boarding system (© 2017 oose Innovative Informatik eG, reproduced with permission)

Artificial Intelligence as a Real-time Visualization Assistant

What if Artificial Intelligence (AI) could play the artist’s role in the SAMS approach and perform the visualization task in real time by just listening in on team discussions during systems analysis? Whether this is entirely realistic with today’s technological means is still under investigation. Still, as shown further below, off-the-shelf AI tools already get remarkably close to the stated vision.

To fulfill the vision, the AI would need to perform the following actions continually (a minimal code sketch follows the list):

  1. Listen in on the conversation of an analysis team and extract the information to be visualized.
  2. Visualize the extracted information in real time.
  3. Bundle the set of obtained visualizations into one or more storyboards, showing the operational scenarios of the system of interest.
  4. Adapt the created visual materials as the analysis team’s understanding matures through discussion, so that they eventually describe the final version of the system of interest.
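
To make these steps concrete, here is a minimal, purely illustrative sketch of how they could be wired together as a processing loop. All names are hypothetical placeholders, and the extraction and visualization steps are stubbed out rather than backed by real AI services.

```python
# Hypothetical skeleton of the envisioned assistant; all names are
# illustrative placeholders, not an existing tool or API.
from dataclasses import dataclass, field


@dataclass
class Storyboard:
    """Step 3: the bundle of scene images for one operational scenario."""
    scenes: list[str] = field(default_factory=list)


def listen(transcript: str) -> list[str]:
    """Step 1: extract visualizable scene descriptions from the discussion.
    Stubbed with a naive sentence split for demonstration."""
    return [s.strip() for s in transcript.split(".") if s.strip()]


def visualize(scene: str) -> str:
    """Step 2: turn one scene description into an image.
    Stubbed; a real assistant would call an image generator here."""
    return f"<image of: {scene}>"


def run_assistant(transcript: str) -> Storyboard:
    storyboard = Storyboard()
    for scene in listen(transcript):
        storyboard.scenes.append(visualize(scene))
    # Step 4 would revisit earlier scenes as the team's understanding
    # evolves, regenerating images that contradict the latest discussion.
    return storyboard


print(run_assistant("The cabin detaches. Passengers board the cabin.").scenes)
```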

In the following, we will explore each of these steps separately.

We are already very close to implementing these steps in practice. The speech recognition capabilities of AI-based systems are awe-inspiring. Recognizing several voices in a room, attributing speech to the right speakers, and filtering out the content relevant to the storyboard is still challenging. However, there are already products that can automatically create meeting minutes, for example.
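
To illustrate how accessible the transcription part already is, here is a minimal sketch using the open-source Whisper library; the recording file name is a placeholder, and the harder speaker-separation step is deliberately left out.

```python
# Minimal transcription sketch with the open-source Whisper library
# (pip install openai-whisper); "meeting.wav" is a placeholder file name.
import whisper

model = whisper.load_model("base")        # small general-purpose model
result = model.transcribe("meeting.wav")  # returns text plus timed segments
print(result["text"])                     # raw transcript of the discussion

# Whisper alone does not diarize ("who said what"); a real assistant
# would add a speaker-diarization step on top of this transcript.
```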

A human artist is currently even better at this task and can also ask specific questions and contribute to the discussion. Once the AI has extracted the information, the visualization can be done in real time. The technology for this is just becoming available. The following short video is a simple demonstration created with Clipdrop and GetImg.

What doesn’t work well here is creating a storyboard. 

A storyboard is a sequence of images that shows the course of an operational scenario, as if someone had made a short cartoon story about the system of interest. The special challenge for AI is to maintain consistency between the different images of the storyboard, i.e., to ensure that human actors and technical systems in the story look the same across all images. An experiment with the DALL-E image generator shows that this is, to some extent, already possible with today’s off-the-shelf AI image generators. Consistency between storyboard images can be maintained by prompting the whole storyboard creation at once: the AI is asked to produce the whole storyboard in one go while ensuring that objects and persons look the same across images.
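
As a concrete illustration of this one-go prompting, the following sketch uses the OpenAI Python SDK to request a multi-panel storyboard in a single prompt. The prompt wording is our own illustration, not the exact prompt from the experiment.

```python
# Sketch of the "whole storyboard in one prompt" approach using the
# OpenAI images API (pip install openai); the prompt text is illustrative.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

prompt = (
    "A four-panel storyboard in comic style with consistent characters "
    "across all panels: 1) passengers board a detachable aircraft cabin "
    "at the terminal, 2) the cabin is transported to the aircraft, "
    "3) the cabin is attached to the fuselage, 4) the aircraft takes off."
)

result = client.images.generate(
    model="dall-e-3", prompt=prompt, size="1024x1024", n=1
)
print(result.data[0].url)  # link to the generated storyboard image
```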

Here is an example from the aforementioned experiment with DALL-E, again related to the sample boarding system introduced further above:

AI image generators are much better at creating single images, which can then be manually stitched together into a storyboard.
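
The stitching itself is straightforward to automate. Here is a small sketch using the Pillow library that places separately generated frames side by side; the scene file names are placeholders.

```python
# Sketch: stitch separately generated frames into one storyboard strip
# with Pillow (pip install pillow); the file names are placeholders.
from PIL import Image

frames = [Image.open(f"scene_{i}.png") for i in range(1, 5)]
width = sum(f.width for f in frames)
height = max(f.height for f in frames)

strip = Image.new("RGB", (width, height), "white")
x = 0
for frame in frames:
    strip.paste(frame, (x, 0))  # place frames left to right
    x += frame.width

strip.save("storyboard.png")
```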

However, you can ask an AI image generator for a storyboard. The following is an example.

Normally, the individual images or storyboards would be refined step by step. The current AI generators we are familiar with do not support this. You can request changes, but the images that have already been generated are not selectively adapted; instead, entirely new images are generated.
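
The closest off-the-shelf approximation to such refinement that we are aware of is masked in-painting, which regenerates only a marked region of an existing image. Here is a sketch using the OpenAI image edit endpoint for DALL-E 2; the file names and change request are placeholders.

```python
# Sketch of masked in-painting as a partial substitute for true
# step-by-step refinement; file names and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

result = client.images.edit(
    model="dall-e-2",                     # the edit endpoint supports DALL-E 2
    image=open("scene_2.png", "rb"),      # a previously generated frame
    mask=open("scene_2_mask.png", "rb"),  # transparent where changes may occur
    prompt="The passenger now carries a suitcase",
    n=1,
    size="1024x1024",
)
print(result.data[0].url)
```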

Discussion and Conclusion

Off-the-shelf AI image generators have already shown their ability to generate visualizations that support systems analysis, for example in storyboard format, so that the SAMS method can be applied subsequently or even simultaneously.

The experiments shown are still a step away from the complete vision of an AI visualization assistant that listens to team conversations and visualizes on the fly. However, developments in AI are very fast these days, and who knows how soon we will reach a reality that is close to, or even more advanced than, this vision. We have only tested general-purpose AI generators. An AI generator developed specifically for storyboard generation, including iterative refinement, is probably already feasible today.

In general, it should be noted that the information you enter is shared with the AI and, hence, its operator. This can pose an intellectual property (IP) problem. The topic is not new, and there are also solutions for running AI models locally.

We would also like to mention another IP problem that is currently being discussed. The AI gained its abilities by being trained on large amounts of data. The question arises as to what extent this infringes the rights of the originators of that data. Some authors are already asserting claims. Conversely, it is argued that AI, like humans, is inspired by other works (texts, images, etc.) and incorporates them into its own creations. There is a need for clarification here at a legal and, if possible, global level.

Finally, we should not forget the human touch, and that the “A” in AI stands for artificial. When comparing the current AI-generated results with storyboards we have created together with human artists, we see one aspect in which the AI is far less skillful than its human counterpart: the artwork of a human artist captures the emotions of the humans involved and can even be humorous, ensuring that certain important aspects of the system of interest are highlighted (e.g., the stress of passengers who have to wait during the boarding process or who are challenged with handling their hand luggage and the boarding machinery at the same time). While a human artist ensures these quality attributes autonomously, an AI needs to be prompted very precisely to do so, which contradicts the vision of having the AI silently visualize in the background without dominating the systems analysis work.

No matter how much of this we use today and how quickly we adopt more of it, the intellectual property considerations mentioned above should not be neglected.

 
