What Needs to Happen Before AI Becomes a Thing in Design and Research

Written by maxspeicher | Published 2022/10/17

TL;DR: To what extent do digital designers and user researchers in industry make use of AI-powered systems to support their work? The answer seems to be “not much,” as we found in a survey with 34 practitioners. In general, there seems to be little awareness of the specific advantages AI can bring to a design process and which tools are already out there, despite significant issues with existing processes that could be well mitigated with the help of AI. However, designers and researchers are very open to the topic and would appreciate AI-powered systems to support them in both ideation and evaluation. Providing practitioners with such systems that are easily accessible and demonstrate added value holds great opportunities.

This article describes research that has been conducted in collaboration with Maxim Bakaev, Johanna Jagow, and Sebastian Heil. The research paper [2] was published at the 2022 International Conference on Web Engineering (ICWE).

Design and research are a thing. Artificial intelligence (AI) is a thing. But they’re not quite a thing together yet. There have been massive advances in AI over the past decades. Yet, designers and user researchers in the digital industry still often execute human-centered design processes manually and tediously — from clicking together prototypes to conducting fiddly usability tests to adjusting a design to different screen sizes. (If they have the time and resources to do it properly, that is.) The specific software designers and researchers are using has changed, and there are advances in, e.g., remote asynchronous user testing. But apart from that, the way design is being done, by and large, still doesn’t look that much different from what Alan Cooper described in the first edition of About Face in the 1990s (except for design thinking, but that’s a different story). While that’s not a bad thing at all (you should really listen to Alan Cooper when you wanna design properly), in today’s fast-paced, dynamic environments with their ever-growing numbers of devices, user segments, and personalization strategies, tight budgets, and even tighter timelines, we could all use some help. And in the case of digital design and user research, that is where AI comes into play.

Or so we thought. There’s a variety of tools out there that leverage the power of AI and aim at supporting designers and researchers in their processes. Some strive to advance the automatic creation of wireframes and interfaces (e.g., Paper2Wire [3] and Chen et al.’s work [4]). Some aim at replacing traditional approaches to usability testing (e.g., USF [5] and WaPPU [7]). Still others automatically generate interface-related performance metrics (e.g., Bakaev et al.’s work [1] and the Aalto Interface Metrics [6]). Yet, none of these seem to be overly popular among practitioners. There’s a clear gap between what’s already out there in terms of AI-powered tools and what designers and researchers rely on in their daily work. Is this a marketing problem, since most of these tools stem from academic projects? Or do practitioners see no added value in them?
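
To make these tool categories a bit more tangible, here is a minimal Python sketch of the kind of measures a metric-generating tool can derive from nothing but a screenshot. It’s a toy illustration in the spirit of what tools like AIM [6] do, not actual code or an API from any of the cited tools; the function names, file name, and threshold are made up.

```python
# Toy "user-less" interface metrics, illustrative only -- not code from
# AIM [6] or any other cited tool. Computes two simple proxies for
# visual complexity from a plain UI screenshot.
from PIL import Image, ImageFilter
import numpy as np

def edge_density(img: Image.Image) -> float:
    """Share of pixels lying on an edge -- a rough proxy for visual clutter."""
    edges = np.asarray(img.convert("L").filter(ImageFilter.FIND_EDGES))
    return float((edges > 32).mean())  # threshold chosen arbitrarily

def distinct_colors(img: Image.Image, max_colors: int = 2**16) -> int:
    """Number of distinct colors -- a rough proxy for colorfulness."""
    colors = img.convert("RGB").getcolors(maxcolors=max_colors)
    return len(colors) if colors is not None else max_colors  # None = "more than max"

screenshot = Image.open("checkout_page.png")  # hypothetical screenshot
print(f"edge density: {edge_density(screenshot):.3f}")
print(f"distinct colors: {distinct_colors(screenshot)}")
```

Real tools compute far richer, empirically validated metrics, but the input/output shape is the same: a design artifact goes in, numbers a designer can act on come out.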

To get to the bottom of this, my colleagues Maxim Bakaev, Johanna Jagow, and Sebastian Heil and I initiated a research project in which we set out to answer two research questions:

  1. Do practitioners indeed not have the time and resources to follow proper design processes? and, more importantly
  2. Why don’t they leverage AI-powered tools to mitigate that issue?

What did we find?

Based on an online survey we conducted (we’ll get to the details later), first of all, designers’ and researchers’ processes in industry indeed seem to be pretty “standard,” as mentioned in the introduction above. That is, we mostly heard about some kind of human-centered design done along some variation of the double-diamond model. There was no mention of AI when we asked about existing processes.

Second, the major obstacle to a proper design process seems to be (easy) access to the right users for research; and even where there is access and proper user research could be done, conflicting corporate timelines and limited resources get in the way.

Third, a considerable number of designers and researchers seem to know and use validation and checking tools such as Test.ai or the W3C Validator. Also, tools for automatically generating UX-/UI-related metrics are relatively well known. However, the opposite holds for tools that enable automated (i.e., “user-less”) usability evaluation and the like. Overall, the AI-based kinds of tools we were after in our survey seem to be virtually unknown to designers and researchers in industry.
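
As a concrete contrast, the checking tools practitioners do know are typically a single API call away. Here is a minimal sketch that sends markup to the W3C Nu HTML Checker (the service behind validator.w3.org) and prints its findings; the sample HTML is invented, and error handling is omitted.

```python
# Minimal sketch: validating markup via the W3C Nu HTML Checker's
# JSON endpoint. Requires the `requests` package; sample HTML is made up.
import requests

html = "<!DOCTYPE html><html><head><title>Demo</title></head><body><p>Hi</body></html>"

resp = requests.post(
    "https://validator.w3.org/nu/?out=json",
    data=html.encode("utf-8"),
    headers={"Content-Type": "text/html; charset=utf-8"},
    timeout=10,
)
for msg in resp.json().get("messages", []):
    print(f"{msg.get('type', '?')}: {msg.get('message', '')}")
```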

Fourth, when asking about the reasons for not working with AI-powered tools, the top three responses were a lack of familiarity (cf. point 3 above), cost considerations, and a perceived lack of added value.

Based on these findings (please refer to the actual research paper for a more detailed analysis), we formulated three challenges and three opportunities w.r.t. existing design processes and how to potentially support them with AI (taken from [2]):

“– CHALLENGE 1. Availability of users. Finding and recruiting relevant participants for user studies, especially for specialized products.

“– CHALLENGE 2. Lack of time and resources. Tight deadlines and timeframes limited by stakeholders and processes, and a lack of user research resources.

“– CHALLENGE 3. Designers’ unfamiliarity with ML/AI. There seems to be little awareness of what exactly ML-/AI-based systems constitute and in which ways they can support design processes.

“– OPPORTUNITY 1. Openness to new things. In principle, digital designers and user researchers welcome tools that support their processes and they do follow the topic (e.g., “Not a part of our current process, but open to trying them if they can mitigate our issues finding qualified users.”).

“– OPPORTUNITY 2. Open playing field. New AI-/ML-based tools do have a chance to gain traction as currently, there seem to be no established players.

“– OPPORTUNITY 3. Support for ideation & evaluation. Specifically ideation (in the sense of creating wireframes, mockups, and prototypes) and evaluation (of design artifacts) are mentioned as the activities in which practitioners would appreciate support.”

How did we find it?

Our findings are based on a survey we conducted with 34 digital designers and user researchers from industry, a majority of whom came from Germany or the U.S. While this is not exactly the largest sample size we’ve ever worked with, our participants had a lot of experience in their field (more than 8 years on average), with several reporting that they work for major companies, including IBM, Microsoft, Meta, and Bosch.

The survey itself comprised 28 questions that could be completed in 20–40 minutes, hence the relatively small sample size (boy, it’s really difficult to get people to fill out surveys that take longer than 5 minutes!). It inquired into participants’ design processes, their familiarity with common design tools like InVision and Figma, and their familiarity with a set of “user-less” tools, and closed with demographics and open-ended feedback.

Of these, the third part — familiarity with “user-less” tools — was of particular interest to us since it contained all of the AI-powered tools for supporting design processes. It’s no exaggeration to say it was the core of the whole exercise. Overall, we identified a set of 61 “user-less” tools and platforms — both academic and commercial — that support at least semi-automation, which we further classified into categories such as “generative tools based on AI/ML,” “automated usability validation/evaluation,” “behavior simulation/GUI testing automation,” etc. For each of the 61 tools, we asked the survey participants whether they knew it and whether they actively used it, with the sobering result described above. For a complete list of the tools and categories, please check out our online appendix.
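
To give an idea of how the know/use answers turn into the per-category familiarity picture reported above, here is a hypothetical sketch of the tallying. Tool names, categories, and responses are invented placeholders, not our actual data (that’s in the online appendix).

```python
# Hypothetical tally of "do you know / do you use" answers per tool
# category. All names and responses below are invented placeholders.
from collections import defaultdict

# (tool, category) -> one (knows_it, uses_it) tuple per participant
responses = {
    ("Tool A", "generative tools based on AI/ML"): [(True, False), (False, False)],
    ("Tool B", "automated usability validation/evaluation"): [(True, True), (True, False)],
}

awareness, usage = defaultdict(list), defaultdict(list)
for (tool, category), answers in responses.items():
    awareness[category] += [knows for knows, _ in answers]
    usage[category] += [uses for _, uses in answers]

for category in awareness:
    know_rate = sum(awareness[category]) / len(awareness[category])
    use_rate = sum(usage[category]) / len(usage[category])
    print(f"{category}: {know_rate:.0%} aware, {use_rate:.0%} using")
```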

What comes next?

We’re well aware that we didn’t run a representative study and that there is, among other things, still a need to gain deeper qualitative insights about designers’ and researchers’ processes and (potential) interactions with AI-based systems. Yet, our research has already yielded some valuable indications. Mainly that

  1. restricted timelines and resources in industry are a major and very real obstacle to the proper execution of design processes (a confirmation of what everyone knows already);
  2. access to appropriate users for research is often a pain point; and
  3. AI could help with the above two, e.g., by automating tedious, mechanical parts of a design process and providing access to “user-less” evaluation. Yet, there seems to be a lot of room for improvement when it comes to leveraging this support to take strain off practitioners.

As next steps in our research agenda, we plan to extend the set of 61 “user-less” tools and platforms into a fully fledged taxonomy, gather the above-mentioned deeper qualitative insights, and work towards a specific set of requirements that practitioners have for efficient and effective AI support of their everyday work. With all of these, we want to build a deeper understanding of what the specific problems with existing AI-powered systems are and what needs to change so that they gain more traction among designers and researchers in industry. So, stay tuned, there’s more of this research to come.

In the meantime, you can head over to ResearchGate and give our paper “We Don’t Need No Real Users?! Surveying the Adoption of User-less Automation Tools by UI Design Practitioners” [2] a thorough read for more detailed information on our research and its results.

☕ I love coffee, and if you want to support my work, you can always buy me one, or subscribe to my newsletter. 🗞️

References

  1. Bakaev, Maxim, Sebastian Heil, Vladimir Khvorostov, and Martin Gaedke. “Auto-extraction and integration of metrics for web user interfaces.” Journal of Web Engineering 17, no. 6 (2018): 561–590.
  2. Bakaev, Maxim, Maximilian Speicher, Johanna Jagow, Sebastian Heil, and Martin Gaedke. “We Don’t Need No Real Users?! Surveying the Adoption of User-less Automation Tools by UI Design Practitioners.” In International Conference on Web Engineering, pp. 406–414. Springer, Cham, 2022.
  3. Buschek, Daniel, Charlotte Anlauff, and Florian Lachner. “Paper2Wire: a case study of user-centred development of machine learning tools for UX designers.” In Proceedings of the Conference on Mensch und Computer, pp. 33–41. 2020.
  4. Chen, Chunyang, Ting Su, Guozhu Meng, Zhenchang Xing, and Yang Liu. “From UI design image to GUI skeleton: a neural machine translator to bootstrap mobile GUI implementation.” In Proceedings of the 40th International Conference on Software Engineering, pp. 665–676. 2018.
  5. Grigera, Julián, Alejandra Garrido, José Matías Rivero, and Gustavo Rossi. “Automatic detection of usability smells in web applications.” International Journal of Human-Computer Studies 97 (2017): 129–148.
  6. Oulasvirta, Antti, Samuli De Pascale, Janin Koch, Thomas Langerak, Jussi Jokinen, Kashyap Todi, Markku Laine et al. “Aalto Interface Metrics (AIM): A Service and Codebase for Computational GUI Evaluation.” In The 31st Annual ACM Symposium on User Interface Software and Technology Adjunct Proceedings, pp. 16–19. 2018.
  7. Speicher, Maximilian, Andreas Both, and Martin Gaedke. “Ensuring web interface quality through usability-based split testing.” In International Conference on Web Engineering, pp. 93–110. Springer, Cham, 2014.

Originally published here.

