Ideas: Design Methodologies for Data Sprints

I recently spent four days at a research lab with a group of data scientists and a few behavioral scientists diving into a large, messy data set to see if we might find insights related to group dynamics.

It was framed as an initial dive to look around, get an initial assessment of potential for useful research insights, and then build a proposal for two to three months of work based on what was found that week.

I attended theoretically in the role of a behavioral scientist, but since I’m also obsessed with creative problem solving processes, or design thinking (and I’m opinionated as hell), the following are some reflections on the process we used and ideas for what’s next.

A lot of process methodologies occupy the space in the overlap between software development, data analytics, and creative problem solving disciplines: UX, design thinking, Think Wrong, Google Design Sprint, Agile, Kanban, emerging data analysis methods for group sprints, etc.

It’s great, because if you’re curious you can acquire a pretty big toolset for bringing groups of people together to crash and get creative on all different kinds of problems, for widely varying lengths of time, from a two-hour meeting to an 18-month project.

The challenge, or mastery of practice as a facilitator, is to learn not just the buzz words of the methodologies or the exercises themselves, but the underlying science behind group dynamics, creativity, psychology, and neuroscience.

During this workshop, we had two distinct types of professionals in the room, social scientists and data scientists, each with complementary skillsets within those groupings.

They were all brand new to the data set, and it was pretty messy when we first looked at it.

The direction from the sprint’s sponsor was extremely broad: what general insights about the group dynamics of these co-workers could be identified in a three-day sprint?

Unfortunately, we didn’t clearly identify the direction for what kind of insights would be most useful until the afternoon of the second day.

At that point, the social science group separated to brainstorm a range of different questions that could be asked about group dynamics, and which theories and research questions were emerging as especially intriguing for the field.

Besides ensuring the whole team had a clear idea of the direction for the workshop and which question themes regarding group dynamics would be most useful (e.g., performance of small work teams, changes in group composition over time, patterns in the kinds of people who grouped together), we also needed design constraints, but didn’t identify them.

Introductory design thinking uses a generic Venn diagram to illustrate the criteria a design must meet in order to be green-lit for the funding and effort to develop a high-fidelity solution.

Companies develop their own specific criteria that usually fall somewhere into these buckets.

Without identifying our own, the team relied on the expertise and gut instinct of the people gathered in the room.

Not bad, since everyone was incredibly smart and knew their field well, but not great if you want to maximize resources toward understanding the most important and impactful questions.

Our facilitator took the route of using UX and design thinking exercises — personas and needs statements — over the first two days.

They were meant to get the group to identify specific questions that could then be voted on by the whole team.

On the third day, small groups broke off, each taking one or two questions that might be asked of the data.

Normally, personas and needs statements are incredibly reliable methods; they work in all kinds of situations.

But during this sprint many people found them to be frustrating when they had been told to look for insights about group dynamics, not individual personas.

The first challenge with persona methods is that they are designed to be used with deliberative modes of thinking, or System 2 (from Thinking, Fast and Slow), which take a diverse, seemingly disconnected array of information, most often qualitative ethnography, and synthesize it into some detailed statement of need.

With no prior set of information about the people in the data set, we were left looking at spreadsheets of raw data and making up cartoon characters as personas.

The whole team went along gamely and tried to make use of it, but the time could have been better spent.

The second challenge regarding the selection of persona methods is that they are meant to be used during design processes where the outcome is some product or service serving archetypal individuals.

There is a relationship between the problem, a need of the persona, and the solution that addresses it.

For this workshop, our team was not meant to produce a product or service; the insights themselves were the deliverables.

Rather than personas, we should have been exploring and mapping the space of the complex concept of group dynamics.

The somewhat awkward use of some design thinking and UX methods, and the rejection of others, for this workshop was totally understandable.

As design sprints have become a popular tool, justifiably so, for businesses to bring multi-disciplinary teams together and develop novel concepts and products, the broad space of design and software development methodologies has edged into unfamiliar territory: data analytics.

While I feel experienced enough to comment on design methodologies, data science is new territory for me.

From the literature review I’ve done, and my limited commercial experience with analytics teams, it seems that data scientists are themselves wrestling with which methodologies serve group data sprints best.

Analysis has traditionally been done solo, or sitting side by side, not requiring a more formal group process.

Three forces are pushing data scientists to develop group analytics methodologies: the field has fragmented into an ever-increasing number of specializations and tools; the sheer amount of data has increased exponentially; and data science is spreading to disciplines that had previously done their analysis almost entirely with qualitative methods and tools, so multi-disciplinary teams are becoming the norm.

In the last five to seven years, there have been several academic articles and blogs describing group data analysis processes.

Thoughts on managing data science team work streams: a Medium article, more about how a data science team distributes its work across development, research, engineering, and service over years than about a process for a specific analytics question or for a sprint.

Method for managing data science projects in tech industry: a Medium article about adapting the core values of Agile for research methodologies.

A review and future direction of agile, business intelligence, and data science: a 2016 academic article that discusses how Agile principles and practices, and data science practices, have evolved as part of business intelligence.

Towards design principles for visual analytics in operational contexts: although meant for visual analytics alone, as opposed to statistics and machine learning data science, this 2018 article from Caltech offers a useful look at their research process for developing visual analytics.

My PhD research involves developing a co-design process to analyze a complex concept, wellbeing, using 3D data visualization software.

I’ve been thinking about how we bring groups together to investigate and communicate about complex concepts, the very early work of group and individual need finding in order to eventually do policy, service, and product design.

Trying to grasp complex systems and complex concepts is a totally different beast than even synthesizing nuanced psychological and social needs from personas.

Oral cultures have very different approaches to thinking about and discussing complexity, and I believe our western dominated, literate, tech cultures have a lot to learn from them about how to analyze complexity.

I would like to offer these ideas for multi-disciplinary groups working to identify insights in large, messy data sets.

I’m curious what others are doing, and would love some feedback.

First, here is the framework I use to organize design sprints built around addressing the particular needs of archetypal personas.

Four quadrants correspond to the four phases usually associated with design thinking: interviewing and observing users, synthesis of needs, ideation, and prototyping and testing.

I find that it helps me as a facilitator to understand the modes of thinking I’m trying to induce my participants to exercise for different methods.

It’s also helpful to pair the mode of thinking with a clear delineation between problem and solution.

Trying to brainstorm about both the problem and the solution at the same time is a recipe for the group descending into chaos and drifting way off track.

This 2×2 framework is most useful when designing exercises for workshops that revolve around understanding a user’s need and developing something to solve it: a software product interface, a household object, a social service.
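Here is a minimal sketch, in Python, of how I think of that 2×2 as a simple lookup table. The quadrant names and the flare/focus labels on the thinking-mode axis are my own shorthand for divergent and convergent thinking, not part of any standard framework.

```python
# A toy representation of the design-thinking 2x2: one axis is problem vs.
# solution space, the other is flare (divergent) vs. focus (convergent)
# thinking. The phase names are my shorthand, not a standard taxonomy.
DESIGN_SPRINT_QUADRANTS = {
    "interview_and_observe": {"space": "problem",  "mode": "flare"},
    "synthesize_needs":      {"space": "problem",  "mode": "focus"},
    "ideate":                {"space": "solution", "mode": "flare"},
    "prototype_and_test":    {"space": "solution", "mode": "focus"},
}

def phases_for(space: str, mode: str) -> list[str]:
    """Return the phases that sit in a given quadrant."""
    return [
        phase for phase, cell in DESIGN_SPRINT_QUADRANTS.items()
        if cell["space"] == space and cell["mode"] == mode
    ]

if __name__ == "__main__":
    print(phases_for("problem", "flare"))  # ['interview_and_observe']
```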

Complexity, such as group dynamics, requires a different framework for thinking about which methods to select and which modes of thinking to use during a sprint.

In our sprint, the work revolved around getting to an insight, refining a raw data set and teasing apart a complex topic.

Using a 2×2 matrix again, complex concept and raw data set replace problem and solution on one of the axes.

For the second axis, we might use different flare and focus modes than the ones that serve thinking about the defined needs of individual archetypes; two examples would be inductive/deductive and declarative/modal.
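Sticking with the same toy representation from above, the adapted matrix might look like the sketch below. The phase names and the inductive/deductive labels are hypothetical placeholders for whatever a particular sprint team settles on.

```python
# The same lookup-table idea with the axes swapped for a data sprint:
# "complex concept vs. raw data" replaces "problem vs. solution", and the
# thinking modes become inductive/deductive (a declarative/modal pairing
# would slot in the same way). All names here are hypothetical.
DATA_SPRINT_QUADRANTS = {
    "map_the_concept":         {"axis": "complex_concept", "mode": "inductive"},
    "select_meta_questions":   {"axis": "complex_concept", "mode": "deductive"},
    "explore_raw_data":        {"axis": "raw_data",        "mode": "inductive"},
    "test_candidate_insights": {"axis": "raw_data",        "mode": "deductive"},
}
```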

Dr. Mihnea Moldoveanu of the Rotman School of Management led a decade-long research effort to understand the modes of thinking, or adaptive intelligence, that transcend discipline boundaries.

His admonishment is essentially to consciously select the mode of thinking, or pattern of thinking modes, best fit for the problem and group of people at hand.

A hypothetical set of methods for another data sprint might go:

1) Map the scales, layers, and theories related to the complex concept in question.
2) Down-select to the most important set of themes or meta-questions to ask of the data set.
3) Do an exploratory analysis of the raw data set.
4) Develop a schema and begin cleaning the data and organizing it into an easy-to-access data frame.
5) Break into small groups to each tackle a theme or meta-question, using alternating flare and focus modes of thinking, like inductive/deductive and declarative/modal, to iteratively analyze the data, develop hypothetical insights, and refine them with more data analysis.
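To make steps 3 and 4 a bit more concrete, here is a rough pandas sketch, assuming the raw data arrives as a CSV export of interaction logs. The file name and column names are hypothetical, and a real sprint would adapt the schema to whatever the data actually contains.

```python
import pandas as pd

# Step 3: exploratory pass over the raw dump (hypothetical file name).
raw = pd.read_csv("interaction_logs_raw.csv")
print(raw.shape)
print(raw.dtypes)
# Which columns are missing the most values?
print(raw.isna().mean().sort_values(ascending=False).head(10))

# Step 4: settle on a schema and organize the data into one shared,
# easy-to-access frame for the small groups (columns are hypothetical).
clean = (
    raw.rename(columns=str.lower)
       .dropna(subset=["person_id", "timestamp"])          # keep only usable rows
       .astype({"person_id": "string", "team_id": "string"})
       .assign(timestamp=lambda d: pd.to_datetime(d["timestamp"], errors="coerce"))
       .sort_values(["team_id", "timestamp"])
       .reset_index(drop=True)
)
# Persist the cleaned frame so each small group starts from the same data
# (requires pyarrow or fastparquet).
clean.to_parquet("interaction_logs_clean.parquet")
```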

This space of methodologies — especially sprint methodologies — to ask complex questions of big data sets using multidisciplinary teams is really exciting.

So many of the most interesting questions facing our policy-makers, scientists, and researchers sit on huge piles of data at the intersection of fields that traditionally haven’t had to find a common language to work together.

Facilitators, data scientists, and design practitioners might develop new theories on how best to organize teams to more quickly and efficiently divine insights from large, chaotic data sets.

Inevitably, someone will try to come along and commoditize it with pretty graphic icons and simplified descriptions.

But right now it’s a fun, collaborative space of community exploration as we figure out how to bring more disciplines and professions together with data scientists and raw data.
