Exploring, Learning and Forging a Path

Lately I’ve been joking (or maybe not joking) that I don’t know what I want to do when I grow up. I suppose that has a bit of irony, considering that even today I am working out times to chat with someone looking for career advice from me, but then again, perhaps the exploration is part of who I am.  

If you look at any of my official bios scattered about the web, it can appear that I have followed a reasonably straightforward path that led to where I am now, but the truth is that path was fairly meandering. I went to college thinking I’d study biology, but instead landed in Social Anthropology. My interest in museums led to jobs that focused me on archaeology, where I settled in for grad school and wrote a dissertation on pilgrimage in 14th-16th century India. Instead of staying in academia, I went to work for a design firm and did some consulting on the side, before becoming a corporate anthropologist, and for the last few years I’ve been in GovTech. Along the way I have dipped my toes back into academia, teaching business, and advising entrepreneurial venture teams.

A recent conversation with a colleague (who has his own very interesting background) led to him saying, “oh, you’re a learner!” That gave me a great label to put on myself--I do find joy in learning new things and stretching myself in new ways, which is probably why the proverbial “end goal” of all of this is still not clear. It’s also a reminder that that’s OK, because I am an explorer.

I have a huge amount of respect for the self-knowledge I encounter in others. It’s a great reminder of our diversity, and of the importance of following your own path (straight or winding) rather than chasing what others may think of as “success.”

We recently spent time with a friend who started volunteering at a local children’s hospital after retiring from a civil service job. After a while they offered him a part-time paid job in the gift shop, which often ends up being nearly 40 hours a week. He said “it probably seems crazy that I get so excited to go work in a gift shop” -- to which I immediately responded “no, it seems like you are finding joy in your life.” 

It doesn’t take until retirement to figure yourself out--I’ve been having these conversations with plenty of other folks who are “officially” working full time or on the search. One recent college graduate is in her first job, a role that makes use of her degree, but at a very small, mostly remote firm of introverts. It’s helped her realize how much she wants to be around, and interacting with, other people in her day to day (by contrast, I know plenty of people who are perfectly happy in the quiet of their own space).

The list goes on--folks who are leaving jobs because they see they cannot be their full selves or achieve their own goals. A friend who owns a business readily admits that although it has been a rocky road, the frustrations, risks, and challenges are a better fit than being someone else’s employee.

It can still be hard for me to remember that I don’t need to be looking for goalposts to cross--but maybe that is because I am not that kind of athlete. Perhaps as a lover of the outdoors it is easier for me to keep in mind how much I love wandering along a trail, spotting the plants and wildlife (and occasionally wandering down a side trail), whether or not there is a mountaintop view at the end.


Machines are not human. And that’s OK.

As an anthropologist who has spent years studying human behavior, I’ve been thinking a lot about the tendency in both the tech industry and popular culture to use a human metaphor for Artificial Intelligence. While it arguably makes this complex technology more accessible and approachable to non-technically trained people, I believe this approach is fundamentally flawed and potentially misleading.

I know I am not alone in this--a recent op-ed in The New York Times mused that we lose our interactions with other humans when we start to substitute AIs for people. Others have noted that there is more than a hint of dystopia in the way these machines are portrayed in real life as well as in the movies.

But I think the core problem with anthropomorphizing AI is the simple fact that machines are not human. People are inherently contextual and often (seemingly) irrational. There is no set of rules that can accurately predict what a human will do in every situation.

This complexity is something that many technologists, particularly those without a background in social sciences, often fail to fully grasp. They miss key components of what makes us human – our ability to act unpredictably, to be influenced by subtle contextual cues, and to make decisions that defy logical explanation. While we can make educated guesses based on patterns and tendencies, the core of human decision-making cannot be fully coded or replicated.

Artificial Intelligence, on the other hand, operates based on rules and algorithms. While these can be incredibly sophisticated, they are fundamentally different from human cognition. That is not the same as saying humans are "better" than AI. Certainly machines beat humans in their ability to process vast amounts of data and identify patterns, among other skills. My point is that AI is fundamentally different from Homo sapiens and we should treat it as such, especially if we want to best utilize it for broader benefit.

By anthropomorphizing AI, we risk obscuring these crucial differences. We may start to expect human-like behavior from AI systems, leading to misunderstandings about their capabilities and limitations. We risk expecting AI to understand nuance, context, or emotion in ways it simply can't, or over-relying on it for tasks that require human judgment. Anthropomorphizing may also limit our ability to envision the distinctly new things this technology might enable, beyond computing faster than humans.

Instead of trying to make AI more human-like, we should focus on developing a nuanced understanding of AI and utilizing it for what it is – a powerful tool with its own unique strengths and limitations. Recognizing these differences and framing AI in its own terms, rather than through a human lens, will allow us to leverage it for broader and more meaningful benefits.

I expect that this is only my first foray into articulating my thoughts around this, and that my ideas will continue to evolve as the technology and my understanding of it mature, and as I listen to others who approach it from points of view different from my own. A few weeks ago, I had the pleasure of meeting John Kao, who believes in the psychological importance of making AIs more human. He is part of a group demonstrating the humanity of machines through a project in which AI writes an opera about Alan Turing, which is honestly quite wonderful, so I will leave it to readers to form their own opinions.

Building Resilient Teams

A recent Wall Street Journal article centered on how making yourself “indispensable” by being the only person who can do a task or who has key knowledge is actually not a productive strategy for immunizing yourself against a layoff. This came as no surprise to me, as I have worked with knowledge hoarders who were eventually let go. In most workplaces, when things get tight, the most essential people to keep around are not necessarily the ones who are the most technically proficient or “best” at what they do. Most employers want to keep the flexible people who are willing to stretch their knowledge and abilities by taking on new challenges, and who are ready and willing collaborators.

The Research Thinking Field Guide

The real value of research lies in its ability to inform better decisions at every level, from how to lay out a dashboard to what priorities should drive long-term resource planning in digital services. Research enables you to clearly describe wider issues and identify potential solutions in a way that can be tested. It allows you to determine appropriate metrics with which to measure success, and to steer a course ever closer to that success.

Focus on the activity (not the object)

As someone who spent my early career working in collections management, I am familiar with the object-based focus of museums, and admit to an inherent love of things and what we can learn from them. But this is overpowered by my understanding of the importance of context and the knowledge that things don’t stand in isolation. If we don’t understand how and why they were made, used, and valued, and what other objects, practices, and values are attached to them, we cannot learn and understand more deeply.

The Right Benchmarks

Our vision at Ad Hoc is to close the gap between consumer expectations and government. As more of our daily lives move online, it’s crucial that online government services be usable and accessible to everyone, but it’s also important to clarify the distinction between meeting consumer expectations and simply emulating what popular commercial websites are doing, so that agencies can focus on enabling the best outcomes.

The impact of internal incentives on product outcomes

Often, organizations we work with are focused on output. It’s easy to understand and measure, and government leaders often use it as a metric to determine if a legal or regulatory requirement has been met. There is value in ensuring teams meet their contractual obligations, but an output-focused culture can create perverse incentives that can harm the outcomes products enable for users.

Ethnographic thinking without ethnography

Over the last few months, many ethnographers have been challenged by the question of how to conduct observational research in a time when we must maintain social distancing. This change in norms is an issue people in many professions are facing, and I am always interested to learn how others approach the problem.

Creating inclusive experiences, virtually

Working for Ad Hoc has made me much more aware of accessibility issues, or in the broader framing I prefer, inclusivity. Because the underlying question should not be “how do we accommodate edge cases,” but “how do we make sure everyone (or at least as many as possible) is included in what we create?”

The 21st Century IDEA Act Playbook Part 4: Prioritize modernization with user research

The 21st Century IDEA Act mandates that every agency with a website or digital service review those services, assess which are “most viewed or utilized by the public or are otherwise important for public engagement,” and prioritize those that need modernization. The best way for agencies to prioritize their services is to thoroughly research how users interact with those services.

Qualitative and quantitative research

The members of our Ad Hoc research team come from a variety of backgrounds–we are a diverse group of social scientists, designers, and information scientists. In addition to caring deeply about the people who will be using the digital tools our teams build, we are all data mavens, and focus on ensuring our research can be turned into action by grounding key decisions in data. Most of our work is focused on qualitative studies, and we find that there is often a lack of clarity among our stakeholders about the benefits of qualitative versus quantitative data, what methods are appropriate for what kinds of questions, and how different types of data sets can work together to inform design, development, and product roadmaps.

A case for incremental change and accessibility

When we think about computer accessibility, we often focus on compliance with Section 508, the law mandating that websites, IT resources, and electronic documents procured and maintained by federal agencies are accessible to people with disabilities. Current best practices in the broader UX world also look to ensure minimal accessibility standards. At Ad Hoc, our work must meet these standards, but we strive to go beyond them.

How to avoid inflexible design

A couple of months ago I heard a podcast about how to make change with the fewest coins. I learned that in the UK, self-service checkout machines are generally not very efficient at this. While the mathematical challenge intrigued me, it is the design problem that has stuck with me, because it highlights the challenges we face when we don’t consider that the tools we build may need to adapt to broader contexts of use.
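
For readers curious about the underlying puzzle, here is a minimal sketch (my own illustration, not from the podcast or the original post) of computing the fewest coins for a given amount using dynamic programming; the function name, the example amount, and the denomination list are simply assumptions for the example.

# Illustrative sketch only: fewest-coins change-making via dynamic programming.
# Works for any denomination set, even ones where a greedy strategy fails.
def fewest_coins(amount_pence, denominations):
    """Return a list of coins summing to amount_pence using as few coins as
    possible, or None if the amount cannot be made exactly."""
    best = [None] * (amount_pence + 1)  # best[a] = shortest coin list found for amount a
    best[0] = []
    for a in range(1, amount_pence + 1):
        for coin in denominations:
            if coin <= a and best[a - coin] is not None:
                candidate = best[a - coin] + [coin]
                if best[a] is None or len(candidate) < len(best[a]):
                    best[a] = candidate
    return best[amount_pence]

uk_coins = [1, 2, 5, 10, 20, 50, 100, 200]  # UK denominations in pence
print(fewest_coins(88, uk_coins))  # six coins summing to 88p, e.g. 50+20+10+5+2+1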