
Everyday Life, Media, and Viral Cultures

Section editor: Dr. Marzieh Yousefi


Cultural Flatness in the Algorithm Era


Dr. Marzieh Yousefi, Assistant Professor

ACSS Department, UCW

Everywhere we turn, we see similar people with similar lifestyles and tastes. What we buy, eat, wear, listen to, and watch is becoming increasingly similar all around the world. Cultural variation is converging toward a global sameness. But why? Aren’t people of different countries, races, and languages supposed to have different tastes?

Algorithms do not just track human experience; they shape it. They record what we do, and once they learn that a certain look or experience performs well, they push it to millions of users. This algorithmic “flatness” fosters uniformity, not only in digital content but in physical, real-world spaces and experiences.

This reduction of local, traditional, and creative ideas to uniformity, or “cultural flatness”, is not accidental. It is the product of automated recommendation systems and algorithms on social media, streaming platforms, and other digital infrastructures. The concept of cultural flatness echoes the “culture industry” theory of the 1940s, when Adorno and Horkheimer argued that culture was becoming standardized, mass-produced, and guided by commercial interests rather than by individual expression. The main difference is that algorithms now mass-produce culture faster, more invisibly, and on a global scale through feedback loops.
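To make the mechanism concrete, here is a minimal sketch of such a feedback loop in Python. It is a toy model, not any platform’s actual code: four hypothetical “styles” start with equal engagement, the recommender promotes each in proportion to its past engagement, and every impression is counted as new engagement.

import random

# Toy model of an engagement-driven recommender feedback loop.
# Hypothetical illustration; no real platform works exactly this way.
styles = {"minimalist": 1.0, "baroque": 1.0, "folk": 1.0, "streetwear": 1.0}

for _ in range(10_000):
    # Recommend a style in proportion to its past engagement...
    shown = random.choices(list(styles), weights=list(styles.values()))[0]
    # ...and count the impression itself as new engagement.
    styles[shown] += 1.0

total = sum(styles.values())
for name, score in sorted(styles.items(), key=lambda kv: -kv[1]):
    print(f"{name:>11}: {score / total:.1%} of the feed")

Run it a few times: the final shares are rarely equal, because an early random advantage compounds with every recommendation. That compounding, scaled to billions of impressions, is the flattening this essay describes.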

In today’s digital world, lifestyles, social media feeds, fashion trends, music, tourist destinations, food, and even living spaces are becoming more and more similar. Global aesthetics and personal tastes are converging on a universal visual template without cultural depth. Minimalist design, beige furniture, nude makeup, neutral-coloured clothing: this is the image that dominates social media today, accompanied by hashtags like #simpleliving and #minimalism. A minimalist look can be a symbol of calm, order, and a seemingly perfect life. The standardized look, however, is not merely aesthetic; it marks a loss of cultural variation, individuality, creativity, and, most likely, minority cultures.

Minimalist visuals perform well with the algorithm, so creators keep repeating them, and this repetition homogenizes aesthetics worldwide. Moreover, what we desire, purchase, admire, and imitate is increasingly shaped by what algorithms predict will keep us engaged. Kyle Chayka, in his book “Filterworld: How Algorithms Flattened Culture”, argues that algorithms produce sameness across global spaces, a blurring of taste, and a narrowing of experience. Chayka also argues that algorithms pressure artists and other content creators to shape their work in ways that fit the feeds. Once culture is flattened, traditional or colourful cultural trends are overshadowed by standardized imagery and the universally “Instagrammable” look. According to Chayka (2024), “The outcome of such algorithmic gatekeeping is the pervasive flattening that has been happening across culture”.

The convergence of distinct human cultures toward a reduced set of standardized global norms is diminishing the diversity and individuality of human experience. Over time, algorithms are not simply making us dress alike; they are making us think alike, desire alike, and perform identity alike, like a generation of monoculture robots.

References

Adorno, T. W., & Horkheimer, M. (2002). The culture industry: Enlightenment as mass deception. In Dialectic of enlightenment: Philosophical fragments (pp. 94–136). Stanford University Press. (Original work published 1944)

Chayka, K. (2024). Filterworld: How algorithms flattened culture. Doubleday.

Are We Becoming the Algorithm? The Psychology Behind Online Identity


Maryam Safa Schneider, Assistant Professor

ACSS Department, UCW

Interviewer: Dr. Marzieh Yousefi

Can you share a bit about your background in psychology, particularly in relation to digital behaviour or media use?

I have an academic background that bridges psychology, communication technology, and digital arts. My research focuses on how digital environments and culture influence mental health, behaviour, and identity, with particular attention to the design of systems that support wellbeing and inclusion. Much of my work examines the psychological experience of working and interacting in digital spaces, for example, exploring how technostress, digital norms, and platform design shape attention, emotion, and connection in the workplace. I’m also deeply interested in accessibility and Universal Design for Learning (UDL), and in applying these frameworks to understand how inclusive systems design can foster healthier and more equitable digital experiences across work and educational settings. In addition, I am a health and work wellness coach.

 

From a psychological standpoint, how might recommendation algorithms contribute not just to cultural homogenization, but also to shifts in individual identity, self-expression, or sense of belonging?

Psychologically, constant exposure to algorithmically amplified aesthetics and lifestyles encourages social comparison and the internalization of “norms”, especially on visually driven platforms such as Instagram. Users learn which looks, opinions, and formats receive more visibility and positive feedback, and they may gradually adjust how they present themselves, through language choice, style, or content, to align with what the system appears to reward. Research informed by social identity theory indicates that algorithmic curation can intensify identification with in-groups and exposure to specific cultural narratives, that is, the dominant ones (Carrasco-Farré et al., 2025). These “norms” or narratives favour high-engagement ideals such as curated wellness or productivity. As a result, individuals may feel only a temporary sense of belonging within algorithmically validated bubbles, while remaining disconnected from diverse offline groups. This is partly why we see anxiety on the rise, especially among Gen Z.

 

Do you think people are aware of how algorithms influence their tastes and preferences, and what psychological factors might make this influence hard to notice?

Most people know algorithms exist, but they rarely recognize how subtly they operate. Without explicit education about cognitive biases, fallacies, and how recommender systems work, most people do not have a clear mental model for questioning their feeds or their reactions to them. Understanding ideas like confirmation bias, selective exposure, and the mere-exposure effect can help people pause and ask, “Do I like this because it aligns with my values, or because it keeps being shown to me?”

This is why, in my Social Psychology class, I teach my students how biases shape perception and judgment. Toward the end of each term, I hold class discussions about how algorithms can exploit us, and how developing critical awareness can create a small but meaningful gap between “what the feed wants me to mindlessly consume and accept” and “what I consciously endorse.” In this context, the goal is not to make students distrustful, but to give them the tools to recognize that their online preferences are partly psychological and partly technical, and to encourage them to remain open to reflection and improvement.

 

In your view, how do brands, influencers, and monetization models shape users’ psychological experiences online—self-esteem, comparison, anxiety around fitting in, etc.?

Influencers and brands prioritize enhanced, materialistic content that drives engagement and, ultimately, sales. They present filtered versions of life that emphasize optimal productivity, flawless bodies, or lavish lifestyles. This constant exposure triggers upward social comparisons, in which individuals measure themselves against unattainable standards, leading to lowered self-esteem and heightened pressure to “optimize” their own lives. Monetization reinforces this by rewarding content creators who embody and promote that specific aesthetic. In this way, such online environments become a marketplace of mindless performative behaviours.

As a health and wellness coach, I frequently hear clients express anxieties rooted in social comparison with health and wellness influencers. Through that comparison, they perceive their own lifestyles as deficient. These perceptions stem from algorithmically amplified content that equates “health” with extreme routines, supplements, or aesthetics, intensifying feelings of inadequacy. Once we debunk these myths together, by examining how such content is curated and recognizing each person’s unique needs (there is never a one-size-fits-all solution), their anxiety subsides. This shows how much of the pressure is externally manufactured rather than a true benchmark of wellbeing.

 


What do you see as the psychological or social-emotional consequences of culture flattening—for individuals and for communities?

I agree that culture flattening has negative consequences, but I would also argue that it need not inevitably erode psychological depth or social-emotional richness. It can instead take the form of cultural exchange and hybridity, fostering expanded creativity, hybrid identities, and resilient global communities through cross-cultural exposure and adaptive self-expression.

I see my own identity as hybrid, and I believe this algorithmic homogenization has a flip side that sparks hybrid creativity. We can see global trends with local flavours everywhere online, in viral fusions such as TikTok dances that blend African rhythms with Western aesthetics. This type of content enhances self-expression without suppressing uniqueness.

 


What responsibilities should policymakers, designers, and communicators have in supporting users’ mental wellbeing and protecting cultural diversity in digital spaces?

Policymakers must push for transparency and accountability in algorithm design. Designers and software engineers can integrate wellbeing-informed principles into the systems they build. Educators, too, can help people understand how digital environments shape behaviour and identity. But ultimately, some measures must be enacted at the governmental level and embedded in national legislation to ensure lasting and equitable protection of people’s mental wellbeing.

 


Is there anything else you’d like to share with our UCW readers about the psychological impacts of algorithm-driven culture?

Developing digital self-awareness, questioning why a certain trend appeals to us, and asking who benefits from the content we are being shown can help us reclaim our agency. Our digital habits are shaped by design and established norms, but the good news is that they can also be reshaped through collective intention. As educators, creators, and digital citizens, we all play a role in creating cultures, online and offline.

UCW readers, including fellow educators and researchers, should also recognize that algorithmic culture subtly shapes academic research agendas, determining which topics trend as “worthy” by prioritizing clickable, high-engagement themes over nuanced, diverse inquiries into the unfiltered isness of things.

 

Reference

Carrasco-Farré, C., Grimaldi, D., Torrens, M., & Longobuco, E. (2025). Social identity theory and algorithmic bias: Ingroup and outgroup acrophily in recommender systems. Journal of Management Information Systems, 42(4), 1017–1054. https://doi.org/10.1080/07421222.2025.2561382
