Thursday, May 1, 2025

Final Post --- Society with Technology

 

[Image: New York World's Fair, GM Futurama 1964 vintage advertising poster]


    The Futurama exhibit of 1964 promised a world where technology would be a savior to humanity, but that old optimism feels hollow as technology progresses. The visuals in Mad World depict a broken, technology-obsessed society with no empathy or care. Steve Cutts' Man goes further, suggesting there should be no optimism at all, as it shows humanity's reckless, technology-enabled exploitation of the planet.

    My relationship with technology would be considered a work in progress. It's a powerful tool to have: my phone gives me access to global news, academic research, and any sort of social media my heart desires. The internet changed lives and the way we, as a society, function as a whole, but there's a catch to this wonderful tool. I've lost meaningful hours of my life doom scrolling, chasing dopamine hits from likes and followers. Studies show a connection between rising youth suicide rates and technology use, more specifically the cyberbullying it enables. I worry that I and others aren't in control of our lives anymore, as technology and the internet dictate my attention and what I do rather than the other way around. To combat this, I try to set limits: no screens after 10:30pm, and instead I read a section of my book for 30 minutes to wind down. I also fact-check everything I see online, since you can't always tell what's real and what's fake, and some of those news articles can instill serious anxiety.

[Image: 4 Ways To Stay Connected With Family & Friends, SquareTrade Blog]

    When it comes to my friends and family, I would still say technology is a mixed connection, but I value it more there than in my free time. Sometimes it's a lifeline: a way to talk to my mom and dad from hundreds of miles away, talk to my boyfriend over long breaks, and keep up with my friends back home on social media. My parents also use apps, like the health app connected to their smartwatches, to track their health, which is seriously important. My friends, though, have expressed some downsides to technology, often feeling anxious when posting on social media. Technology can erode a simple hangout, turning a connection that should have the participants' full attention into a photo op and a status update. Technology can connect us in unimaginable ways, but it also has the power to disconnect us from what matters most.

    Googling myself, nothing of mine popped up. I am not that invested in LinkedIn yet, so my profile does not show up since I have barely set it up, and no social media accounts or pictures appear either. I think this is good, because if an employer were to look me up, nothing comes up that would put me in bad standing with them. Once I get my LinkedIn figured out, it should be smooth sailing for my digital footprint. Being intentional online is both an opportunity and a liability, because in an age where technology rules everything, you have to be smart about your presence online.

    Our relationship with technology requires attention. For me, it's about using it purposefully and attentively, making sure that I stay in control. For society, it's about harnessing technology's potential while staying aware of its harms. Technology can be a partner, but it can also be our enemy if we let it.

Friday, April 25, 2025

EOTO 2 Reax (Post #11)

 


[Image: Media theory diagram, Gatekeeping]


    After seeing everyone's great presentations, I decided to react to Maddie's presentation on Gatekeeping. Gatekeeping is the process of selecting and filtering the items of media that audiences get to consume. Gatekeepers control what passes through the "gate" to reach audiences, and their choices can be influenced by personal bias, personal experience, or organizational policy. News editors, for example, sift through global stories and choose what aligns with their channel's ethics or audience expectations.

    The first person to make sense of this theory was Kurt Lewin (1890-1947), a German-born psychologist and pioneer of social psychology who revolutionized how we understand human behavior. He coined the term "gatekeeping" to describe the process of filtering information in order to block harmful content. What began as a psychology concept became a cornerstone of communication studies as media grew.

Gatekeeping also sets standards for information value, helping us determine what's "real" and what's "fake" in an incredibly crowded media landscape. It can influence policy, act as a watchdog on society, or reinforce audience bias. Along with major mainstream companies, people just like us can also gatekeep by filtering content based on personal relevance.

Later scholars such as Shoemaker and Reese expanded gatekeeping with the hierarchy of influences model, which outlines the factors that shape media messages across five levels:

- INDIVIDUAL LEVEL: Personal biases, experiences, or preferences of the gatekeeper

- MEDIA ROUTINES LEVEL: Newsroom practices like deadlines, professional writing standards, and newsworthiness judgments

- ORGANIZATIONAL LEVEL: Goals, policies, or sector-specific priorities of media outlets

- SOCIAL INSTITUTIONS LEVEL: Cultural norms, ideologies, or pressures that influence content

- SOCIAL SYSTEM LEVEL: Broader societal structures such as government regulations or economic forces

This model shows that gatekeepers don't operate in isolation; their decisions emerge from a complex relationship between personal and systemic factors.
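To make the model concrete, here is a minimal sketch of my own (not something from Maddie's presentation) that treats the five levels as a chain of filters a story must pass before reaching an audience. The story fields and predicates are hypothetical stand-ins for each level:

```python
# Hypothetical illustration: each gate is one level of the hierarchy of
# influences, applied in order. A story must pass every gate to be published.

story = {"topic": "city budget", "verified": True, "fits_deadline": True,
         "matches_outlet_policy": True, "legal_to_publish": True}

gates = [
    ("individual",    lambda s: s["topic"] != "celebrity gossip"),  # editor's personal bias
    ("media routines", lambda s: s["fits_deadline"]),               # newsroom deadlines
    ("organizational", lambda s: s["matches_outlet_policy"]),       # outlet goals/policies
    ("social institutions", lambda s: s["verified"]),               # professional norms
    ("social system",  lambda s: s["legal_to_publish"]),            # regulation, economics
]

for level, passes in gates:
    if not passes(story):
        print(f"blocked at the {level} level")
        break
else:  # runs only if no gate blocked the story
    print("story passes through every gate and reaches the audience")
```

Flip any one field to False and the story is blocked at a different gate, which mirrors how a decision at any single level can keep content from ever reaching audiences.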

    Gatekeeping isn't just about control, as one might think; it's about relevance. For the most part it ensures we engage with content that matters to us, but because it can also be harmful, it raises questions about bias and power. Who can accurately and unbiasedly decide what's worthy and what gets left behind?

Thursday, April 24, 2025

In the Age of AI (Post #10)


[Image: In the Age of AI, PBS Distribution]


    The documentary In the Age of AI by Frontline truly dissects the double-edged sword that is artificial intelligence. The second hour of the documentary was fascinating, and I was struck by the balance between AI's potential and its unsettling risks, topics that were reflected in our class discussions and presentations on technology's societal impact. The positives of AI are undeniable: it gives healthcare faster diagnostics, enhances productivity through automation, and personalizes education. The documentary does an excellent job of highlighting how AI can optimize industries across the board, and these advances could eventually help solve pressing global challenges. However, the trade-off for these benefits is the risk of privacy loss. AI thrives on data, and because that data is more often than not personal, it raises concerns about consent and surveillance. The documentary notes how companies, without any transparency or consent, will intrude on and erode individual autonomy. While data-driven services improve user experiences, they also create risk. In a day and age where AI is almost everywhere, privacy has become a luxury.

    National security is another complexity in the world of technology and AI. While AI strengthens defense through analytics and automated systems, the documentary also warns us about an AI arms race. Nations deploying AI in warfare could destabilize geopolitics, especially if autonomous weapons act without any human supervision.

    Online security and identity theft are equally troubling. AI can support cybersecurity by detecting threats in record time, yet it can empower cybercriminals at the same time, through deepfakes and AI-powered phishing scams that are incredibly harmful and make identity theft a lot easier. The idea that AI has this duality, as both protector and predator, left me feeling uneasy and confused about its presence in my life.

    What surprised me most is how deeply AI is integrated into daily life, often without any public debate. The example of China's social credit system seemed incredibly dystopian, and it sparked questions about how we balance innovation with accountability. As a society we are increasingly powered by AI without even realizing it, and we have to start asking questions such as: can we regulate AI globally in order to counteract these threats?

    This documentary is definitely a must-watch for anyone navigating AI, as it helps us understand that AI's trajectory depends on human choices. It also gives great examples of why AI can be helpful but also harmful, and it can be a real eye-opener for the everyday citizen who doesn't know what's happening behind closed doors.

    

Friday, April 18, 2025

Smartphones Through The Lens Of Diffusion Of Innovation Theory (Post #9)

 


[Image: The Diffusion of Innovation model, Smart Insights]


    The Diffusion of Innovation is a theory that seeks to explain how, why, and at what rate new ideas and technologies spread. It was popularized by the American communication theorist and sociologist Everett Rogers. The model above shows how the theory is normally understood and taught, and it is commonly used by marketers to understand where consumers sit on the adoption curve and when they will adopt a new product or service.

    In a time where smartphones are everywhere, it's hard to imagine life without them, but there was a period when these pieces of technology were not the norm. The tale of how smartphones became nearly universal is a great example of the Diffusion of Innovation theory in action. Breaking the theory down a bit more, it classifies adopters into five distinct categories (as seen in the model above): Innovators, Early Adopters, Early Majority, Late Majority, and Laggards.
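One common way to put numbers on this adoption curve is the Bass diffusion model, which is closely related to Rogers' categories (the post doesn't name Bass, so treat this as a companion sketch rather than Rogers' own math; the p and q coefficients below are illustrative textbook-style values, not fitted smartphone data):

```python
# Minimal Bass diffusion sketch: each period, non-adopters adopt either from
# external influence (p, the "innovator" pull) or from word of mouth
# (q, the "imitator" pull exerted by people who have already adopted).

M = 1.0    # total market potential, normalized to 100%
p = 0.03   # coefficient of innovation (illustrative value)
q = 0.38   # coefficient of imitation (illustrative value)

adopted = 0.0
for year in range(1, 16):
    new = (p + q * adopted / M) * (M - adopted)  # Bass equation, discrete form
    adopted += new
    print(f"year {year:2d}: new {new:6.1%}, cumulative {adopted:6.1%}")
```

Run over enough periods, the cumulative column traces the familiar S-curve: the early years are dominated by Innovators and Early Adopters, and the later years by the Late Majority and Laggards.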

[Image: Smartphone History: The Timeline of a Modern Marvel]

    When the first iPhone was released in 2007, it was revolutionary, but its downsides were its high cost and limited functionality compared to phones now. The Innovators were the first to see the product's potential and were willing to pay the price of being the first enthusiasts. The Early Adopters (trendsetters, influencers, and businesspeople) recognized the technology not just for its functionality but for its social value; they were the ones who turned the smartphone into a symbol of status, connectedness, and productivity. The Early Majority jumped in when the smartphone became more "popular" (when the price came down, more apps were available, and the infrastructure made it more useful). By the time the Late Majority entered the world of this technology, smartphones weren't as optional as before; they were a necessity of everyday life and the norm. The Laggards resisted the change until it was almost unavoidable. Think of the Boomer generation holding onto their flip phones for as long as possible.

    We have come to see that what makes a new technology popular isn't how new or how cool it is, but the people who see its potential and make it easy to use, give it visible benefits, and make it compatible with existing human habits. That's what the diffusion theory helps people understand, and smartphones are no different.

Monday, April 14, 2025

EOTO 2 (POST #8)

[Image: Major milestones in the history of automated face recognition]


    Facial recognition technology is software that has been around for more than 50 years, with many people around the world contributing to its success. As defined by the Department of Homeland Security, facial recognition technology is a contemporary security solution that automatically identifies and verifies the identity of an individual from a digital image or video frame. It can be compared to other biometric technologies and is used for a wide range of activities. Although its success has been a team effort spanning decades, the first innovator to capture the idea of facial recognition software was Woodrow W. Bledsoe, along with his team of researchers.

    Bledsoe and his researchers ran experiments from 1964 to 1966 to see if computers could recognize human faces. To carry out the experiments, the team used rudimentary scanners to map out a person's eyes, nose, and hairline, and then checked whether the computer could match those characteristics to the correct person. Unfortunately, the experiments were unsuccessful, with Bledsoe stating: "The face recognition problem is made difficult by the great variability in head rotation and tilt, lighting intensity and angle, facial expression, aging, etc." These difficulties are why facial recognition became a global project: researchers all over the world, across different eras, have been able to put their own spin on the software and solve each of the issues Bledsoe and his team faced. Thanks to improvements in camera technology, mapping processes, machine learning, and processing speeds developed by other researchers, facial recognition has become far easier to achieve and use.


[Image: What Is Facial Recognition and How Does It Work?]


    Today, most of us know facial recognition through Apple and the iPhone (although theirs is far from the only facial recognition technology). A successful run generally starts with face detection, where the camera detects and locates an image of a face. Once the camera locates a face, the software performs a face analysis, reading the geometry of your face and focusing on key points such as the distance between your eyes, the depth of your eye sockets, the distance from forehead to chin, the shape of your cheekbones, and the contour of your lips, ears, and chin. Once the software has captured the necessary imaging, it converts the image into data, essentially turning your face into a mathematical formula called a faceprint. Think of a faceprint as the thumbprint of your face: this code is unique to you and is almost impossible to recreate. The faceprint is then compared against multiple databases in order to find a match.
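Here is a minimal sketch of that detect-analyze-encode-compare pipeline using the open-source face_recognition Python library. This is not what Apple's Face ID actually runs (Face ID uses depth maps and on-device neural networks), and the image file names below are placeholders:

```python
# A minimal sketch of the detect -> analyze -> encode -> compare pipeline,
# using the open-source face_recognition library. File names are placeholders.
import face_recognition

# Step 1, face detection: find where the face is in the enrollment photo.
enrolled = face_recognition.load_image_file("enrolled_user.jpg")
locations = face_recognition.face_locations(enrolled)

# Steps 2-3, analysis + faceprint: encode the facial geometry as a
# 128-number vector, this library's equivalent of a "faceprint".
# (Assumes exactly one face was found; a real system would check.)
known_encoding = face_recognition.face_encodings(
    enrolled, known_face_locations=locations)[0]

# Step 4, comparison: measure how close a new frame's faceprint is to the
# stored one; a small distance means it is likely the same person.
probe = face_recognition.load_image_file("camera_frame.jpg")
probe_encoding = face_recognition.face_encodings(probe)[0]

distance = face_recognition.face_distance([known_encoding], probe_encoding)[0]
is_match = face_recognition.compare_faces([known_encoding], probe_encoding)[0]
print(f"distance = {distance:.3f}, match = {is_match}")
```

The 128-number encoding plays the role of the faceprint described above: two photos of the same person should produce encodings that sit a small distance apart, and that distance check is all the database match really is.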


[Image: Things You Need to Know Before Installing a Facial Recognition System, HackerNoon]


    Technology that recognizes a person by their iris or thumbprint is slowly going out of style, because facial recognition is so natural and convenient. Nobody can recognize a person from their thumbprint or iris, but a picture of someone's face says a thousand words. Thanks to this convenience, the technology is used for a multitude of things, such as:

  • Unlocking phones: Used in smartphones like iPhones to protect data and prevent unauthorized access.
  • Law enforcement: Used to match mugshots with databases and identify suspects in the field via mobile devices.
  • Airports and Border control: Speeds up travel with biometric passports and enhances security at checkpoints and events.
  • Finding missing persons: Helps locate missing individuals by matching faces in public spaces to databases.
  • Reducing retail crime: Identifies known criminals entering stores, helping prevent theft and fraud.
  • Improving retail experience: Recognizes returning customers to suggest products or enable quick checkout via “face pay.”
  • Banking: Allows secure, password-free transaction authorization through face scans.
  • Marketing and advertising: Tracks customer reactions to ads or products and tailors content using facial cues.
  • Healthcare: Used for patient ID, emotion detection, and ensuring medication adherence.
  • Tracking student or worker attendance: Scans faces to log attendance in schools and workplaces.
    And so much more. People don't realize how this technology is everywhere, scanning almost everything you do. We have reached a period in time where going anywhere and doing anything can involve some form of facial recognition, whether you realize it's happening or not, and they don't even need you to know, because your data is already stored and at their disposal.

    While it can be scary to know your data is sitting in multiple companies' databases, ready to be scanned and used, facial recognition has some significant advantages, including:
  • Increased security: Helps identify criminals and locks personal devices or home systems securely.
  • Reduced crime: Deters petty crime and secures systems by replacing hackable passwords.
  • Greater convenience: Enables contactless payments and fast, seamless identity checks.
  • Faster processing: Verifies identity in seconds, aiding both security and efficiency.
  • Integration with other technologies: Works well with existing systems, reducing setup costs.
    Just like everything in the world, facial recognition may have its advantages, but it is also a cause for concern for some people due to:
  • Surveillance: Can enable mass tracking of innocent people, raising fears of restricted freedom.
  • Scope for error: Mistaken identity is possible due to camera angles or appearance changes.
  • Breach of privacy: Faces can be stored without consent, sparking ethical and legal concerns.
  • Massive data storage: Requires large, costly data sets that small companies may struggle to manage.
Although it's extremely difficult to rid your everyday life of facial recognition technologies, there are some precautions you can take to stay as safe as possible, including:
  • Limiting what you share online
  • Wearing clothing that may interfere with cameras, such as hats or reflective material
  • Adjusting the privacy settings on your device to not include facial recognition 
  • Staying aware of your surroundings
  • Staying up to date and speaking up about your concerns





Sunday, April 6, 2025

Why Don't We Hear Them? (Post #7)


[Image: War! What Is It Good For?: Antiwar Images of the 20th Century, PRINT Magazine]


    Fast forward to the United States today: we are involved in military operations around the world, from drone strikes to military aid, and it's extremely easy to tune into mainstream media and hear about what we are doing overseas. There are discussions about strategy, foreign policy goals, and the political implications of these actions, but something is always missing from them: unapologetic antiwar voices.

    While exploring websites such as Antiwar.com and The American Conservative, I found a lot that surprised me, not only the content itself but the perspectives that come with it. These sites are filled with well-written, reasonable takes on topics such as United States military spending and the human cost of perpetual war. The writers don't cut corners or sugarcoat the topics they are trying to get across; they flatly question the ethics, motivations, and long-term consequences of American intervention in global conflicts. So the question still stands: why aren't we hearing them?

    Let's be real: almost none of us have ever heard of these websites, and that's not an accident. These perspectives are often pushed to the side, sometimes fully ignored, by mainstream media outlets. This raises the question of who actually controls the narrative of the media we digest. Is it because the sheer idea of war sells? Or maybe it's just easier to frame war as inevitable? Whatever the reason, it's clear we are missing something vital.

    So next time you hear something about a war or a big military budget increase, ask yourself: what voices am I not hearing in this discussion?

Wednesday, April 2, 2025

Carrier Pigeons (EOTO - Someone else's technology) Post #6

[Image: Top Tips For Managing Your Carrier Pigeons, Imperial War Museums]


    Carrier pigeons (also known as mail pigeons, messenger pigeons, or homing pigeons) were used in wartime to communicate and transfer information in a more "reliable" way. Their first use was actually in ancient times, when they were utilized by the Greeks and Romans. They were also used in 1890 to carry news from other cities.

    Carrier pigeons have natural homing abilities that allow them to find their way back to their loft. People would take them from their loft, attach a message to them, and then release them; the pigeon would return to its loft, where the recipient could retrieve the message.


[Image: Exciting tales and top secret work of pigeons in the First World War, Science Museum Blog]


    Some of the most famous carrier pigeons in history are Cher Ami, who saved a group of American soldiers in World War One; G.I. Joe, who saved 1,000 lives in World War Two by flying twenty miles in twenty minutes; and Commando, who flew 90 missions in and out of France to deliver important information. These birds, although to some they may just be birds, saved a lot of lives in the war, making carrier pigeons extremely important at the time; they will forever go down in history as heroes.


[Image: The British Army entrusted its secrets to birdbrains, National Army Museum]