Reflections on Digital Literacy topics, inquiry into core uses of Microsoft Excel

Weekly Reflection: Datafication, Cybersecurity, and Accessibility

This week featured three presentations on distinct but related topics. The first to discuss is Bonnie Stewart’s presentation on datafication.

Sometimes a good chorister enjoys being preached to. Stewart’s talk on datafication outlines many of the ways the powerful exert control by reducing lives into data that can be analyzed, commodified, and sold. Many of these ideas are familiar to me, and the lens Stewart applies to the social forces she covers is quite similar to the one I try to apply in my own understanding of current affairs.

The reductivity of data is particularly consequential in EdTech, where a primary purpose of much educational technology is an improved ability to surveil the progress and activities of students. That surveillance can grant a useful level of detail, or it can feed a “teach to the test” mindset that optimizes educational practice toward whatever benchmark is easiest to measure. It should not go unsaid that systems are built with purpose, and the benchmark a system measures can be designed to push the interests of the unelected heads of large companies, interests which may or may not align with societal well-being.

A cluster of security cameras
Surveillance video cameras, Gdynia, 2007, Paweł Zdziarski, Wikimedia Commons, CC BY-SA 3.0

A lot of conversation around the datafication of educational practice calls to mind Goodhart’s Law: when a measure becomes a target, it ceases to be a good measure. Making a single number the target invites enormous exploitation to make that number go up, in ways that can be entirely divorced from genuine improvement. The most famous example is the Cobra Effect, wherein the British Raj attempted to reduce cobra populations in Delhi by putting a bounty on cobras, which led to cobra-breeding farms set up to harvest more snake corpses, and ultimately to a rise in cobra populations. This incident likely didn’t actually happen; I can find no reliable source for it, and the portion of this article above the paywall suggests historians dispute it, but it’s demonstrative regardless. Musk and his DOGE agency seem to be enacting Goodhart’s Law on purpose, or at least with reckless negligence, aiming to cut the largest possible amount of government spending without much regard for how much damage they do to basically everything. The reduction of people to data essentially turns everything into a perverse incentive; humanity and societies can perhaps be understood more thoroughly with detailed statistics, but to base policy and worldview exclusively on numbers isn’t rational, it’s just inhumane.

Wency Lum offered a basic overview of cybersecurity threats and some best practices for protecting oneself from attacks. A primary focus was identifying phishing emails, which, when one is diligent, is not exceptionally difficult. UVic often sends its own phishing emails, linking to a “you fell for it” page, to train people to spot them; the threat of phishing certainly became more tangible when I clicked one of those links in my first year. The emails seem to have had the opposite effect on a coworker in ResLife, who told me the other day that she saw one, thought “I bet that’s one of those phishing emails,” clicked the link anyway, and was satisfied when she was right and it went to the UVic phishing warning page. This really highlights a point Lum brings up: cybersecurity is a collective effort that relies as much on people as on the systems built for them.

Charlie Watson’s talk on accessibility outlines various conditions requiring accommodation and many of the existing tools that aim to meet those needs. Watson also lays out techniques for ensuring digital design is accessible both to everyday users and to the tools that facilitate digital engagement.

An image of a ramp and staircase leading to the platform of a train station
Commuter rail platform ramp, stairs, Peoria Station, 2016, xnatedawgx, Wikimedia Commons, CC BY-SA 4.0

A core framework I picked up on was the chain of design intentionality required for disabled people to function in society: for people to participate, they need technology built for their needs, and for that technology to function, infrastructure must be built to facilitate it. A person needs a mobility scooter to move; a mobility scooter needs a ramp to get up an incline; infrastructure is built for tools, which are built for people. To pursue digital accessibility, you must understand the internet as infrastructure: just as a mobility scooter is useless in a building full of stairs, a screen reader is impaired in its function when it encounters design that doesn’t use heading structure properly, relies on coloured text to convey information, or uses images of text instead of actual text. It is always worth affirming that accessibility isn’t a problem that can be solved externally; it must be built into the core design of our societal infrastructure lest disabled people be left behind.
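These screen-reader pitfalls are concrete enough to check mechanically. As a minimal sketch (not any of the tools Watson covered; the class name and sample markup are my own invention), here is a short Python snippet using the standard library’s HTML parser to flag two of the issues above: skipped heading levels and images missing alt text.

```python
from html.parser import HTMLParser

class AccessibilityChecker(HTMLParser):
    """Flags two screen-reader pitfalls: skipped heading
    levels and images without alt text. (A simplification:
    it also flags empty alt="" used for decorative images.)"""

    def __init__(self):
        super().__init__()
        self.last_level = 0  # most recent heading level seen
        self.issues = []

    def handle_starttag(self, tag, attrs):
        # h1..h6: report jumps that skip a level (e.g. h1 -> h3)
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            level = int(tag[1])
            if self.last_level and level > self.last_level + 1:
                self.issues.append(
                    f"heading jumps from h{self.last_level} to h{level}")
            self.last_level = level
        # images: report missing (or empty) alt text
        elif tag == "img" and not dict(attrs).get("alt"):
            self.issues.append("image without alt text")

checker = AccessibilityChecker()
checker.feed("<h1>Title</h1><h3>Skipped</h3><img src='chart.png'>")
print(checker.issues)
# → ['heading jumps from h1 to h3', 'image without alt text']
```

Real audit tools such as browser accessibility inspectors check far more than this, but even a toy checker makes the point that heading order and alt text are structural properties a machine can verify.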

Weekly Reflection: Social Annotation

This week’s centerpiece was a talk by Remi Kalir on social annotation: the practice of collaboratively adding insights to existing texts in a forum with some degree of public accessibility, whether open to the general public or restricted to a smaller group. As part of his definition, Kalir invoked public protest art like that pictured below as an example of annotation in a social space, a method of making statements through existing cultural artifacts.

A woman and child wearing Black Lives Matter shirts look at a fence covered in taped-on messages in support of the BLM movement. Text on the image reads "annotation is an everyday literacy practice"
Screen grab from Remi Kalir on Social Annotation

Reframing protests like this as annotation is clearly applicable and brings a certain intellectual satisfaction, but at the depth Kalir was able to reach in the talk, I am left unclear on what exactly the framework does to expand our understanding of these acts. Interpreting these signs as annotations differs from my usual reading of them as public art or political speech, but I am unclear on how it might be a better or worse way of seeing them. I would hardly consider this a criticism, though, as I assume these questions are answered in Kalir’s book on annotation.

The main digital aspect of the talk is hypothes.is, a tool for shared annotation of digital resources that allows people within a group to annotate existing texts and see the annotations others in the group have made. I have experience using Perusall, an online platform that is executed differently but can be summarized identically. The course I am taking that uses Perusall fails to incorporate it into its educational design, mainly because the instructor implemented it at the request of some students and had never heard of it before the class started. In that course group, you can find notes on the readings from exactly one student, and, to be completely honest, I’m judging them the whole time. They always do the readings before me, and I always see their comments and think “that one’s a bit shallow,” or “that’s not really what the author is saying,” or “that point gets addressed a couple of pages later.” I do consider this a me problem; the point of social annotation as a vehicle for learning is not to share your most polished, insightful musings or, for that matter, to read other people’s annotations with a critical eye and rate them out of ten. Social annotation requires both vulnerability and a willingness to engage in a way that doesn’t punish vulnerability, and I will be the first to admit I have work to do on both fronts.

A graphic showing letter grades A+ to F in a gradient from green to red
Energy efficiency label A Plus, 2017, Loominade, Wikimedia Commons, CC BY-SA 4.0

Social annotation, like many other progressive educational practices, can only be as powerful as a student’s willingness or capacity to engage with it. Education as we know it is designed to revolve students’ experiences around grades and deadlines; the goal of classes, from the perspective of the incentives at play, is to get the highest grades with the fastest (which often means the least) work. I aim for high grades to earn and maintain the scholarships that cover my tuition while my physical disability makes it difficult to work, so I feel particularly vulnerable to how these incentives shape our educational experiences. Even keeners who always go for an A+ will focus on completing objectives and no more, because doing more leaves less capacity for the next required task on the list, disability or not.

I’m just a bit too prideful to use Perusall to write whatever comes into my head, and I’m not getting a grade for writing out considered, insightful responses to the texts, so I keep my notes to myself. Rather than a criticism of the technology, I consider this an assessment of how my profile as a student pulls me away from educational practices that require active engagement without concrete guidelines for success. The only way for progressive practices like social annotation to make an impact is as part of a fundamental shift in educational priorities away from benchmarks and deadlines.

Weekly Reflection: Generative AI and the Slow Death of Cognition

This fireside chat with Lucas Wright makes me concerned for the state of our minds going forward. The offloading of cognitive tasks to bland, forest-burning bots that regurgitate the average of all the questionably legally obtained data they hold does not inspire or excite me; the most positive feeling it instills is a sense of resignation to the continuing decline of our ability, and motivation, to think for ourselves.

Lucas Wright in the recorded lecture demonstrating use of ChatGPT
Lucas Wright & Valerie Irvine Fireside Chat

Plato was known to be opposed to writing. He commented on its invention in the Phaedrus, claiming that students “will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.” I used to find this fact reassuring; it shows that people have been sounding alarms about new technology making us stupider for thousands of years, yet humanity marches on. The rise of generative AI has changed my opinion, and the same fact now only fills me with dread. In the chat, Wright demonstrates the process of creating a presentation on evaluating judgements: he tells the bot to collect sources, has the bot summarize them, uses a bot to turn those summaries into diagrams for the presentation, and then has the bot create a learning activity relating to the information. The act of knowledge communication here is treated as a discardable obstacle, and the cognitive abilities to research, understand writing, and communicate one’s ideas are discarded with it. Just as Plato could never have preserved our memories and stopped us from relying on paper, we will not be able to stop humanity from giving up on finding, understanding, and communicating information without offloading the task to water-polluting data skimmers brought to you by the Alphabet Corporation.

Black and white image of Carl Sagan speaking at a lectern
Carl Sagan speaks at Cornell University, 1987, Kenneth C. Zirkel, Wikimedia Commons, CC BY-SA 4.0

Wright mentions a move from a “search and create” model to a “generate and evaluate” model. I believe this shift turns knowledge communication into an empty task. Ask Carl Sagan, David Attenborough, or Bill Nye what they think about removing “create” from the process of communication. These communicators demonstrate that knowledge is granted power, influence, and accessibility through the way it is communicated, and they show the difference made when it is communicated passionately and creatively. Even at the smaller scale of presentations, reports, or blogs, knowledge has meaning because it is important to human beings, and we should be purposeful in what we say and how we say it, lest information become meaningless to us. Every word in, for example, a scientific paper has a purpose. These papers are long because they contain nuance, detail, and often the personality and opinions of their writers. AI summaries exist to remove all of these things. They create easily consumable bullet points of only what the model “thinks” is essential, and in the process deprive the user of the experience of judging and interpreting information for themselves. Reports and papers are (ideally) created with intent by dedicated, knowledgeable people who communicate specific facts in a specific manner to underscore their importance, and relying on quick AI summaries bulldozes this work in a depressingly disrespectful fashion. Evaluating AI summaries is not enough: you either lose the voice, intention, and nuance of the authors, or you check the articles thoroughly enough to ensure nothing was missed, in which case the AI summary is nothing but a redundancy.

On the subject of redundancies, Wright mentions his AI emailing tool and talks about how AI will give way to new forms of communication. He uses a bot to change “no worries” into “Dear ____, You don’t need to worry,” and he imagines a second person who, unwilling to read the email themselves, will use a bot to summarize it back down to “no worries.” The only inefficiency here worth addressing is AI itself. We are so afraid of being curt (and so offended that someone might be curt with us) that we have built a gazillion-dollar atmosphere-hole-punching machine and used it to add and remove formalities.

screenshot of Lucas Wright's email responder. Lucas types "Yes I have dont worrry", the chat bot responds with "Here's a concise and professional response: 
Subject: Re: Power Bill
Dear [Sender's Name],
Yes, I have checked the power bill today - no need to worry"
Lucas Wright’s email responder, Lucas Wright and Valerie Irvine Fireside Chat, 3:30

Wright is later asked how he copes with the environmental damage done by these generative models. He says it is unfair that consumers have their feet held to the fire when corporations are the ones who should be held responsible. This often-valid criticism is being used here as a thought-terminating cliché to absolve Wright of any personal responsibility: “Corporations are damaging the planet, not individuals, so I won’t stop throwing my used car batteries into the ocean, thank you very much.” In this blog I have tried to keep much of my criticism focused on the field Wright describes rather than on him specifically, but in this case some individual commentary is warranted. On a daily basis he uses and endorses tools that have not yet become so ubiquitous as to be required, so he is part of the problem of how their proliferation opposes sustainability, and the shallowness of his deflection verges on the parodic.

When we invent a technology that carries a function, it replaces that function in us. GPS has largely replaced our inner compass, constant calculator access has made us worse at mental math, and paper has made us worse memorizers. I do not care how practical generative AI can be at replacing our ability to research, process, and communicate information; these are abilities humanity should never leave behind. We must at some point draw a line at what we can’t be bothered to do on our own, and for me, that line falls far earlier than giving up on reading and writing anything longer than two sentences.

Weekly Reflection: Open Educational Resources

Cable Green speaking in the recorded lecture
Cable Green on Open Educational Resources

This locking down of information stifles innovation. Lacking access to research across a scientific field means slower progress and expensive redundancy, with different organizations doing the same work. With course materials priced so high, keeping textbooks up to date costs either schools or students too much, which makes education less accessible, which means fewer qualified people grow into important fields. Pursuing a profit motive means making everything cost as much as you can get away with, which means locking things down wherever you can, which means slowing society down.

Reflection: BC Digital Literacy Frameworks

In reading the B.C. Post-Secondary Digital Literacy Framework, what struck me most were two underlying assumptions which, in this reflection, I hope to, if not challenge, then at least interrogate.

The first assumption applies to a specific section: “Being able to differentiate between truth and misinformation.” This phrasing assumes that belief in misinformation stems from poor skills in analytically distinguishing truth from falsehood. That seems like it should be an obvious fact, but it is not beyond questioning. Below is a video by Dan Olson, In Search of a Flat Earth (2020), in which Olson analyzes the internet presences of the flat earth and QAnon movements as they existed at the time. The key section for our purposes runs from 27:40 to 29:50, during which Olson states, “the end goal of conspiratorial beliefs is to simplify reality … Most people don’t actually believe Flat Earth because they were persuaded by shoddy evidence, or they found other evidence to be less persuasive about the nature of the physical world, they do so because it says something they already believe about the nature of the social world.” A flat Earth would prove to flat Earthers that God created the world in an exceptional way and that a powerful group is hiding it, reinforcing their wider social beliefs, so they will believe the Earth is flat and never run out of new evidence unless they let go of those core assumptions. One’s ability to determine truth from falsehood is of no use when one is not willing to employ it.

A romantic era painting of a bearded man in shabby clothes sitting on a rock
L’Esule by Antonio Ciseri, 1860s: Captures a Romantic individualistic aesthetic presenting solitude even in the poverty and desolation of exile as aspirationally powerful and masculine.

Another underlying assumption is the acceptance of a highly individualistic framework. The section that most clearly exemplifies this is under Digital Wellbeing: “A digitally literate person will have healthy boundaries with digital technologies, use them intentionally and will not use digital technologies in ways that harm others.” One can easily present a cynical framing of this passage: the government stands by as enormously wealthy companies spend unfathomable amounts of money building systems to capture as much of your attention as possible, actively damaging your mental health and general societal wellbeing to generate as much revenue as they can, and the government responds by saying “hey, you should get digitally literate and have some healthier boundaries.” Here we see a systemic problem being addressed by attempting to systematically confer the individual skills for coping with it.

This reflects an unquestioning acceptance of the status quo, which is not particularly surprising in a document from a branch of government that lacks the capability to change that status quo. I bring this up because I want it clear that I wouldn’t expect this document to be a radical solution to all of our digital problems via demands for sweeping change. The cynical framing also rejects individualism too enthusiastically, I think. As I have previously mentioned, no one is hurt by learning good digital literacy, and we should possess these skills whether or not social media spaces are well-regulated. I simply hope that by interrogating these two underlying assumptions we can more deeply understand digital literacy and the structural forces and limits that apply to this particular attempt to improve it.

© 2025 EDCI136 Portfolio
