In reading the B.C. Post-Secondary Digital Literacy Framework, what struck me most were two underlying assumptions, which in this reflection I hope to, if not challenge, at least interrogate.
The first is an assumption that applies to a specific section: “Being able to differentiate between truth and misinformation”. This contains an underlying assumption that belief in misinformation stems from poor skills in analytically distinguishing truth from falsehood. This seems like it should be an obvious fact, but it is not beyond questioning. Below is a video by Dan Olson, In Search of a Flat Earth (2020), in which Olson analyzes the internet presences of the flat earth and QAnon movements as they existed at that time. The key section for our purposes runs from 27:40 to 29:50, during which Olson states, “the end goal of conspiratorial beliefs is to simplify reality … Most people don’t actually believe Flat Earth because they were persuaded by shoddy evidence, or they found other evidence to be less persuasive about the nature of the physical world, they do so because it says something they already believe about the nature of the social world.” To flat earthers, a flat Earth would prove that God created it in an exceptional way and that a powerful group is hiding that fact. Because this reinforces their wider social beliefs, they will continue to believe the Earth is flat, and will never run out of new evidence, unless they let go of those core assumptions. One’s ability to determine truth from falsehood is not useful when one is not willing to employ it.
Consider an example one might find more sympathetic. My YouTube Shorts feed is filled with videos chronicling the failings of the Cybertruck. A recent favorite is about a “cyber cooler” for the truck, which Tesla sells for hundreds of dollars and which will not fit in most Cybertrucks: if the bed is warped by even a centimeter, it won’t slide in. I believe that Elon Musk believes racist and hateful things in large part because he is incompetent and not particularly intelligent. Because the Cybertruck being terrible is evidence that Musk is stupid, and because that supports my underlying assumption about the general stupidity of attention-craving neo-Nazis, I do not bother to fact-check these videos; I accept them effortlessly, even though they are quite shoddily made. The only difference between me and Olson’s flat earther is that I have the self-awareness to identify this tendency, and at least an intention to check my facts if ever my opinion on them becomes materially important to myself or the world.
An objection one might raise is that access to quality education is another quite important difference between me and the average flat earther. A poll by the University of New Hampshire did show that a higher level of education was positively correlated with disagreement that the Earth is flat, but it also showed a positive correlation between higher education and the belief that scientists exaggerated the danger of COVID-19, and no correlation at all between level of education and the belief that vaccines contained microchips. The assumption that false beliefs arise from a failure to spot misinformation is, rather ironically, an assumption that humans are rational creatures who base their beliefs on the information they gather, and this is not universally the case. These considerations do not necessarily undermine the goal of promoting digital literacy; being better at spotting misinformation has never hurt anyone, and other aspects of digital literacy, like those focused on digital citizenship, come off mostly unscathed. I’m not sure, though, that educational design can effectively incorporate this concern. It takes personal connections and an individual’s own initiative to change the core assumptions that lead to conspiratorial or anti-science beliefs, and I’m not convinced that such change can even originate in a classroom. But if an educational framework can effectively do so, it will probably be penned by an educator, not a saxophone performance major.

Another underlying assumption is an acceptance of a highly individualistic framework. The section that most clearly exemplifies this is under Digital Wellbeing: “A digitally literate person will have healthy boundaries with digital technologies, use them intentionally and will not use digital technologies in ways that harm others.” One can easily present a cynical framing of this passage: the government stands by as enormously wealthy companies spend unfathomable amounts of money building systems to capture as much of your attention as possible, actively damaging your mental health and general societal wellbeing to generate as much revenue as they can, and it responds to that situation by saying “hey, you should get digitally literate and have some healthier boundaries.” Here we see a systemic problem being addressed by attempting to systematically confer individual skill in dealing with it. This reflects an unthinking acceptance of the status quo, which is not particularly surprising in a document from a branch of government that lacks the power to change that status quo. I bring this up because I want to be clear that I would not expect this document to be a radical solution to all of our digital problems via demands for sweeping change. The cynical framing also rejects individualism too enthusiastically, I think. As I mentioned above, no one is hurt by learning good digital literacy, and we should possess these skills whether or not social media spaces are well regulated. I simply hope that this interrogation of two underlying assumptions helps us more deeply understand digital literacy, and the structural forces and limits that shape this particular attempt to improve it.