The digital age poses new challenges for epistemology. Digital technologies have become central to how we form, revise, and maintain our beliefs. How should we approach this recent development as epistemologists? What is the epistemological significance of our increasing reliance on anonymous online sources, social media, personalized news feeds, and search engines? What does the widespread use of AI and opaque algorithms mean for our lives as knowers, testifiers, and reasoners? Do new epistemic responsibilities arise in the digital world? How can we, as epistemologists, contribute to making sense of these developments?
One thing we can do is help identify the epistemic risks associated with these technological trends. As some have already noted, technologies that rely on AI and opaque algorithms may, for example, perpetuate and accentuate biases against marginalized groups, foster epistemic bubbles and echo chambers, facilitate the spread of toxic misinformation (propaganda, hoaxes, conspiracy theories, fake news, deepfakes), and produce outputs that lack justification. Some of these risks constitute obstacles to acquiring knowledge or justified beliefs about important matters. Others may constitute or perpetuate various forms of epistemic injustice; such injustices may, for example, be embedded in the labeled data sets used to train artificial neural networks. These are some of the issues we would like to discuss at this year's annual meeting of the Canadian Society for Epistemology.
This is the first day of the three-day annual conference of the Canadian Society for Epistemology.