AI Ethics and Governance

Explore the ethical implications and governance frameworks for artificial intelligence.

Question 1: Algorithmic Bias

What is algorithmic bias?

A. When AI systems reflect or amplify existing prejudices in their training data or design
B. When AI favors complex algorithms over simple ones
C. When AI systems make random errors
D. When humans prefer one algorithm over another

Question 2: Narrow vs General AI Ethics

What is the difference between narrow AI and general AI in the context of ethical concerns?

A. Narrow AI presents fewer ethical concerns because it's limited to specific tasks
B. Narrow AI requires more oversight while general AI is self-regulating
C. Narrow AI affects individuals while general AI affects societies
D. Narrow AI has immediate impacts while general AI concerns are only theoretical

Question 3: Impact Assessments

What is the purpose of algorithmic impact assessments?

A. To measure the business value of AI systems
B. To test the accuracy of algorithms
C. To evaluate potential negative impacts of algorithmic systems on various stakeholders
D. To determine how fast an algorithm can process data

Question 4: Explainability Principle

What is the principle of “explainability” in AI ethics?

A. The requirement that complex AI systems be able to explain their decisions in human-understandable terms
B. The need for AI systems to be explainable to other AI systems
C. The ability of AI to explain human behavior
D. The requirement that AI developers explain their code to regulators

Question 5: Black Box Problem

What is the “black box problem” in AI ethics?

A. When AI hardware malfunctions
B. When AI systems make decisions through processes that are not transparent or understandable
C. When AI training data is encrypted
D. When AI produces unexpected errors

Question 6: Distributive Justice

What is a core concern of “distributive justice” in AI ethics?

A. Whether AI benefits and harms are distributed fairly across society
B. Whether AI processing power is distributed efficiently
C. How AI processes are distributed across servers
D. How AI code is distributed to developers

Question 7: Meaningful Human Control

What is the concept of “meaningful human control” in AI governance?

A. Ensuring humans perform all AI training
B. Requiring that humans maintain decision-making authority over AI systems
C. Preventing AI from controlling human systems
D. Delegating only simple tasks to AI

Question 8: Value Alignment

What does “value alignment” refer to in AI ethics?

A. Ensuring AI systems act in accordance with human values and goals
B. Aligning AI processing with business value
C. Making sure AI provides economic value
D. Ensuring all team members share the same values

Question 9: Ethical Washing

What is “ethical washing” in AI development?

A. The process of checking AI code for ethical issues
B. Attempting to make unethical AI systems appear ethical through superficial measures
C. Removing biased data from training sets
D. Testing AI for vulnerabilities

Question 10: Ethical Approaches

What is the difference between deontological and consequentialist approaches to AI ethics?

A. Deontological focuses on following rules while consequentialist focuses on outcomes
B. Deontological is for commercial AI while consequentialist is for academic AI
C. Deontological considers short-term impacts while consequentialist considers long-term
D. Deontological is stricter while consequentialist is more lenient