@angle Rather than trying to fix things at the code level (or worse, predict what future AI code will do), it's probably better to have mitigations at a higher level, such as:
- Is this system independently auditable?
- Is there adequate test coverage of this?
- What are the assumptions and data sets which went into this? Are they any good? Who does oversight on that?
- Who was this system made by? Do I trust them?
- Whose interests does this system serve?
- If this system goes wrong, what is the plan B? Can I fix it? Do I have any sort of redress?
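One way to read that list is as a pre-deployment review gate. The sketch below (not from the original post; the names SystemReview and passes_review are hypothetical) just encodes the same questions as a checklist that has to be fully answered before sign-off:

```python
# Minimal sketch: the questions above as a structured review checklist.
# All names are illustrative, not an existing tool or API.
from dataclasses import dataclass, field

@dataclass
class SystemReview:
    independently_auditable: bool = False
    adequate_test_coverage: bool = False
    assumptions_and_data_documented: bool = False
    data_oversight_assigned: bool = False
    builder_trusted: bool = False
    interests_disclosed: bool = False
    fallback_plan_exists: bool = False
    redress_available: bool = False
    notes: list[str] = field(default_factory=list)

    def passes_review(self) -> bool:
        # Every higher-level mitigation must be in place before deployment.
        return all([
            self.independently_auditable,
            self.adequate_test_coverage,
            self.assumptions_and_data_documented,
            self.data_oversight_assigned,
            self.builder_trusted,
            self.interests_disclosed,
            self.fallback_plan_exists,
            self.redress_available,
        ])

review = SystemReview(independently_auditable=True, adequate_test_coverage=True)
print(review.passes_review())  # False: the remaining questions are unanswered
```

The point of the sketch is only that these checks live outside the code being reviewed: they can be applied to any system, however its code was produced.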