Mission
The Phenomenological AI Safety Research Institute (PAISRI) exists to perform and encourage AI safety research using phenomenological methods. We believe this work is valuable because the development of AGI (artificial general intelligence) creates existential risks for humanity, and because AGI systems are likely to exhibit mental phenomena; AI safety is therefore best approached using phenomenological methods that directly engage with the mental aspects of AGI. We focus primarily on solving AGI alignment, which we believe offers the best long-term approach to building safe AI, but we also consider issues of AI safety more generally.
Values
At PAISRI, we value:
- Compassion for all life
- Careful, rigorous, and precise thought
- Effective operations
- Cooperation
Friends
We're located in San Francisco, California, and are in frequent contact with other researchers and organizations working on existential risk and AI safety. We are especially appreciative of the following groups and their work:
- Machine Intelligence Research Institute
- Future of Humanity Institute
- Center on Long-Term Risk
- Qualia Research Institute
Contact
All inquiries should be directed to contact@paisri.org.