Research Area: Artificial Intelligence
We are seeking a postdoctoral researcher to study the governance of foundation models as part of a new focused initiative on AI policy. This initiative aims to supply evidence-based research that will sharpen the U.S. and global AI governance conversation, including current legislative and regulatory proposals.
In response to the increasingly significant role of foundation models in our lives, governments around the world are proactively developing policies to govern these technologies and their downstream uses. Stanford stands at the forefront of this dynamic field, blending technical knowledge with legal and policy expertise in high-impact research, including On the Opportunities and Risks of Foundation Models, the Foundation Model Transparency Index, and Considerations for Governing Open Foundation Models. The researcher will work in a rich, interdisciplinary milieu that bridges computer science, law, and public policy, with a unique platform to lead AI policy research that is both technically sound and institutionally pragmatic, placing them at the vanguard of shaping the future of AI governance.
The postdoc will be housed at the Stanford Institute for Human-Centered AI, jointly advised by Professor Percy Liang (computer science / Center for Research on Foundation Models) and Professor Daniel E. Ho (law, political science, computer science by courtesy / Regulation, Evaluation and Governance Lab).
Example Projects:
Provide a deep analysis of the opportunities and risks of open foundation models, including the historical impact, benefits, management, and responsibilities of open-source and open-innovation approaches, to guide policy and innovation efforts.
Study transparency requirements for the practices of foundation model developers and adverse event reporting mechanisms as concrete policy proposals to make the impact of these technologies precise.
Investigate proposed AI regulatory frameworks, such as the EU AI Act, licensing requirements, and regulatory thresholds, assessing their methodologies and impacts.
Develop evidence-based research to inform and influence policy-making processes, ensuring regulations are technologically feasible and practically implementable.
Qualifications:
Ph.D. in a related discipline (e.g. computer science, public policy, information science) or J.D.
Demonstrated interest in bridging academic research with policy impact.
Strong collaborative skills, the ability to work well in a complex, multidisciplinary environment across multiple teams, and the ability to prioritize effectively.
Highly self-motivated and able to thrive under a distributed supervision structure.
Salary:
$80,000-$90,000. The pay offered to the selected candidate will be determined based on factors including (but not limited to) the qualifications of the selected candidate, budget availability, and internal equity.
Appointment Term:
This is a 1-year position, with potential for renewal.
Appointment Start Date:
As soon as possible
To apply:
To apply, please submit an application, including a brief cover letter, CV, links to your top two research papers, and two letters of recommendation (SlideRoom will prompt you to enter your recommenders’ email addresses to request the letters).
Application deadline:
Rolling, with a closing date of February 15, 2023. Strong preference will be given to early applicants.
Stanford Departments and Centers:
Institute for Human-Centered Artificial Intelligence (HAI)
Center for Research on Foundation Models (CRFM)
Regulation, Evaluation and Governance Lab (RegLab)
Questions:
HAI-policy@stanford.edu
The job duties listed are typical examples of work performed by positions in this job classification and are not designed to contain or be interpreted as a comprehensive inventory of all duties, tasks, and responsibilities. Specific duties and responsibilities may vary depending on department or program needs without changing the general nature and scope of the job or level of responsibility. Employees may also perform other duties as assigned.
Stanford is an equal employment opportunity and affirmative action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status, or any other characteristic protected by law.