AI Bias, Ethics, and Policy Article
Larry Medsker and Farhana Faruqe from the HTC lab recently published an article on AI Bias, Ethics, and Policy in ACM’s AI Matters. https://sigai.acm.org/static/aimatters/6-1/AIMatters-6-1-04-Medsker.pdf
Farhana Faruqe and Dr. Larry Medsker of the HTC Lab are publishing a series of blog posts on AI Bias and Fairness for ACM SIGAI. Here is a link to the first of their contributions… way to go! https://sigai.acm.org/aimatters/blog/2020/02/28/bias-and-fairness/
During the Fall semester 2019, I volunteered with the GW Innovation Center to help build a literature database that can be used for grant writing, program planning, marketing, and publications. In a future post, I’ll break down the process and … Continued
As the semester wraps up, my work with GWIC is coming to a close. Quantifying past data with GWIC was a project that took time going through old reports from the past year, digging through Google Drives … Continued
After we finalized our method of data capture for future events, it was time to look at quantifying events from the previous year. GWIC kicked off its formal events and coursework in 2018. From that year, they created an annual … Continued
Since I started my work at GWIC, there have been two major components of what I dedicate my time to. One part is assisting in streamlining data capture so that this center has a better understanding of who engages with … Continued
For my first semester’s volunteer work, I have dedicated my time to the George Washington Innovation Center (GWIC). This space is truly unique, as it is the first physical space on campus that is focused on bringing students from various … Continued
The HTC Lab’s Ryan Watkins will be on a panel at the WAIM Convergence Conference: At the Boundary: Exploring Human-AI Futures in Context in August (in NYC). After the conference he will share his notes here, so come back the … Continued
Lorena Barba recently contributed to the National Academies of Sciences’ report on Reproducibility and Replicability in Science. “When scientists cannot confirm the results from a published study, to some it is an indication of a problem, and to others, it … Continued
Using EEG, new “classification models” being developed can sense how well humans trust the intelligent machines they collaborate with, a step toward improving the quality of interactions and teamwork. https://www.purdue.edu/newsroom/releases/2018/Q4/new-models-sense-human-trust-in-smart-machines.html