BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//pretalx//pretalx.com//pyconde-pydata-berlin-2023//talk//CBHYXG
BEGIN:VTIMEZONE
TZID:CET
BEGIN:STANDARD
DTSTART:20001029T040000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:CET
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20000326T030000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
TZNAME:CEST
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:pretalx-pyconde-pydata-berlin-2023-CBHYXG@pretalx.com
DTSTART;TZID=CET:20230419T105000
DTEND;TZID=CET:20230419T112000
DESCRIPTION:Is your model prejudiced? Does it deviate from the prediction
 s it ought to make? Has it misunderstood the concept? In artificial inte
 lligence and machine learning\, the word "fairness" comes up constantly
 : it describes the quality of being impartial and free of bias. Fairnes
 s in ML is essential for contemporary businesses: it builds consumer co
 nfidence\, shows customers that their concerns matter\, and helps ensur
 e adherence to guidelines set by regulators\, thereby upholding the pri
 nciple of responsible AI. In this talk\, we'll explore how certain sens
 itive features influence the model and introduce bias into it\, and loo
 k at how we can make it better.
DTSTAMP:20260415T102947Z
LOCATION:B07-B08
SUMMARY:Thou Shall Judge But With Fairness: Methods to Ensure an Unbiased M
 odel - Nandana Sreeraj
URL:https://pretalx.com/pyconde-pydata-berlin-2023/talk/CBHYXG/
END:VEVENT
END:VCALENDAR
