{"id":15237,"date":"2024-05-13T11:11:21","date_gmt":"2024-05-13T11:11:21","guid":{"rendered":"\/?p=15237"},"modified":"2024-05-13T11:14:44","modified_gmt":"2024-05-13T11:14:44","slug":"fairness-and-privacy","status":"publish","type":"post","link":"\/?p=15237","title":{"rendered":"Fairness and privacy: Removing context encourages unfairness"},"content":{"rendered":"<p>Han Zhao @hanzhao_ml How to ensure fairness (statistical parity) and privacy (DP) simultaneously? What are the costs of privacy and fairness upon accuracy? Excited to share our #ICML2024 work answering the two questions above!<\/p>\n<p>paper: https:\/\/arxiv.org\/pdf\/2405.04034<br \/>\ncode: https:\/\/github.com\/rxian\/fair-regression<br \/>\nReplying to @hanzhao_ml<\/p>\n<hr \/>\n<p>Han, link to abstracts to give context. If I could, I would outlaw the use of PDF on the Internet, because it strips out the information needed by your type of algorithm, and then the stripped copies propagate. LLMs are like that too.<br \/>\nRichard Collins, The Internet Foundation<\/p>\n<p>https:\/\/arxiv.org\/abs\/2405.04034<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Han Zhao @hanzhao_ml How to ensure fairness (statistical parity) and privacy (DP) simultaneously? What are the costs of privacy and fairness upon accuracy? Excited to share our #ICML2024 work answering the two questions above! paper: https:\/\/arxiv.org\/pdf\/2405.04034 code: https:\/\/github.com\/rxian\/fair-regression Replying to @hanzhao_ml Han, link to abstracts to give context. 
If I could, I would outlaw use <br \/><a class=\"read-more-button\" href=\"\/?p=15237\">Read More &raquo;<\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[73,72],"tags":[],"class_list":["post-15237","post","type-post","status-publish","format-standard","hentry","category-all-knowledge","category-all-languages"],"_links":{"self":[{"href":"\/index.php?rest_route=\/wp\/v2\/posts\/15237","targetHints":{"allow":["GET"]}}],"collection":[{"href":"\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=15237"}],"version-history":[{"count":3,"href":"\/index.php?rest_route=\/wp\/v2\/posts\/15237\/revisions"}],"predecessor-version":[{"id":15240,"href":"\/index.php?rest_route=\/wp\/v2\/posts\/15237\/revisions\/15240"}],"wp:attachment":[{"href":"\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=15237"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=15237"},{"taxonomy":"post_tag","embeddable":true,"href":"\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=15237"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}