GCPUG hosted a meetup on 20 February 2019 at Google Singapore. If you are interested in the talks, refer to the meetup page for more details:
There were three talks at this session.
Topic: Using Machine Learning to Meet Data Privacy Requirements
Speaker: Jason Quek from Avalon Solutions
Synopsis: The General Data Protection Regulation states that European citizens have the right to request access to their personal data, and that photos of themselves are considered personal data. With a large database of photos, companies risk running afoul of regulations, which can cost many millions in fines.
During this session, we will demonstrate how we use Machine Learning to solve this problem at different levels, through facial recognition models and categorization of unstructured data, e.g. DNS entries. As data privacy becomes a larger issue in Singapore, some of these techniques can be reused to detect data breaches and keep personal data from being compromised.
Topic: Using BigQuery for Large Data Processing
Speaker: Nito Buendia, Customer Solutions Engineer, Google
Bio: Nito is a Customer Solutions Engineer at Google, working on building scalable automation and data solutions for Google's largest advertisers. During his tenure at Google, he has mainly focused on automation and on delivering solutions that let teams operate at 10x scale. He is passionate about empowering businesses through digitization and technology.
Synopsis: BigQuery is Google Cloud's highly scalable, enterprise-ready data warehouse, which lets you focus on analyzing large data sets without worrying about the infrastructure. In this session, we will learn what BigQuery is and the benefits it can bring to your business and data projects. We will also explore some enterprise architecture examples and showcase how you can optimize and improve the performance of your analyses.
Topic: Responsible AI Practices: Fairness in ML
Speaker: Andrew Zaldivar, Developer Advocate, Google AI
Bio: Andrew Zaldivar is a Developer Advocate for Google AI. His job is to help bring the benefits of AI to everyone. Andrew develops, evaluates, and promotes tools and techniques that can help communities build responsible AI systems, writing posts for the Google Developers blog and speaking at a variety of conferences. Before joining Google AI, Andrew was a Senior Strategist in Google's Trust & Safety group, where he worked on protecting the integrity of some of Google's key products by using machine learning to scale, optimize, and automate abuse-fighting efforts. Prior to joining Google, Andrew completed his Ph.D. in Cognitive Neuroscience at the University of California, Irvine and was an Insight Data Science fellow.
Synopsis: AI systems are enabling new experiences and abilities for people around the globe. The risk is that any unfairness in such systems can also have a wide-scale impact. Thus, as the impact of AI increases across sectors and societies, it is critical to work towards systems that are fair and inclusive for all. This talk will share some of our current work and recommended practices for building fairer and more inclusive AI systems.