5/13/21 — Reading Reflection 5 — Towards Fair and Privacy-Preserving Federated Deep Models

Anna Stephanie Kucinski

Deep learning models can be incredibly useful for many kinds of recognition challenges and other problems, and the larger their data set, the better their models and results tend to be. However, when creating a deep learning model, creators need to consider the limitations of their resources and the impact of those limitations on the design. Training with large data sets is expensive and time-consuming, and there are serious privacy concerns around the data that is used. Federated learning tries to improve the privacy of model training, but stakeholders who contribute large amounts of data fear that small competitors will take advantage of their data without contributing to a similar extent. Lyu et al. try to address these concerns and propose that the current thinking on fairness in contributions should be reevaluated, and that the models contributors receive should be dependent on their contributions.
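For readers who, like me, needed a refresher on the mechanics: below is a minimal sketch of the standard federated averaging loop, assuming simple NumPy weight vectors and a toy linear model. This is background only; FPPDL exchanges information quite differently, so don't read this as the paper's method.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1):
    """One round of local training: a single gradient step on a
    linear least-squares model (stand-in for private training)."""
    preds = data @ weights
    grad = data.T @ (preds - labels) / len(labels)
    return weights - lr * grad

def federated_average(local_weights, sizes):
    """The server aggregates client models, weighting each by its
    local data set size (the FedAvg rule)."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(local_weights, sizes))

# Toy run: three parties with very different amounts of private data.
rng = np.random.default_rng(0)
global_w = np.zeros(5)
parties = [(rng.normal(size=(n, 5)), rng.normal(size=n)) for n in (100, 30, 10)]

for _ in range(20):
    local_ws = [local_update(global_w, X, y) for X, y in parties]
    global_w = federated_average(local_ws, [len(y) for _, y in parties])
```

Notice that vanilla federated averaging hands every party the same global model regardless of how much they contributed; that uniform payout is exactly what Lyu et al. argue can feel unfair to large contributors.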

  • I am not sure how I feel about giving different contributors different models. It gives a lot of power and benefit to already established companies and contributors for the sake of preserving their market share. Privacy is a large concern, and there should be good faith when entering an agreement like federated learning, but I think aiming for financial fairness isn’t a healthy way of promoting FL either.
  • I was not aware that Bitcoin was built on a blockchain. I don’t have much experience with cryptocurrency or knowledge of how cryptocurrencies are created, but learning what a blockchain is and how it provides accountability legitimizes it a lot more for me (see the sketch below).
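To see why a blockchain provides that accountability, here is a toy hash chain I wrote for my own understanding (it is not Bitcoin's actual data structure): each block commits to the previous block's hash, so altering any past record invalidates everything after it.

```python
import hashlib
import json

def make_block(record, prev_hash):
    """A block stores a record plus the previous block's hash; its own
    hash covers both, chaining the whole history together."""
    body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return {"record": record, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify(chain):
    """Recompute every hash and link; any tampered block breaks the chain."""
    for i, block in enumerate(chain):
        body = json.dumps({"record": block["record"], "prev": block["prev"]},
                          sort_keys=True)
        if hashlib.sha256(body.encode()).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("genesis", "0")]
chain.append(make_block("party A earns 3 points", chain[-1]["hash"]))
chain.append(make_block("party B spends 1 point", chain[-1]["hash"]))

print(verify(chain))                             # True
chain[1]["record"] = "party A earns 99 points"   # tamper with history
print(verify(chain))                             # False
```

The tamper-evidence, not the currency, is what makes the structure feel legitimate to me.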

The Design of the FPPDL

  • Privacy Preservation — Assuming parties don’t trust each other is a safe way to cover as many FL scenarios as possible. However, similar to what I mentioned before, I feel that building trust in the first place and promoting positive collaboration could be the most beneficial approach for both parties, but especially for the smaller ones.
  • Fairness — I agree that those who put more in should generally get more out. I am not sure what types of parties would be involved in this, but if there were a large disparity between some companies, I think it may be good to consider equity over equality to account for possible barriers and privilege (see the sketch after this list).
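To make the equity-versus-equality distinction concrete, here is a hypothetical sketch (the function names and the compression formula are my own illustration, not anything from the paper): a strictly proportional split rewards raw contribution, while an equity-adjusted split dampens the gap so parties facing bigger barriers aren’t buried.

```python
def proportional_rewards(contributions, budget=100.0):
    """Equality-style split: reward strictly proportional to contribution."""
    total = sum(contributions.values())
    return {p: budget * c / total for p, c in contributions.items()}

def equity_adjusted_rewards(contributions, budget=100.0, alpha=0.5):
    """Equity-style split (hypothetical): compress contributions with an
    exponent alpha in (0, 1) so smaller parties keep a meaningful share."""
    scaled = {p: c ** alpha for p, c in contributions.items()}
    total = sum(scaled.values())
    return {p: budget * s / total for p, s in scaled.items()}

contributions = {"BigCorp": 1000, "MidCo": 100, "Startup": 10}
print(proportional_rewards(contributions))
# {'BigCorp': ~90.1, 'MidCo': ~9.0, 'Startup': ~0.9}
print(equity_adjusted_rewards(contributions))
# {'BigCorp': ~70.6, 'MidCo': ~22.3, 'Startup': ~7.1}
```

Whether dampening like this is fair to the big contributor is exactly the tension the paper is navigating.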

Implementation of the FPPDL

  • Initial Benchmarking — AIs checking data for other AIs. When a group or a chain of AIs is doing work for each other, it may be a good idea to ensure that proper checks for bias are in place and that the process is understandable at every step, much like explainable AI.
  • Privacy-Preserving Collaborative Deep Learning — Making sure this works properly is very important for the credibility of all parties involved. Further, it may be good to accompany this point system with human checks and discussion. A point system may add a feeling of safety and trust for larger parties, but communication between them and smaller parties about the possible reasons for changes in points promotes collaboration and keeps the smaller parties from being reduced to just a number in the eyes of the big parties. There also needs to be assurance that these points won’t be altered and that there isn’t a way to game the system for either type of party’s benefit. If a larger company manipulates the points, it gains an excuse to provide smaller parties with less information, funding, model access, etc. A smaller company manipulating them could realize exactly the fears of the larger party (a toy sketch of such a point system appears after this list).
  • Note on power dynamics — With the concepts of larger and smaller parties, and contributors who have more of an impact on the information given than others, there can be a power difference that should be addressed in fairness and in the implementation of FPPDL. Should someone have more influence on collaborative fairness, and if so, how should that be decided?
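As a rough illustration of the kind of point system these bullets describe, here is a toy ledger of my own (far simpler than FPPDL’s actual local-credibility and transaction-point mechanism): parties earn points by sharing, spend points to download, and an audit replays the transaction log so a balance changed outside the log is caught.

```python
class PointLedger:
    """Toy transaction-point ledger: every balance change is logged,
    so balances can be audited against the transaction history."""

    def __init__(self, parties, initial=10):
        self.balances = {p: initial for p in parties}
        self.log = [("init", p, initial) for p in parties]

    def share(self, party, points):
        """A party earns points for sharing information."""
        self.balances[party] += points
        self.log.append(("earn", party, points))

    def download(self, party, points):
        """A party spends points to download others' information."""
        if self.balances[party] < points:
            raise ValueError(f"{party} lacks the points to download")
        self.balances[party] -= points
        self.log.append(("spend", party, -points))

    def audit(self):
        """Replay the log; a mismatch suggests a tampered balance."""
        replayed = {}
        for _, party, delta in self.log:
            replayed[party] = replayed.get(party, 0) + delta
        return replayed == self.balances

ledger = PointLedger(["BigCorp", "Startup"])
ledger.share("Startup", 5)          # small party contributes, earns points
ledger.download("BigCorp", 8)       # big party spends points to download
print(ledger.audit())               # True
ledger.balances["Startup"] += 99    # out-of-band manipulation
print(ledger.audit())               # False
```

Of course, the audit log itself would need tamper resistance in practice, which is where the blockchain idea from earlier comes back in.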
