Understanding Black-box Predictions via Influence Functions

How can we explain the predictions of a black-box model? With the rapid adoption of machine learning systems in sensitive applications, there is an increasing need to make black-box models explainable. In this paper, we use influence functions, a classic technique from robust statistics (Cook, 1977), to trace a model's prediction through the learning algorithm and back to its training data, thereby identifying the training points most responsible for a given prediction. To scale influence functions up to modern machine learning settings, we develop a simple, efficient implementation that requires only oracle access to gradients and Hessian-vector products. We show that even on non-convex and non-differentiable models, where the theory breaks down, approximations to influence functions can still provide valuable information.
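Hessian-vector products are the key primitive here: they can be computed without ever forming the Hessian, using only two backward passes (the "fast exact multiplication by the Hessian" idea). The snippet below is a minimal sketch of that trick in PyTorch, not the authors' code; the argument names and the way the loss is supplied are illustrative assumptions.

```python
import torch

def hvp(loss, params, v):
    """Hessian-vector product H @ v computed by double backprop.

    loss   : scalar training loss, built from `params` with the graph intact
    params : list of parameter tensors the Hessian is taken with respect to
    v      : list of tensors with the same shapes as `params`
    """
    # First backward pass: gradient of the loss, kept differentiable.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    # Scalar <grad, v>; differentiating it again gives exactly H @ v.
    grad_dot_v = sum((g * vi).sum() for g, vi in zip(grads, v))
    return torch.autograd.grad(grad_dot_v, params)
```

This is the only Hessian access the method needs; everything else is ordinary gradients.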
On linear models and convolutional neural networks, we demonstrate that influence functions are useful for understanding model behavior, debugging models, detecting dataset errors, and even creating visually indistinguishable training-set attacks. The discussion below follows three steps: (1) influence functions, definitions and theory; (2) efficiently calculating influence functions; (3) validations and practical use.

Why use influence functions? Data-trained predictive models see widespread use, but for the most part they are used as black boxes which output a prediction or score without indicating which training data that prediction rests on. Knowing the most influential training points lets you, for example, compress your dataset to the images most important for the prediction outcome of the test samples you care about, which can increase prediction accuracy, reduce training time, and reduce memory requirements. The brute-force way to measure influence, retraining the model once per removed training point, is far too expensive; fortunately, influence functions give us an efficient approximation.
First, definitions and theory. Influence functions efficiently estimate the effect of removing a single training data point on a model's learned parameters. The degree of influence of a single training sample z on the model parameters is not computed by retraining; instead, it is obtained by perturbing the weight that z carries relative to the other training samples and asking how the minimizer of the training loss responds. Throughout, \epsilon denotes the weight of sample z relative to the other training samples and \hat\theta denotes the parameters that minimize the empirical risk. (The paper's appendix gives a standard derivation of the influence function I_{up,params} in the context of loss minimization, i.e. M-estimation; for more details see Cook, R. D. and Weisberg, S., Characterizations of an empirical influence function for detecting influential cases in regression.)
The idea is to compute the parameter change if z were upweighted by some small \epsilon, giving us new parameters

\hat\theta_{\epsilon, z} := \arg\min_{\theta} \; \frac{1}{n} \sum_{i=1}^{n} L(z_i, \theta) + \epsilon \, L(z, \theta).
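For completeness, here is a sketch of the standard M-estimation derivation that the appendix referenced above works through; it uses nothing beyond the first-order optimality condition and a Taylor expansion, assuming the empirical risk R(\theta) = \frac{1}{n}\sum_i L(z_i,\theta) is twice differentiable and strictly convex at \hat\theta:

\begin{align*}
0 &= \nabla R(\hat\theta_{\epsilon,z}) + \epsilon \nabla_\theta L(z, \hat\theta_{\epsilon,z})
   && \text{(optimality of } \hat\theta_{\epsilon,z}\text{)} \\
0 &\approx \nabla R(\hat\theta) + \nabla^2 R(\hat\theta)\,\Delta_\epsilon + \epsilon \nabla_\theta L(z, \hat\theta)
   && \text{(Taylor expansion around } \hat\theta\text{, with } \Delta_\epsilon := \hat\theta_{\epsilon,z} - \hat\theta\text{)} \\
\Delta_\epsilon &\approx -\,H_{\hat\theta}^{-1}\, \nabla_\theta L(z, \hat\theta)\,\epsilon
   && \text{(since } \nabla R(\hat\theta) = 0 \text{ and } H_{\hat\theta} := \nabla^2 R(\hat\theta)\text{)}
\end{align*}

Dividing by \epsilon and letting \epsilon \to 0 gives the influence of upweighting z on the parameters.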
A classic result tells us that the influence of upweighting z on the parameters \hat\theta is given by

I_{up,params}(z) := \left.\frac{d\hat\theta_{\epsilon,z}}{d\epsilon}\right|_{\epsilon=0} = -H_{\hat\theta}^{-1} \nabla_\theta L(z, \hat\theta), \qquad H_{\hat\theta} := \frac{1}{n}\sum_{i=1}^{n} \nabla^2_\theta L(z_i, \hat\theta).

Because removing z entirely is the same as upweighting it by \epsilon = -\tfrac{1}{n}, the parameter change from leaving z out is approximately -\tfrac{1}{n}\, I_{up,params}(z). Chaining through the loss at a test point z_{test} gives the quantity the paper actually reports, the influence of z on the test loss:

I_{up,loss}(z, z_{test}) := \left.\frac{d L(z_{test}, \hat\theta_{\epsilon,z})}{d\epsilon}\right|_{\epsilon=0} = -\nabla_\theta L(z_{test}, \hat\theta)^\top H_{\hat\theta}^{-1} \nabla_\theta L(z, \hat\theta).

Influence functions are thus a classic technique from robust statistics for identifying the training points most responsible for a given prediction: training points with the largest positive or negative I_{up,loss} are the ones whose removal would most change the prediction outcome for the processed test sample.
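To make these formulas concrete, here is a toy check, a minimal sketch rather than the paper's code, on a model small enough that the Hessian can be formed and inverted exactly. The logistic-regression setup, sizes, and index choices are illustrative assumptions; the point is only that I_{up,loss} is a couple of gradients, one Hessian solve, and a dot product.

```python
import torch

torch.manual_seed(0)
n, d = 200, 5
X = torch.randn(n, d)
y = (X[:, 0] + 0.3 * torch.randn(n) > 0).float()

def point_loss(theta, x, t):
    # Per-example logistic loss L(z, theta).
    return torch.nn.functional.binary_cross_entropy_with_logits(x @ theta, t)

def empirical_risk(theta):
    # (1/n) sum_i L(z_i, theta), plus a small L2 term to keep H positive definite.
    return point_loss(theta, X, y) + 0.01 * (theta ** 2).sum()

# Fit theta_hat with full-batch gradient descent (good enough for a toy check).
theta = torch.zeros(d, requires_grad=True)
opt = torch.optim.SGD([theta], lr=1.0)
for _ in range(500):
    opt.zero_grad()
    empirical_risk(theta).backward()
    opt.step()
theta_hat = theta.detach().requires_grad_(True)

# Exact Hessian of the empirical risk at theta_hat (only feasible for tiny models).
H = torch.autograd.functional.hessian(empirical_risk, theta_hat.detach())

z_idx, test_idx = 7, 3  # arbitrary training / test indices
grad_z = torch.autograd.grad(
    point_loss(theta_hat, X[z_idx:z_idx + 1], y[z_idx:z_idx + 1]), theta_hat)[0]
grad_test = torch.autograd.grad(
    point_loss(theta_hat, X[test_idx:test_idx + 1], y[test_idx:test_idx + 1]), theta_hat)[0]

s_test = torch.linalg.solve(H, grad_test)   # H^{-1} grad L(z_test, theta_hat)
influence = -s_test @ grad_z                # I_up,loss(z, z_test)
print(f"I_up,loss = {influence.item():.6f}")
```

The larger this value is in magnitude, the more the prediction on the test point depends on that particular training point; multiplying by -1/n approximates the change in test loss if the training point were removed and the model retrained.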
Next, efficiently calculating influence functions. A PyTorch reimplementation of influence functions from the ICML 2017 best paper, Understanding Black-box Predictions via Influence Functions by Pang Wei Koh and Percy Liang, is available, alongside an unofficial Chainer implementation (requires Chainer v3; it uses FunctionHook). The original paper's code comes with a reproducible, executable, and Dockerized version of the scripts on Codalab, where the datasets for the experiments can also be found; if you have questions about it, contact Pang Wei Koh (pangwei@cs.stanford.edu).

In these implementations the influence I_{up,loss}(z, z_{test}) is split into two reusable pieces: grad_z, the gradient \nabla_\theta L(z, \hat\theta) of one training image, and s_test, the vector H_{\hat\theta}^{-1} \nabla_\theta L(z_{test}, \hat\theta) belonging to one test image. Most importantly, s_test depends only on the test sample, so one s_test vector is reused against the grad_z of every training image when calculating the influence. One mode computes s_test and grad_z on the fly for each test image; the other first calculates all s_test values and saves them to disk, then calculates the grad_z values for all images and saves them to disk as well, and finally computes the influences by reading both values from disk. Caching can speed up the calculation significantly, since no duplicate calculations take place across what could potentially be tens of thousands of gradient evaluations. Dependencies: NumPy, SciPy, scikit-learn, and pandas. For deep networks the exact Hessian solve in the toy example above is out of the question, which is where the gradient and Hessian-vector-product oracle comes back in: s_test is estimated stochastically using only Hessian-vector products.
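Below is a minimal sketch of that stochastic estimation, in the spirit of the recursion used by the paper and the reimplementations, not copied from either codebase. It reuses the hvp helper from the earlier snippet; the damping, scaling, and recursion-depth values, as well as the model, criterion, and loader names, are illustrative assumptions you would tune for your own setup.

```python
import itertools
import torch

def estimate_s_test(grad_test, params, model, criterion, train_loader,
                    damp=0.01, scale=25.0, recursion_depth=1000):
    """Stochastically estimate s_test = H^{-1} grad_test using only HVPs.

    grad_test : list of tensors, gradient of the test loss w.r.t. params
    params    : list of parameter tensors of `model`
    """
    h_estimate = [g.clone() for g in grad_test]
    batches = itertools.cycle(train_loader)  # sample training batches repeatedly
    for _ in range(recursion_depth):
        x, y = next(batches)
        loss = criterion(model(x), y)
        hv = hvp(loss, params, h_estimate)   # Hessian-vector product on this batch
        # Neumann-series style update:
        #   h <- grad_test + (1 - damp) * h - (H h) / scale
        h_estimate = [g + (1 - damp) * h - v / scale
                      for g, h, v in zip(grad_test, h_estimate, hv)]
    # Undo the scaling so the result approximates H^{-1} grad_test.
    return [h / scale for h in h_estimate]
```

Combining the pieces is then one dot product per training point, influence = -sum((s * gz).sum() for s, gz in zip(s_test, grad_z)), which is exactly the quantity that gets cached and reused across the training set.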
Finally, validations and practical use. In the paper's qualitative comparison of an Inception-V3 network against an RBF SVM (using a smooth hinge loss), the most influential training images show that the Inception network picked up on the distinctive characteristics of the fish. In the reimplementation referenced here, CIFAR-10 was used as the dataset, the model was ResNet-110, and the numbers above the images in the output visualizations show the actual influence value that was calculated for each training sample. (The paper: Koh, P. W. and Liang, P. Understanding Black-box Predictions via Influence Functions. In Proceedings of the 34th International Conference on Machine Learning (ICML), 2017.)

On the practical side, you can get the default config by calling ptif.get_default_config(), and I recommend you change the parameters to your liking; they are divided into parameters affecting the calculation (for example, how the s_test estimation is run) and parameters affecting the output (the outdir folder, for instance, is created automatically to prevent a runtime error). For every processed test sample the result includes helpful, a list of numbers which are the IDs of the training data samples that most support the prediction outcome of that test sample, so when testing for a single test image you can inspect its most influential training points before moving on to the next image. The most barebones way of getting the code to run is sketched below; here, config contains default values for the influence function calculation.
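A usage sketch follows. Only get_default_config() is quoted above; the package import name, the entry-point function, the config keys set here, and the model/loader variables are assumptions about the reimplementation's API, so check the README of the version you install.

```python
import pytorch_influence_functions as ptif  # assumed package name of the PyTorch reimplementation

# model, trainloader, testloader are your own trained torch.nn.Module and DataLoaders.
config = ptif.get_default_config()
print(config)                       # inspect the defaults before changing them
config['recursion_depth'] = 500     # illustrative: a parameter affecting the calculation
config['outdir'] = 'outdir'         # illustrative: a parameter affecting the output

# Assumed entry point that loops over the test set and writes the influences,
# including the `helpful` ID lists, to the output directory.
influences = ptif.calc_img_wise(config, model, trainloader, testloader)
```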
