Learning, Incentives, and Optimization
for Networked Systems
The emergence of edge and fog computing allows an ever-larger array of devices in the Internet-of-Things, ranging from cloud servers to smartphones to low-power sensors, to run an ever-greater range of mobile applications that take advantage of these devices' data collection, processing, and communication capabilities. Running such applications requires devices to cooperate, which entails incentivizing devices (which may have different owners) to work with each other, planning for uncertainty in application needs or device resource availability, and optimizing the contributions of heterogeneous devices. By learning, incentivizing, and optimizing device interactions in networked systems, our research enables new applications to be deployed and existing applications to achieve better scale and efficiency.
Our research considers mechanisms deployed at multiple layers of the traditional computing stack, ranging from interactions between human application users and device owners to scheduling algorithms for wireless networks. It can be roughly divided into three parts, which respectively consider markets for network and compute resources, distributed learning algorithms designed to run on networked devices, and prediction algorithms and models for data with networked structure.
Network and Computing Marketplaces
The proliferation of IoT and machine learning applications has substantially increased the utilization of network and computing resources, leading to higher operating costs that may not be economically sustainable. Our work on smart data pricing (SDP) develops economic theories of the markets for data and computing resources that underlie the success of the Internet, and of the mechanisms that govern applications' competition for device resources. Such markets aim to efficiently allocate resources to devices according to their heterogeneous, and potentially dynamic, needs. A common challenge is that user demands vary over time, which motivates dynamic pricing mechanisms in which prices adapt so that users pay for the amounts of resources that match their demands at different times.
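The idea of prices adapting to demand can be made concrete with a toy sketch: a provider iteratively adjusts a per-unit resource price until users' aggregate demand matches capacity. The log-utility demand model, the user weights, the capacity, and the price-update rule below are all illustrative assumptions, not a mechanism from any specific system.

```python
# Toy sketch of a demand-matching pricing mechanism (all parameters assumed).
# Each user i has log utility w_i * log(d) - p * d, so its demand at price p
# is d_i = w_i / p; the provider raises the price when total demand exceeds
# capacity and lowers it otherwise (a simple tatonnement process).

def demand(weights, price):
    """Per-user demand under log utility: each user requests w / price."""
    return [w / price for w in weights]

def clearing_price(weights, capacity, price=1.0, step=0.1, iters=1000):
    """Adjust the price in proportion to excess demand until it clears."""
    for _ in range(iters):
        excess = sum(demand(weights, price)) - capacity
        price = max(1e-6, price + step * excess / capacity)
    return price

weights = [1.0, 2.0, 3.0]   # heterogeneous user valuations (assumed)
capacity = 10.0             # total resource units available (assumed)
p = clearing_price(weights, capacity)
allocations = demand(weights, p)
# Allocations end up proportional to the users' weights, and their sum
# matches the capacity.
```

With these numbers the price converges to 0.6, so the three users receive allocations proportional to their weights; heterogeneous needs are served without overcommitting the shared resource.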
Planning for uncertainty
Many emerging applications that utilize edge and fog computing are based on machine learning and artificial intelligence, which have achieved remarkable success in areas such as vision, gaming, and autonomous vehicles. Running such machine learning algorithms in practice, however, requires considering how heterogeneous devices can contribute to them, accounting for the data devices collect, the local computations they run, and the updates they make to common models or inferences. Our work designs and analyzes new machine learning algorithms that both incentivize devices to make more useful contributions and optimally leverage these contributions in training models or making inferences from them.
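One common pattern for combining local computations and updates to a common model is federated averaging: each device runs a few gradient steps on its own data, and a server averages the local models, weighting each device's contribution by its data size. The sketch below is a minimal illustration for a scalar model; the synthetic (noiseless) data, learning rate, and weighting scheme are assumptions for clarity, not any particular published algorithm of ours.

```python
# Minimal sketch of federated averaging for a scalar model y = w * x.
# All data, rates, and the size-based weighting are illustrative assumptions.

TRUE_W = 3.0  # ground-truth slope used to generate the synthetic data

def local_update(w, data, lr=0.01, epochs=5):
    """A few epochs of gradient descent on one device's local data."""
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x  # gradient of the squared error
    return w

def federated_round(w_global, device_data):
    """Each device trains from the global model; the server averages the
    resulting local models, weighted by local data size."""
    total = sum(len(d) for d in device_data)
    local_models = [local_update(w_global, d) for d in device_data]
    return sum(len(d) / total * w for w, d in zip(local_models, device_data))

# Two heterogeneous devices with different amounts of local data (assumed).
device_data = [
    [(x, TRUE_W * x) for x in (0.1, 0.5, 0.9)],
    [(i / 10, TRUE_W * i / 10) for i in range(1, 11)],
]

w = 0.0
for _ in range(30):
    w = federated_round(w, device_data)
# After a few rounds, w is close to TRUE_W even though no device shares
# its raw data with the server.
```

The size-based weighting is one simple way to account for heterogeneous contributions; designing weights (and incentives) that reflect how useful each device's data actually is is exactly where the research questions above arise.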
Learning with limited resources
The pervasiveness of Internet-connected devices in the Internet-of-Things allows these devices and their human owners to cooperate with each other by forming spatial and social relationships, respectively, in physical (e.g., smart cities) and virtual (e.g., online social networks) communities. Our work models these relationships both to predict future user and device interactions from historical data and to design mechanisms for devices to run these applications. Predicting and incentivizing user interactions in such settings is particularly difficult because many users are present, with dynamic and heterogeneous spatial and social relationships. For example, vehicles can move toward many different locations in a city, and each direction of movement has a different impact on road congestion and on other vehicles' chosen movements. Our ongoing work uses machine learning approaches to handle these complexities.
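As one small illustration of predicting future interactions from historical data with networked structure, the sketch below ranks candidate user pairs by a common-neighbors score on a toy interaction graph; the edge list and the scoring rule are assumed purely for illustration, and real systems would use far richer spatial and temporal features.

```python
from itertools import combinations

# Toy link-prediction sketch: score non-adjacent user pairs by how many
# neighbors they already share in the historical interaction graph.
# The edge list below is an assumed example, not real data.

history = [("a", "b"), ("a", "c"), ("b", "c"), ("b", "d"), ("c", "d")]

neighbors = {}
for u, v in history:
    neighbors.setdefault(u, set()).add(v)
    neighbors.setdefault(v, set()).add(u)

def common_neighbors(u, v):
    """Number of users who have interacted with both u and v."""
    return len(neighbors.get(u, set()) & neighbors.get(v, set()))

# Rank the pairs that have not yet interacted: more shared neighbors
# suggests a higher chance of a future interaction.
candidates = [(u, v) for u, v in combinations(sorted(neighbors), 2)
              if v not in neighbors[u]]
ranked = sorted(candidates, key=lambda pair: -common_neighbors(*pair))
```

Here the only non-adjacent pair, ("a", "d"), shares two neighbors and is therefore predicted as the most likely next interaction. Heuristics like this break down as relationships become dynamic and heterogeneous, which is what motivates the learned models mentioned above.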
Incentivizing spatial mobility