We consider a multi-operator multi-access edge computing (MEC) network for applications with dependent tasks. Each task consists of jobs executed according to logical precedence constraints modelled as a directed acyclic graph, where each vertex is a job and each edge is a precedence constraint, so that a job can start only after all of its preceding jobs are completed. Tasks are executed by MEC servers with the assistance of workers, i.e., nearby edge devices. Each MEC server acts as a master that decides which jobs to assign to its workers. The master's decision problem is complex because its workers may also be associated with other masters in proximity; hence, the resources available at each worker depend on the job assignments of all neighboring masters. Yet, since masters make their decisions simultaneously, no master knows the concurrent decisions of its neighbors. Moreover, some masters may belong to competing operators that have no incentive to exchange information about their decisions. To address these challenges, we formulate a novel framework based on a graphical stochastic Bayesian game in which masters play under uncertainty about their neighbors' decisions. We prove that the game admits a perfect Bayesian equilibrium (PBE) and develop new Bayesian reinforcement learning and Bayesian deep reinforcement learning algorithms that enable each master to reach the PBE independently.