Inferring the causal relations of a system is a fundamental problem of statistics. A widely studied approach employs structural causal models, which model noisy functional relations among a set of interacting variables. The underlying causal structure is naturally represented by a directed graph whose edges indicate direct causal dependencies. Under the assumption of linear relations with homoscedastic Gaussian errors, the causal graph, and thus also causal effects, are identifiable from observational data alone. Over the past decade, two main lines of research have evolved: learning the causal graph, and estimating causal effects when the graph is known. However, a two-step method that first learns a graph and then treats it as known yields confidence intervals that are overly optimistic and can drastically fail to account for the uncertainty in the causal structure. In this talk, I will address this issue and present a framework based on test inversion that allows us to construct confidence regions for total causal effects that capture both sources of uncertainty: the causal structure and the numerical size of nonzero effects.