The rapidly increasing prevalence of information and communications technology has led to a new phenomenon called Big Data. This development is fueled by ubiquitous small devices such as sensors, growing interconnectivity, and ever-increasing storage capacity and computing power. Big Data promises many opportunities to governments, businesses, and individuals. Central to Big Data is the permanent availability of highly diverse data sets and the wealth of possible gains from utilizing and analyzing them. Germany’s chancellor Angela Merkel even calls data the “resources of the 21st century”. However, the increase in the size and complexity of the data sources is also problematic, since traditional data management tools cannot cope with these new circumstances. One sub-problem of Big Data is data integration: the process of transforming the highly diverse and numerous data sets so that they can be used in analysis and storage platforms.

One specific use case of Big Data is Open Data, which encompasses efforts to offer all kinds of data sets to users free of charge. Well-known examples of Open Data platforms are Wikipedia and Socrata. Open Data leverages the opportunities of Big Data to collect and offer diverse data sets from various users. Since this is a user-driven approach, Open Data usually does not have to deal with amounts of data so large that they can only be processed by machines, but rather with amounts where manual user intervention is still possible. Data integration in Open Data therefore means combining the computing power of machines with the data-specific knowledge of the contributors to enhance the uniformity, linkage, and level of detail of the stored data. From a psychological point of view, to actually make users participate in the integration process, the system has to make recommendations or ask questions about the data as quickly as possible. This approach is called publish-time data integration, since the actual integration work is done when the user submits their data.

There has been plenty of research on developing and evaluating individual operators that each perform one specific data transformation task. However, no research has been done on how multiple of these operators work together and to what extent they can profit from each other to improve both the quality and the performance of their computations. This gap is addressed in this thesis. We develop a prototype that implements seven different integration operators: Clustering, Column Properties, Primary Keys, Column Renaming, Inclusion Dependencies, Duplicate Detection, and Categorization. We evaluate their potential to be applied in different orders to maximize integration quality. Quality is measured by various metrics that are calculated alongside the data integration process.
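To make the idea of ordering integration operators and scoring the result more concrete, the following Python sketch chains two toy operators in every possible order and picks the order with the best score. All class names, the operator heuristics, and the quality metric here are purely illustrative assumptions, not the interface or metrics of the thesis prototype.

```python
from itertools import permutations

# Hypothetical minimal operator interface: each operator takes a table
# (a list of dict rows plus a metadata dict) and returns an enriched table.
class Operator:
    name = "base"

    def apply(self, table):
        return table

class PrimaryKeyDetection(Operator):
    name = "primary_keys"

    def apply(self, table):
        rows, meta = table
        # Mark the first column whose values are all unique as a key candidate
        # (a deliberately simplistic heuristic, for illustration only).
        for column in meta["columns"]:
            values = [row[column] for row in rows]
            if len(set(values)) == len(values):
                meta.setdefault("primary_keys", []).append(column)
                break
        return rows, meta

class DuplicateDetection(Operator):
    name = "duplicates"

    def apply(self, table):
        rows, meta = table
        # Count exact-duplicate rows as a stand-in for real duplicate detection.
        seen, duplicates = set(), 0
        for row in rows:
            key = tuple(sorted(row.items()))
            duplicates += key in seen
            seen.add(key)
        meta["duplicate_count"] = duplicates
        return rows, meta

def integration_quality(meta):
    # Toy quality metric: reward detected key candidates, penalize duplicates.
    return len(meta.get("primary_keys", [])) - meta.get("duplicate_count", 0)

def best_order(operators, table):
    # Run every operator order on a fresh copy of the table and keep the
    # order that achieves the highest quality score.
    scored = []
    for order in permutations(operators):
        rows, meta = table[0][:], dict(table[1])
        for op in order:
            rows, meta = op.apply((rows, meta))
        scored.append((integration_quality(meta), [op.name for op in order]))
    return max(scored)

rows = [
    {"id": 1, "city": "Berlin"},
    {"id": 2, "city": "Hamburg"},
    {"id": 2, "city": "Hamburg"},
]
table = (rows, {"columns": ["id", "city"]})
print(best_order([PrimaryKeyDetection(), DuplicateDetection()], table))
```

In a real pipeline the operators would of course exchange richer intermediate results (e.g. key candidates informing duplicate detection), which is exactly the kind of interaction the evaluation in this thesis investigates.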