Project Goal

This page is home to the DeExcelarator project, which aims to develop a comprehensive approach for information and knowledge extraction from spreadsheet files.

Approach

Our approach works at cell granularity, combining machine learning techniques with heuristics. We have incorporated and further extended features from related work.

Corpus

We have built a corpus of 828,252 annotated cells. It consists of spreadsheets from three corpora (FUSE, ENRON, and EUSES), covering different domains.

Tools

Our work includes a Java desktop application for annotating regions of cells in Excel spreadsheets. There is also ongoing work on a processing pipeline able to handle large corpora of spreadsheets.


About

Spreadsheet applications are among the most widely used tools for content generation and presentation in industry and on the Web. In spite of this success, there is no comprehensive approach for automatically extracting and reusing the wealth of data maintained in this format. The biggest obstacle is the lack of awareness about the structure of the data in spreadsheets. Unlike other file types, such as XML and JSON, spreadsheets carry no attached metadata describing the (structural) function of the individual units of data. This makes it difficult for machines to interpret the information maintained in these files, although the same task is rather easy for humans.

On this page, we summarize the current status of our work on table identification and layout recognition in spreadsheets. In the first stage, we have focused on discovering the layout of the data, using machine learning for classification together with a set of heuristics. We work on the cell level, considering a wide range of features not covered before by related work. We evaluated the performance of our classifiers on a large dataset covering three different corpora from various domains. Finally, our work includes a novel technique for detecting and repairing incorrectly classified cells in a post-processing step. The experimental results show that our approach delivers very high accuracy, bringing us a crucial step closer to automatic information extraction from spreadsheet files.

Layout Inference

The layout inference process is defined as a series of steps, illustrated in the figure below. Initially, the application reads the spreadsheet file and extracts the features of each non-blank cell. In the next step, cells are classified with high accuracy based on their features. Finally, a post-processing step improves the quality of the results even further by applying a set of rules that identify (most probably) misclassified cells and relabel them. The figure also includes the Table Reconstruction task, which forms a separate topic and is therefore left as future work.
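The sketch below illustrates how these three steps could be wired together on top of Apache POI (assuming POI 4.x). The helper methods extractFeatures, classify, and postProcess are placeholders for illustration only; they do not correspond to the project's actual code.

    // Rough skeleton of the layout inference pipeline; placeholder method bodies only.
    import org.apache.poi.ss.usermodel.*;

    import java.io.File;
    import java.util.*;

    public class LayoutPipelineSketch {

        public static void main(String[] args) throws Exception {
            try (Workbook wb = WorkbookFactory.create(new File(args[0]))) {
                for (Sheet sheet : wb) {
                    // Step 1: extract a feature vector for every non-blank cell.
                    Map<Cell, double[]> features = new LinkedHashMap<>();
                    for (Row row : sheet) {
                        for (Cell cell : row) {
                            if (cell.getCellType() != CellType.BLANK) {
                                features.put(cell, extractFeatures(cell));
                            }
                        }
                    }
                    // Step 2: classify each cell based on its features.
                    Map<Cell, String> labels = new LinkedHashMap<>();
                    features.forEach((cell, f) -> labels.put(cell, classify(f)));
                    // Step 3: rule-based post-processing to relabel likely misclassifications.
                    postProcess(labels);
                }
            }
        }

        // Placeholder: the real extractor computes the 43 selected features.
        static double[] extractFeatures(Cell cell) { return new double[43]; }

        // Placeholder: the real classifier is a trained model (e.g. Random Forest).
        static String classify(double[] features) { return "Data"; }

        // Placeholder: the real step applies repair rules to the predicted labels.
        static void postProcess(Map<Cell, String> labels) { }
    }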

Classification Labels

We define five building blocks for spreadsheet tables: Headers, Attributes, Metadata, Data, and Derived (see figure below). A "Header" (H) cell represents the label of a column and can be flat or hierarchical (stacked). Hierarchical structures can also be found in the left-most or right-most columns of a table, which we call "Attributes" (A), a term first introduced in (Chen 2013). Attributes can be seen as instances from the same or different (relational) dimensions, placed in one or multiple columns in a way that conveys the existence of a hierarchy. We label cells as "Metadata" (M) when they provide additional information about the table as a whole or about specific sections of it. Examples of Metadata are the table name, the creation date, and the unit for the values of a column. The remaining cells form the actual payload of the table and are labeled as "Data" (D). Additionally, we use the label "Derived" (B) to distinguish cells that are aggregations of other Data cells' values. Derived cells can have a different structure from the core Data cells, and therefore we treat them separately.
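For reference, the five labels and the single-letter codes used later in the confusion matrix can be summarized in a small enum. This is an illustrative sketch, not part of the project's code base.

    // The five building-block labels and their single-letter codes
    // (as used in the confusion matrix further below).
    public enum CellLabel {
        HEADER('h'),     // column label, flat or hierarchical (stacked)
        ATTRIBUTE('a'),  // hierarchical left-/right-most columns (Chen 2013)
        METADATA('m'),   // information about the table or its sections
        DATA('d'),       // the actual payload of the table
        DERIVED('b');    // aggregations of other Data cells' values

        private final char code;

        CellLabel(char code) { this.code = code; }

        public char code() { return code; }
    }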

The gold standard

We annotated a collection of 465 Excel sheets (216 files) from three spreadsheet corpora: FUSE, ENRON, and EUSES. To accomplish this task, we developed a specialized tool for interactive annotation of Microsoft Excel spreadsheets. In the figures below, we present statistics about these annotations.

Annotated Sheets

The selection of files was random, but with the intention of maintaining approximate proportionality to the original sizes of the considered corpora. Therefore, the largest number of sheets comes from FUSE, followed by ENRON and then EUSES.

Annotated Tables

We have also annotated the tables in the selected spreadsheets. They will be relevant for future work on automatic table identification.

Annotated Cells

The following figure displays the contribution of each corpus as a percentage of the number of cells per annotation label. On top of each column in the chart, we display the total number of cells for that label. In total, 828,252 non-blank cells were annotated.

Annotation Tool Video

Since the number of annotated Data cells, 808,179 (97.6% of all annotated cells), is overwhelmingly larger than that of the other classes (labels), we decided to downsample. This not only reduces the possibility of a biased feature selection process, but also speeds up the training and validation processes. For this purpose, from the Data region of each annotated table we consider only the first, the last, and three random rows in between. By applying this technique, the Data class was reduced to 32,905 cells. Together with the annotated cells from the other classes, the final gold standard, used for training and validation, consists of 52,970 instances.
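The following sketch illustrates this sampling scheme; the class and method names are ours, chosen purely for illustration.

    import java.util.*;

    // Illustrative sketch of the Data-region downsampling: keep the first row,
    // the last row, and three random rows in between.
    public class DataDownsampling {

        public static List<Integer> sampleRows(int firstDataRow, int lastDataRow, Random rnd) {
            Set<Integer> keep = new TreeSet<>();
            keep.add(firstDataRow);
            keep.add(lastDataRow);
            List<Integer> inBetween = new ArrayList<>();
            for (int r = firstDataRow + 1; r < lastDataRow; r++) {
                inBetween.add(r);
            }
            Collections.shuffle(inBetween, rnd);
            for (int i = 0; i < Math.min(3, inBetween.size()); i++) {
                keep.add(inBetween.get(i));
            }
            return new ArrayList<>(keep);
        }

        public static void main(String[] args) {
            // Example: a Data region spanning rows 5..20 of an annotated table.
            System.out.println(sampleRows(5, 20, new Random(42)));
        }
    }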


Cell Features

The initial set consisted of 81 cell features. Here we also included string features, such as the file and sheet name. We were able to retrieve them using Apache POI, which is the most complete Java library for working with Excel spreadsheets. Nevertheless, some of the features required a custom implementation on top of this library. Furthermore, using Weka, a well-known toolkit for data mining tasks, we binarized, cleaned, and evaluated these features. The final set contains 43 features, listed below.
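As an illustration, a handful of the content features can be derived directly from the Apache POI cell model, roughly as in the sketch below (assuming POI 4.x). The word list behind Contains_Words_Like_Total? is an assumption, and the real extractor is considerably more complete.

    import org.apache.poi.ss.usermodel.Cell;
    import org.apache.poi.ss.usermodel.CellType;
    import org.apache.poi.ss.usermodel.DataFormatter;

    // Sketch: a handful of the content features, computed for a single cell.
    public class ContentFeatures {

        private static final DataFormatter FORMATTER = new DataFormatter();

        public static void describe(Cell cell) {
            String value = FORMATTER.formatCellValue(cell);  // cell content as displayed

            int length = value.length();                                        // Length#
            int tokens = value.trim().isEmpty() ? 0
                    : value.trim().split("\\s+").length;                        // Number_Of_Tokens#
            boolean isNumeric = cell.getCellType() == CellType.NUMERIC;         // Is_Numeric?
            boolean isFormula = cell.getCellType() == CellType.FORMULA;         // Is_Formula?
            boolean startsWithNumber = !value.isEmpty()
                    && Character.isDigit(value.charAt(0));                      // Starts_With_Number?
            boolean containsColon = value.contains(":");                        // Contains_Colon?
            boolean wordsLikeTotal = value.toLowerCase()
                    .matches(".*\\b(total|sum|average)\\b.*");                  // Contains_Words_Like_Total? (word list assumed)

            System.out.printf("len=%d tokens=%d numeric=%b formula=%b numStart=%b colon=%b total=%b%n",
                    length, tokens, isNumeric, isFormula, startsWithNumber, containsColon, wordsLikeTotal);
        }
    }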

Selected Features

We have grouped the selected features into five categories based on their characteristics. The content features describe the cell value, but not its format. The features in the second group (Cell Style) characterize the styling aspects of the cell, except for the font, which is the subject of the third group. Formulas referencing other cells are the subject of the fourth group. Finally, we have defined features that describe the location (neighborhood) of the cell. The adjacent cells are referred to as "neighbors". Note that hidden and blank cells are not counted as neighbors.

Independently of the category, features can be numeric or Boolean. We mark the numeric ones with a hash sign (#) as a suffix, and the Boolean ones with a question mark (?).

Content: Length#, Number_Of_Tokens#, Leading_Spaces#, Is_Numeric?, Is_Formula?, Starts_With_Number?, Starts_With_Special_Char?, Is_Title_Case?, Is_Upper_Case?, Contains_Only_Alphabetic?, Contains_Special_Chars?, Contains_Punctuations?, Contains_Colon?, Contains_Words_Like_Total?, Contains_Words_Like_Table?, Numeric_Val_In_Year_Range?

Cell Style: Indentations#, Default_Horizontal_Align?, Center_Horizontal_Align?, Bottom_Vertical_Align?, Default_Fill_Pattern?, Is_Wraptext?, Number_Of_Cells#, Has_No_Top_Border?, Has_Thin_Top_Border?, Has_No_Bottom_Border?, Has_No_Left_Border?, Has_No_Right_Border?, Has_Medium_Right_Border?, Number_Of_Defined_Borders#

Font: Font_Size#, Default_Font_Color?, Is_Bold?, Single_Underline?

Reference: Ref_Aggregation_Formula?, Ref_Formula_Val_Numeric?

Spatial: Row_Number#, Column_Number#, Has_0_Neighbors?, Has_1_Neighbors?, Has_2_Neighbors?, Has_3_Neighbors?, Has_4_Neighbors?
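The Spatial features rely on counting a cell's non-blank, visible neighbors. A rough sketch of such a count on top of Apache POI (assuming POI 4.x) could look as follows; it is a simplified illustration, not the project's implementation.

    import org.apache.poi.ss.usermodel.*;

    // Sketch: counting the non-blank, visible neighbors of a cell
    // (the basis of the Has_N_Neighbors? features).
    public class NeighborCount {

        public static int countNeighbors(Sheet sheet, int rowIdx, int colIdx) {
            int count = 0;
            int[][] offsets = { {-1, 0}, {1, 0}, {0, -1}, {0, 1} };  // up, down, left, right
            for (int[] off : offsets) {
                int r = rowIdx + off[0];
                int c = colIdx + off[1];
                if (r < 0 || c < 0) continue;
                Row row = sheet.getRow(r);
                if (row == null || row.getZeroHeight()) continue;   // missing or hidden row
                if (sheet.isColumnHidden(c)) continue;              // hidden column
                Cell cell = row.getCell(c);
                if (cell != null && cell.getCellType() != CellType.BLANK) {
                    count++;                                        // non-blank, visible neighbor
                }
            }
            return count;
        }
    }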

Classification & Evaluation

For our evaluation, we consider various classification algorithms, most of which have been successfully applied to similar tasks in the literature. Specifically, we consider CART (SimpleCART in Weka), C4.5 (J48 in Weka), Random Forest, and Support Vector Machines (SMO in Weka, which trains the classifier using the sequential minimal optimization algorithm). For the SVM, we consider both a polynomial kernel and an RBF kernel. We evaluate the classification performance using 10-fold cross validation. The best scores are achieved by the Random Forest classifier. More detailed results can be found below.
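The evaluation can be reproduced with Weka's Java API roughly as follows. The sketch assumes the gold standard has been exported to an ARFF file (here named gold_standard.arff, an assumed file name) with the class label as the last attribute.

    import weka.classifiers.Evaluation;
    import weka.classifiers.trees.RandomForest;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    import java.util.Random;

    // Sketch: 10-fold cross validation of a Random Forest on the gold standard.
    public class CrossValidationSketch {

        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("gold_standard.arff");  // assumed export of the gold standard
            data.setClassIndex(data.numAttributes() - 1);            // class label is the last attribute

            RandomForest rf = new RandomForest();
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(rf, data, 10, new Random(1));

            System.out.println(eval.toSummaryString());
            System.out.println(eval.toClassDetailsString());  // per-label precision, recall, F1
            System.out.println(eval.toMatrixString());        // confusion matrix
        }
    }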

Classifier Evaluation

The results from the 10-fold cross validation are summarized in the following table. For each of the considered classification algorithms, we provide the precision, recall, and F1 measures per label. The values are displayed as percentages.

Classifier      Metric      Attribute   Data    Header   Metadata   Derived   Weighted Avg.
Random Forest   Precision   96.9        98.6    97.9     97.8       97.7      98.2
                Recall      96.8        99.2    98.0     94.1       94.2      98.2
                F1          96.8        98.9    98.0     95.9       95.9      98.2
J48             Precision   95.1        98.1    95.9     93.9       96.4      97.1
                Recall      94.8        98.5    96.1     92.1       93.6      97.1
                F1          94.9        98.3    96.0     93.0       95.0      97.1
Simple Cart     Precision   94.3        97.6    95.1     92.1       95.2      96.4
                Recall      94.3        98.0    95.3     89.8       92.2      96.4
                F1          94.3        97.8    95.2     91.0       93.7      96.4
SMO Poly        Precision   89.7        95.1    93.7     89.8       93.6      94.0
                Recall      94.7        96.9    90.1     83.1       84.9      94.0
                F1          92.2        96.0    91.9     86.3       89.1      94.0
SMO Rbf         Precision   88.6        93.7    91.6     91.9       94.9      92.8
                Recall      91.9        97.4    89.0     70.3       80.9      92.8
                F1          90.2        95.5    90.3     79.6       87.3      92.7

Confusion Matrix

The following table represents the confusion matrix from the evaluation of the Random Forest classifier, which had the highest accuracy in the 10-fold cross validation. The values in bold font represent the number of correctly classified cells for each label. The ones marked in light red are the most problematic misclassifications, since they occur much more often than for other combinations of labels. The second column, which displays the cases where cells were predicted as Data, contains many such values.

      a      d      h      m      b   ← Classified as
   4038    123     15     33     13   a = Attribute
    130  32581     83     51     60   d = Data
     20    119   9195     23      2   h = Header
     37    122     94   3110      7   m = Metadata
     28    197     18      6   2865   b = Derived

Full Dataset Evaluation

We also trained and ran 10-fold cross validation with the Random Forest classifier on the full dataset of annotated cells (828,252) using the selected features. The F1 measure does not change much for the Attribute (96.6%) and Header (97.7%) cells. The classifier scores 99.9% F1 for Data cells, since the full dataset contains a vast number of instances of this class. The F1 measure decreases for Metadata and Derived, to 93.5% and 94.9%, respectively.

To provide a more concrete picture of the classification accuracy, the chart below displays the percentage of sheets for which the classifier misclassified two or fewer cells. We have stacked the cases with zero misclassifications and those with one or two.


Downloads

Use the following links to access the components used for this research project.

* The Spreadsheet Annotator tool is still in the development phase. It has been tested so far on Windows 7 SP1 with Microsoft Office Excel 2013.

Contact


Mail:
elvis.koci@tu-dresden.de
Address:
Dep. of Computer Science
Technische Universität Dresden
Room 3044
Noethnitzer Str. 46
01062 Dresden
Phone:
+49 (351) 463 38283
Fax:
+49 (351) 463 38359

maintained by Maik Thiele (maik.thiele@tu-dresden.de)