Table class for working with data and schema.

# Table.load(source, schema = NULL, strict = FALSE, headers = 1, ...)

Format

R6Class object.

Value

Object of R6Class.

Methods

Table$new(source, schema, strict, headers)

Use Table.load to instantiate the Table class.
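
For example, a minimal sketch of loading a table (the CSV path is hypothetical; as noted in Details, Table.load returns a future that is resolved with value()):

library(tableschema.r)
library(future)

# Hypothetical local CSV path; any source supported by Table.load could be used
def <- Table.load("data/cities.csv")

# Table.load works asynchronously, so resolve the future to get the Table instance
table <- value(def)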

iter(keyed, extended, cast=TRUE, relations=FALSE, stream=FALSE)

Iterates through the table data and emits rows cast according to the table schema. Data casting can be disabled.

keyed

Emit keyed rows - TRUE/FALSE

extended

Emit extended rows - TRUE/FALSE

cast

Disable data casting if FALSE

relations

List object of foreign key references in the form of JSON {resource1: [{field1: value1, field2: value2},...],...}. If provided, foreign key fields will be checked and resolved to their references

stream

Return a Readable Stream of table rows if TRUE
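
A brief sketch of calling iter(), assuming the table from the Table.load example above; consuming the emitted rows or the returned stream is omitted here:

# Keyed rows, cast against the table schema
rows_iter <- table$iter(keyed = TRUE, extended = FALSE, cast = TRUE)

# Raw (uncast) rows as a Readable Stream
raw_stream <- table$iter(cast = FALSE, stream = TRUE)
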
read(keyed, extended, cast=TRUE, relations=FALSE, limit)

Reads the whole table and returns it as an array of rows. The number of rows can be limited.

keyed

Flag to emit keyed rows - TRUE/FALSE

extended

Flag to emit extended rows - TRUE/FALSE

cast

Disable data casting if FALSE

relations

List object of foreign key references in the form of JSON {resource1: [{field1: value1, field2: value2},...],...}. If provided, foreign key fields will be checked and resolved to their references

limit

Integer limit of rows to return, if specified
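
A sketch of read(), again assuming the table from the earlier Table.load example:

# Read the whole table, cast according to the schema
rows <- table$read()

# Keyed rows, limited to the first 10
first_rows <- table$read(keyed = TRUE, limit = 10)
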
infer(limit=100)

Infers a schema for the table. The inferred Table Schema is set to table$schema, based on the table data.

limit

Limit for the row sample size - number
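
A sketch of inferring a schema from a sample of the data (sample size chosen arbitrarily; table as in the earlier examples):

# Infer a Table Schema from up to 50 rows and store it in table$schema
table$infer(limit = 50)
table$schema
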
save(target)

Saves the data source locally to a file in CSV format, using a comma (,) delimiter.

target

String path where to save the table data
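
A sketch of saving the table (the target path is hypothetical):

# Write the table data to a local CSV file with a comma delimiter
table$save("output/cities.csv")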

Properties

headers

Returns data source headers

schema

Returns schema class instance
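
For example (values shown are hypothetical; table as in the earlier examples):

table$headers   # e.g. list("id", "name", "population")
table$schema    # the Schema class instance describing the table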

Details

A table is a core concept in the world of tabular data. It represents data together with its metadata (Table Schema). Tabular data consists of a set of rows, and each row has a set of fields (columns). We usually expect each row to have the same set of fields, and thus we can talk about the fields of the table as a whole. In the case of tables in spreadsheets or CSV files, we often interpret the first row as a header row giving the names of the fields. By contrast, in other situations, e.g. tables in SQL databases, the field names are explicitly designated.

In order to talk about the representation and processing of tabular data from text-based sources, it is useful to introduce the concepts of the physical and the logical representation of data.

The physical representation of data refers to the representation of data as text on disk, for example, in a CSV or JSON file. This representation may have some type information (JSON, where the primitive types that JSON supports can be used) or not (CSV, where all data is represented in string form).

The logical representation of data refers to the "ideal" representation of the data in terms of primitive types, data structures, and relations, all as defined by the specification. We could say that the specification is about the logical representation of data, as well as about ways in which to handle conversion of a physical representation to a logical one.

We'll explicitly refer to either the physical or logical representation in places where it prevents ambiguity for those engaging with the specification, especially implementors.

For example, constraints should be tested on the logical representation of data, whereas a property like missingValues applies to the physical representation of the data.
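
As an illustration, in the hypothetical field descriptor below missingValues lists the physical strings that stand for missing data, while the constraint is checked against the cast (logical) value:

descriptor <- list(
  fields = list(
    list(
      name = "population",
      type = "integer",
      constraints = list(minimum = 0)  # tested on the logical (cast) value
    )
  ),
  missingValues = list("", "n/a")      # matched against the physical strings
)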

The jsonlite package is used internally to convert JSON data to list objects. The input parameters of functions can be JSON strings, files or lists, and the outputs are lists, so the data can easily be processed further in the R environment and exported as desired. For more details about handling JSON, see the jsonlite documentation or its vignettes.
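
For instance, a schema given as a JSON string can be converted to a list with jsonlite (the descriptor itself is made up):

library(jsonlite)

schema_json <- '{"fields": [{"name": "id", "type": "integer"},
                            {"name": "name", "type": "string"}]}'

# fromJSON turns the JSON string into a nested list that can be processed in R
schema_list <- jsonlite::fromJSON(schema_json, simplifyVector = FALSE)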

The future package is also used to load and create Table and Schema classes asynchronously. To retrieve the actual result of the loaded Table or Schema, call value() on the variable in which you stored the loaded Table/Schema. For more details about the future package and sequential and parallel processing, see its documentation.
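
A sketch of this pattern (the source path is hypothetical; the same approach applies when loading a Schema):

library(tableschema.r)
library(future)

def <- Table.load("data/cities.csv")  # returns a future, not a Table

future::resolved(def)                 # TRUE once loading has completed
table <- future::value(def)           # retrieve the loaded Table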

The Examples section of each function shows how to use the jsonlite and future packages with tableschema.r.

Language

The key words MUST, MUST NOT, REQUIRED, SHALL, SHALL NOT, SHOULD, SHOULD NOT, RECOMMENDED, MAY, and OPTIONAL in this package's documentation are to be interpreted as described in RFC 2119.

See also

Methods

Public methods


Method new()

Usage

Table$new(src, schema = NULL, strict = FALSE, headers = 1)

Arguments

schema

data schema in all forms supported by Schema class

strict

strictness option TRUE or FALSE, to pass to Schema constructor

headers

data source headers, one of:

  • row number containing the headers (source should contain a header row)

  • list of headers (source should NOT contain a header row)
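
Both forms in a short sketch (paths are hypothetical):

library(tableschema.r)
library(future)

# Headers taken from the first row of the source
t1 <- value(Table.load("data/with_header.csv", headers = 1))

# Source without a header row: supply the field names as a list
t2 <- value(Table.load("data/no_header.csv",
                       headers = list("id", "name", "population")))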


Method infer()

Usage

Table$infer(limit = 100)


Method iter()

Usage

Table$iter(keyed, extended, cast = TRUE, relations = FALSE, stream = FALSE)


Method read()

Usage

Table$read(
  keyed = FALSE,
  extended = FALSE,
  cast = TRUE,
  relations = FALSE,
  limit = NULL
)


Method save()

Usage

Table$save(connection)


Method clone()

The objects of this class are cloneable with this method.

Usage

Table$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.
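
A short sketch, assuming standard R6 cloning semantics (table as in the earlier examples):

# Shallow clone: nested R6 objects such as the schema are shared with the original
table_copy <- table$clone()

# Deep clone: nested R6 objects are cloned as well
table_deep <- table$clone(deep = TRUE)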