Inputs and outputs

These properties can be used to retrieve data from or pass data back to KNIME Analytics Platform. The length of the input and output lists depends on the number of input and output ports of the node.

Example: If you have a Python Script (Labs) node configured with two input tables and one input object, you can access the two tables via knime_io.input_tables[0] and knime_io.input_tables[1], and the input object via knime_io.input_objects[0].

knime_io.flow_variables: Dict[str, Any] = {}

A dictionary of flow variables provided by the KNIME workflow. New flow variables can be added to the output of the node by adding them to the dictionary. Supported flow variable types are numbers, strings, booleans and lists thereof.
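
Example (a minimal sketch; the flow variable names are hypothetical):

import knime_io

# Read an incoming flow variable, with a fallback if it is absent.
input_path = knime_io.flow_variables.get("input_path", "")

# Add a new flow variable to the node's output.
knime_io.flow_variables["row_threshold"] = 100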

knime_io.input_objects: List = <knime_table._FixedSizeListView object>

A list of input objects of this script node using zero-based indices. This list has a fixed size, which is determined by the number of input object ports configured for this node. Input objects are Python objects that are passed in from another Python script node’s output_object port. This can, for instance, be used to pass trained models between Python nodes. If no input is given, the list exists but is empty.

knime_io.input_tables: List[ReadTable] = <knime_table._FixedSizeListView object>

The input tables of this script node. This list has a fixed size, which is determined by the number of input table ports configured for this node. Tables are available in the same order as the port connectors are displayed alongside the node (from top to bottom), using zero-based indexing. If no input is given, the list exists but is empty.

knime_io.output_images: List = <knime_table._FixedSizeListView object>

The output images of this script node. This list has a fixed size, which is determined by the number of output images configured for this node. The value passed to the output port should be an array of bytes encoding an SVG or PNG image.

Example:

import io
from matplotlib import pyplot

data = knime_io.input_tables[0].to_pandas()
buffer = io.BytesIO()

pyplot.figure()
pyplot.plot('x', 'y', data=data)
pyplot.savefig(buffer, format='svg')

knime_io.output_images[0] = buffer.getvalue()

knime_io.output_objects: List = <knime_table._FixedSizeListView object>

The output objects of this script node. This list has a fixed size, which is determined by the number of output object ports configured for this node. Each output object can be an arbitrary Python object as long as it can be pickled. Use this to, for example, pass a trained model to another Python script node.

Example:

import torchvision

model = torchvision.models.resnet18()
...
# train/finetune the model
...
knime_io.output_objects[0] = model

knime_io.output_tables: List[WriteTable] = <knime_table._FixedSizeListView object>

The output tables of this script node. This list has a fixed size, which is determined by the number of output table ports configured for this node. You should assign a WriteTable or BatchWriteTable to each output port of this node. See the factory methods knime_io.write_table() and knime_io.batch_write_table() below.

Example:

knime_io.output_tables[0] = knime_io.write_table(my_pandas_df)

Factory methods

Use these methods to fill the knime_io.output_tables.

knime_io.batch_write_table() → BatchWriteTable

Factory method to create an empty BatchWriteTable that can be filled sequentially batch by batch (see Example).

Example:

table = knime_io.batch_write_table()
table.append(df_1)
table.append(df_2)
knime_io.output_tables[0] = table

knime_io.write_table(data: Union[ReadTable, pandas.DataFrame, pyarrow.Table], sentinel: Optional[Union[str, int]] = None) → WriteTable

Factory method to create a WriteTable given a pandas.DataFrame or a pyarrow.Table. If the input is a pyarrow.Table, its first column must contain unique row identifiers of type ‘string’.

Example:

knime_io.output_tables[0] = knime_io.write_table(my_pandas_df, sentinel="min")

Parameters
  • data – A ReadTable, pandas.DataFrame or a pyarrow.Table

  • sentinel

    Interpret the following values in integral columns as missing values:

    • "min": the minimum int32 or int64 value, depending on the type of the column

    • "max": the maximum int32 or int64 value, depending on the type of the column

    • a specific integer value that should be interpreted as a missing value

Classes

class knime_table.Batch

A batch is a part of a table that contains data. A batch should always fit into system memory, so all methods accessing the data are processed immediately and synchronously.

It can be sliced before the data is accessed as pandas.DataFrame or pyarrow.RecordBatch.

__getitem__(slicing: Union[slice, Tuple[slice, Union[slice, List[int], List[str]]]]) → SlicedDataView

Creates a view of this batch by slicing specific rows and columns. The slicing syntax is similar to that of numpy arrays, but columns can also be addressed as index lists or via a list of column names.

Parameters
  • row_slice – A slice object describing which rows to use.

  • column_slice – Optional. A slice object, a list of column indices, or a list of column names.

Returns

A SlicedDataView that can be converted to pandas or pyarrow.

Example:

full_batch = batch[:] # Slice/get the full batch

# Slicing works for rows and columns. Column slices can be defined with ints or with column names
row_sliced_batch = batch[:100] # Get the first 100 rows of the batch
column_sliced_batch = batch[:, ["name", "age"]] # Get all rows of the columns "name" and "age"
row_and_column_sliced_batch = batch[:100, 1:5] # Get the first 100 rows of columns 1,2,3,4

# The resulting sliced batches cannot be sliced further, but they can be converted to pandas or pyarrow.

abstract property column_names: Tuple[str, ...]

Returns the list of column names.

abstract property num_columns: int

Returns the number of columns in the table.

abstract property num_rows: int

Returns the number of rows in the table.

If the table is not completely available yet because batches are still appended to it, querying the number of rows blocks until all data is available.

property shape: Tuple[int, int]

Returns a tuple in the form (numRows, numColumns) representing the shape of this table.

If the table is not completely available yet because batches are still appended to it, querying the shape blocks until all data is available.

abstract to_pandas(sentinel: Optional[Union[str, int]] = None) → pandas.DataFrame

Access the batch or table as a pandas.DataFrame.

Parameters

sentinel

Replace missing values in integral columns by the given value, one of:

  • "min": the minimum int32 or int64 value, depending on the type of the column

  • "max": the maximum int32 or int64 value, depending on the type of the column

  • an integer value that should be inserted for each missing value

Raises

IndexError – If rows or columns were requested outside of the available shape
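
Example (a minimal sketch of the sentinel parameter, using the first batch of the first input table):

first_batch = next(knime_io.input_tables[0].batches())
# Missing cells in integral columns are replaced by the minimum int32/int64 value.
df = first_batch.to_pandas(sentinel="min")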

abstract to_pyarrow(sentinel: Optional[Union[str, int]] = None) → Union[pyarrow.RecordBatch, pyarrow.Table]

Access this batch or table as a pyarrow.RecordBatch or pyarrow.Table. The returned type depends on the type of the underlying object: a Batch is returned as a pyarrow.RecordBatch, while a ReadTable is returned as a pyarrow.Table.

Parameters

sentinel

Replace missing values in integral columns by the given value, one of:

  • "min": the minimum int32 or int64 value, depending on the type of the column

  • "max": the maximum int32 or int64 value, depending on the type of the column

  • an integer value that should be inserted for each missing value

Raises

IndexError – If rows or columns were requested outside of the available shape

class knime_table.ReadTable

A KNIME ReadTable provides access to the data provided from KNIME, either in full (must fit into memory) or split into row-wise batches.

__getitem__(slicing: Union[slice, Tuple[slice, Union[slice, List[int], List[str]]]]) → SlicedDataView

Creates a view of this ReadTable by slicing rows and columns. The slicing syntax is similar to that of numpy arrays, but columns can also be addressed as index lists or via a list of column names.

The returned sliced_table cannot be sliced further, but it can be converted to pandas or pyarrow.

Parameters
  • row_slice – A slice object describing which rows to use.

  • column_slice – Optional. A slice object, a list of column indices, or a list of column names.

Returns

a SlicedDataView that can be converted to pandas or pyarrow.

Example:

row_sliced_table = table[:100] # Get the first 100 rows
column_sliced_table = table[:, ["name", "age"]] # Get all rows of the columns "name" and "age"
row_and_column_sliced_table = table[:100, 1:5] # Get the first 100 rows of columns 1,2,3,4

df = row_and_column_sliced_table.to_pandas()

__len__() → int

Returns the number of batches in this table.

abstract batches() → Iterator[Batch]

Returns a generator over the batches in this table. If the generator is advanced to a batch that is not yet available, it blocks until the data is present. len(my_read_table) gives the static number of batches in the table, which is not updated.

Example:

processed_table = knime_io.batch_write_table()
for batch in knime_io.input_tables[0].batches():
    input_batch = batch.to_pandas()
    # process the batch
    processed_table.append(input_batch)

abstract property column_names: Tuple[str, ...]

Returns the list of column names.

abstract property num_batches: int

Returns the number of batches in this table.

If the table is not completely available yet because batches are still appended to it, querying the number of batches blocks until all data is available.

abstract property num_columns: int

Returns the number of columns in the table.

abstract property num_rows: int

Returns the number of rows in the table.

If the table is not completely available yet because batches are still appended to it, querying the number of rows blocks until all data is available.

property shape: Tuple[int, int]

Returns a tuple in the form (numRows, numColumns) representing the shape of this table.

If the table is not completely available yet because batches are still appended to it, querying the shape blocks until all data is available.
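
Example (a minimal sketch, using the first input table):

num_rows, num_columns = knime_io.input_tables[0].shape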

abstract to_pandas(sentinel: Optional[Union[str, int]] = None) → pandas.DataFrame

Access the batch or table as a pandas.DataFrame.

Parameters

sentinel

Replace missing values in integral columns by the given value, one of:

  • "min": the minimum int32 or int64 value, depending on the type of the column

  • "max": the maximum int32 or int64 value, depending on the type of the column

  • an integer value that should be inserted for each missing value

Raises

IndexError – If rows or columns were requested outside of the available shape

abstract to_pyarrow(sentinel: Optional[Union[str, int]] = None) → Union[pyarrow.RecordBatch, pyarrow.Table]

Access this batch or table as a pyarrow.RecordBatch or pyarrow.Table. The returned type depends on the type of the underlying object: a Batch is returned as a pyarrow.RecordBatch, while a ReadTable is returned as a pyarrow.Table.

Parameters

sentinel

Replace missing values in integral columns by the given value, one of:

  • "min": the minimum int32 or int64 value, depending on the type of the column

  • "max": the maximum int32 or int64 value, depending on the type of the column

  • an integer value that should be inserted for each missing value

Raises

IndexError – If rows or columns were requested outside of the available shape

class knime_table.WriteTable

A table that can be filled as a whole.

abstract property column_names: Tuple[str, ...]

Returns the list of column names.

abstract property num_batches: int

Returns the number of batches in this table.

If the table is not completely available yet because batches are still appended to it, querying the number of batches blocks until all data is available.

abstract property num_columns: int

Returns the number of columns in the table.

abstract property num_rows: int

Returns the number of rows in the table.

If the table is not completely available yet because batches are still appended to it, querying the number of rows blocks until all data is available.

property shape: Tuple[int, int]

Returns a tuple in the form (numRows, numColumns) representing the shape of this table.

If the table is not completely available yet because batches are still appended to it, querying the shape blocks until all data is available.

class knime_table.BatchWriteTable

A table that can be filled batch by batch.

abstract append(data: Union[Batch, pandas.DataFrame, pyarrow.RecordBatch], sentinel: Optional[Union[str, int]] = None)

Appends a batch with the given data to the end of this table. The number of columns, as well as their data types, must match those of the previous batches in this table. Note that this cannot take a pyarrow.Table as input; with pyarrow, it can only process record batches, which can be created from an input table as shown in the example below.

Example:

processed_table = knime_io.batch_write_table()
for batch in knime_io.input_tables[0].batches():
    input_batch = batch.to_pandas()
    # process the batch
    processed_table.append(input_batch)

Parameters
  • data – A batch, a pandas.DataFrame or a pyarrow.RecordBatch

  • sentinel

    Only applies if data is a pandas.DataFrame or pyarrow.RecordBatch. Interpret the following values in integral columns as missing values:

    • "min": the minimum int32 or int64 value, depending on the type of the column

    • "max": the maximum int32 or int64 value, depending on the type of the column

    • a specific integer value that should be interpreted as a missing value

Raises

ValueError – If the new batch does not have the same columns as the previous batches in this WriteTable.

abstract property column_names: Tuple[str, ...]

Returns the list of column names.

static create() → BatchWriteTable

Create an empty BatchWriteTable.

abstract property num_batches: int

Returns the number of batches in this table.

If the table is not completely available yet because batches are still appended to it, querying the number of batches blocks until all data is available.

abstract property num_columns: int

Returns the number of columns in the table.

abstract property num_rows: int

Returns the number of rows in the table.

If the table is not completely available yet because batches are still appended to it, querying the number of rows blocks until all data is available.

property shape: Tuple[int, int]

Returns a tuple in the form (numRows, numColumns) representing the shape of this table.

If the table is not completely available yet because batches are still appended to it, querying the shape blocks until all data is available.

Python Extension Development

Nodes

These classes can be used by developers to implement their own Python nodes for KNIME. For a more detailed description, see the Pure Python Node Extensions Guide.

class knime_node.PythonNode

Extend this class to provide a pure Python based node extension to KNIME Analytics Platform.

Users can either use the decorators @kn.input_table, @kn.input_binary, @kn.output_table, @kn.output_binary, and @kn.output_view, or populate the input_ports, output_ports, and output_view attributes.

Use Python’s logging facilities and their warning() and error() methods to write warnings and errors to the KNIME console. Messages logged via info() and debug() will only show up in the KNIME console if the log level in KNIME is configured to show them.

Example:

import logging
import knime_extension as knext

LOGGER = logging.getLogger(__name__)

@knext.node(name="Pure Python Node", node_type=knext.NodeType.LEARNER, icon_path="../icons/icon.png", category="/")
@knext.input_table(name="Input Data", description="We read data from here")
@knext.output_table(name="Output Data", description="Whatever the node has produced")
class TemplateNode(knext.PythonNode):
    # A Python node has a description.

    def configure(self, configure_context, table_schema):
        LOGGER.info("Configuring node")
        return table_schema

    def execute(self, exec_context, table):
        return table

abstract configure(config_context: ConfigurationContext, *inputs)

Configure this Python node.

Parameters
  • config_context – The ConfigurationContext providing KNIME utilities during configuration

  • *inputs – Each input table spec or binary port spec will be added as parameter, in the same order that the ports were defined.

Returns

Either a single spec, or a tuple or list of specs. The number of specs must match the number of defined output ports, and they must be returned in this order.

Raises

InvalidConfigurationError – If the input configuration does not satisfy this node’s requirements.
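
Example (a minimal sketch of a configure() implementation that declares a fixed output schema, using the knime_schema helpers described under Data Types below; the column names are hypothetical):

import knime_schema as ks

def configure(self, config_context, input_schema):
    # Declare one output table with an int32 "id" and a double "score" column.
    return ks.Schema.from_types([ks.int32(), ks.double()], ["id", "score"])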

abstract execute(exec_context: ExecutionContext, *inputs)

Execute this Python node.

Parameters
  • exec_context – The ExecutionContext providing KNIME utilities during execution

  • *inputs – Each input table or binary port object will be added as parameter, in the same order that the ports were defined. Tables will be provided as a kn.Table, while binary data will be a plain Python bytes object.

Returns

Either a single output object (table or binary), or a tuple or list of objects. The number of output objects must match the number of defined output ports, and they must be returned in this order. Tables must be provided as a kn.Table or kn.BatchOutputTable, while binary data should be returned as a plain Python bytes object.
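
Example (a minimal sketch for a node with one input and one output table; the "score" column is hypothetical):

import knime_node as kn

def execute(self, exec_context, input_table):
    # Convert to pandas, transform, and wrap the result for the output port.
    df = input_table.to_pandas()
    df["score"] = df["score"] * 2  # hypothetical transformation
    return kn.Table.from_pandas(df)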

A node has a type:

class knime_node.NodeType(value)

Defines the different node types that are available for Python based nodes.

LEARNER = 'Learner'

A node learning a model that is typically consumed by a PREDICTOR.

MANIPULATOR = 'Manipulator'

A node that manipulates data.

PREDICTOR = 'Predictor'

A node that predicts something, typically using a model provided by a LEARNER.

SINK = 'Sink'

A node consuming data.

SOURCE = 'Source'

A node producing data.

VISUALIZER = 'Visualizer'

A node that visualizes data.

A node’s configure method receives a configuration context that lets you interact with KNIME.

class knime_node.ConfigurationContext(java_config_ctx, flow_variables)

The ConfigurationContext provides utilities to communicate with KNIME during a node’s configure() method.

property flow_variables: Dict[str, Any]

The flow variables coming in from KNIME as a dictionary with string keys. The dictionary can be edited and supports flow variables of the following types:

  • bool

  • list(bool)

  • float

  • list(float)

  • int

  • list(int)

  • str

  • list(str)

set_warning(message: str) → None

Sets a warning on the node.

Parameters

message – the warning message to display on the node
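
Example (a small sketch inside configure(); the flow variable name is hypothetical):

def configure(self, config_context, input_schema):
    if "model_path" not in config_context.flow_variables:
        config_context.set_warning("No model_path flow variable set, using defaults")
    return input_schema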

A node’s execute method receives an execution context that lets you interact with KNIME and e.g. check whether the user has cancelled the execution of your Python node.

class knime_node.ExecutionContext(java_ctx, flow_variables)

The ExecutionContext provides utilities to communicate with KNIME during a node’s execute() method.

property flow_variables: Dict[str, Any]

The flow variables coming in from KNIME as a dictionary with string keys. The dictionary can be edited and supports flow variables of the following types:

  • bool

  • list(bool)

  • float

  • list(float)

  • int

  • list(int)

  • str

  • list(str)

is_canceled() → bool

Returns True if this node’s execution has been canceled from KNIME. Nodes can check for this and return early if the execution does not need to finish. Raising a RuntimeError in that case is encouraged.

set_progress(progress: float, message: Optional[str] = None)

Set the progress of the execution.

Note that the progress that can be set here is 80% of the total progress of a node execution. The first and last 10% are reserved for data transfer and will be set by the framework.

Parameters
  • progress – a floating point number between 0.0 and 1.0

  • message – an optional message to display in KNIME with the progress

set_warning(message: str) → None

Sets a warning on the node.

Parameters

message – the warning message to display on the node
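
Example (a minimal sketch of a batch loop that reports progress and honors cancellation, assuming one input and one output table):

import knime_node as kn

def execute(self, exec_context, input_table):
    output_table = kn.BatchOutputTable.create()
    batches = list(input_table.to_batches())
    for i, batch in enumerate(batches):
        if exec_context.is_canceled():
            raise RuntimeError("Execution canceled")
        exec_context.set_progress(i / len(batches), f"Batch {i} of {len(batches)}")
        output_table.append(kn.Table.from_pandas(batch.to_pandas()))
    return output_table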

Decorators

These decorators can be used to easily configure your Python node.

knime_node.node(name: str, node_type: NodeType, icon_path: str, category: str, after: Optional[str] = None, id: Optional[str] = None) → Callable

Use this decorator to annotate a PythonNode class, or a function that creates a PythonNode instance, that should correspond to a node in KNIME.

knime_node.input_binary(name: str, description: str, id: str)

Use this decorator to define a bytes-serialized port object input of a node.

Parameters
  • name

  • description

  • id – A unique ID identifying the type of the port. Only ports with equal IDs can be connected in KNIME.

knime_node.input_table(name: str, description: str)

Use this decorator to define an input port of type “Table” of a node.

knime_node.output_binary(name: str, description: str, id: str)

Use this decorator to define a bytes-serialized port object output of a node.

Parameters
  • name

  • description

  • id – A unique ID identifying the type of the port. Only ports with equal IDs can be connected in KNIME.

knime_node.output_table(name: str, description: str)

Use this decorator to define an output port of type “Table” of a node.

knime_node.output_view(name: str, description: str)

Use this decorator to specify that this node produces a view.
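
Example (a sketch combining table and binary ports; the node name and the port id string are arbitrary):

import knime_extension as knext

@knext.node(name="Model Applier", node_type=knext.NodeType.PREDICTOR, icon_path="../icons/icon.png", category="/")
@knext.input_binary(name="Model", description="A pickled model", id="org.example.model")
@knext.input_table(name="Data", description="Rows to score")
@knext.output_table(name="Scored Data", description="Input rows with predictions")
class ApplierNode(knext.PythonNode):
    ...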

Tables

Table and Schema are the two classes used to communicate between Python and KNIME: tabular data (Table) during execute, and the table structure (Schema) during configure.

class knime_node.Table

This class serves as the public API to create KNIME tables either from pandas or pyarrow. These tables can then be sent back to KNIME. This class has to be instantiated by calling either from_pyarrow() or from_pandas().

__getitem__(slicing: Union[slice, List[int], List[str], Tuple[Union[slice, List[int], List[str]], slice]]) → _TabularView

Creates a view of this Table by slicing rows and columns. The slicing syntax is similar to that of numpy arrays, but columns can also be addressed as index lists or via a list of column names.

The syntax is [column_slice, row_slice]. Note that this is the exact opposite of the order used by the Python Script (Labs) node’s ReadTable.

Parameters
  • column_slice – A column index, a column name, a slice object, a list of column indices, or a list of column names.

  • row_slice – Optional: A slice object describing which rows to use.

Returns

A _TabularView representing a slice of the original Table

Example:

row_sliced_table = table[:, :100] # Get the first 100 rows
column_sliced_table = table[["name", "age"]] # Get all rows of the columns "name" and "age"
row_and_column_sliced_table = table[1:5, :100] # Get the first 100 rows of columns 1,2,3,4

static from_pandas(data: pandas.DataFrame, sentinel: Optional[Union[str, int]] = None)

Factory method to create a Table given a pandas.DataFrame. The index of the data frame will be used as RowKey by KNIME.

Example:

Table.from_pandas(my_pandas_df, sentinel="min")

Parameters
  • data – A pandas.DataFrame

  • sentinel

    Interpret the following values in integral columns as missing values:

    • "min": the minimum int32 or int64 value, depending on the type of the column

    • "max": the maximum int32 or int64 value, depending on the type of the column

    • a specific integer value that should be interpreted as a missing value

static from_pyarrow(data: pyarrow.Table, sentinel: Optional[Union[str, int]] = None)

Factory method to create a Table given a pyarrow.Table. The first column of the pyarrow.Table must contain unique row identifiers of type ‘string’.

Example:

Table.from_pyarrow(my_pyarrow_table, sentinel="min")

Parameters
  • data – A pyarrow.Table

  • sentinel

    Interpret the following values in integral columns as missing values:

    • "min": the minimum int32 or int64 value, depending on the type of the column

    • "max": the maximum int32 or int64 value, depending on the type of the column

    • a specific integer value that should be interpreted as a missing value
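
For illustration, a sketch that builds a pyarrow.Table with the required string row-identifier column in front (the column names are arbitrary):

import pyarrow as pa

row_keys = pa.array(["Row0", "Row1", "Row2"], type=pa.string())
values = pa.array([1, 2, 3])
my_pyarrow_table = pa.table([row_keys, values], names=["RowKey", "value"])

Table.from_pyarrow(my_pyarrow_table)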

abstract property schema: Schema

The schema of this table, containing column names, types, and potentially metadata

to_batches() → Iterator[Table]

Returns a generator over the batches in this table. A batch is a part of the table with all columns, but only a subset of the rows. A batch should always fit into memory (max size currently 64 MB). The table passed to execute() is already present in batches, so accessing the data this way is very efficient.

Example:

output_table = BatchOutputTable.create()
for batch in my_table.to_batches():
    input_batch = batch.to_pandas()
    # process the batch
    output_table.append(Table.from_pandas(input_batch))

to_pandas(sentinel: Optional[Union[str, int]] = None) → pandas.DataFrame

Access this table as a pandas.DataFrame.

Parameters

sentinel

Replace missing values in integral columns by the given value, one of:

  • "min": the minimum int32 or int64 value, depending on the type of the column

  • "max": the maximum int32 or int64 value, depending on the type of the column

  • an integer value that should be inserted for each missing value

to_pyarrow(sentinel: Optional[Union[str, int]] = None) → pyarrow.Table

Access this table as a pyarrow.Table.

Parameters

sentinel

Replace missing values in integral columns by the given value, one of:

  • "min": the minimum int32 or int64 value, depending on the type of the column

  • "max": the maximum int32 or int64 value, depending on the type of the column

  • an integer value that should be inserted for each missing value

class knime_node.BatchOutputTable

An output table generated by combining smaller tables (also called batches).

It does not provide a means to continue working with the data, but is meant to be used as the return value of a node’s execute() method.

abstract append()

Append a batch to this output table

static create()

Create an empty BatchOutputTable

static from_batches(generator)

Create an output table where each batch is provided by a generator

abstract property num_batches: int

The number of batches written to this output table
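
Example (a minimal sketch of from_batches with a generator, assuming Table and BatchOutputTable are available as in the earlier examples):

def process_batches(table):
    for batch in table.to_batches():
        df = batch.to_pandas()
        # process the batch
        yield Table.from_pandas(df)

output_table = BatchOutputTable.from_batches(process_batches(my_table))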

class knime_schema.Schema(ktypes: List[KnimeType], names: List[str], metadata: Optional[List] = None)

A schema defines the data types and names of the columns inside a table. Additionally, it can hold metadata for the individual columns.

__getitem__(slicing: Union[slice, List[int], List[str]]) → _ColumnarView

Creates a view of this Table or Schema by slicing columns. The slicing syntax is similar to that of numpy arrays, but columns can also be addressed as index lists or via a list of column names.

Parameters

column_slice – A column index, a column name, a slice object, a list of column indices, or a list of column names. For single indices, the view will create a “Column” object. For slices or lists of indices, a new Schema will be returned.

Returns

A _ColumnarView representing a slice of the original Schema or Table.

Examples:

Get columns 1,2,3,4: sliced_schema = schema[1:5]

Get the columns “name” and “age”: sliced_schema = schema[["name", "age"]]

property column_names: List[str]

Return the list of column names

classmethod from_columns(columns: List[Column])

Create a schema from a list of columns

classmethod from_knime_dict(table_schema: dict) → Schema

Construct a Schema from a dict that was retrieved from KNIME in JSON encoded form as the input to a node’s configure() method.

KNIME provides table information with a RowKey column at the beginning, which we drop before returning the created schema.

classmethod from_types(ktypes: List[KnimeType], names: List[str], metadata: Optional[List] = None)

Create a schema from a list of column data types, names and metadata
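
Example (a small sketch using the type helpers described under Data Types below; the column names are arbitrary):

import knime_schema as ks

schema = ks.Schema.from_types([ks.int32(), ks.double()], ["id", "score"])

# Equivalent, built from Column objects:
schema = ks.Schema.from_columns(
    [ks.Column(ks.int32(), "id"), ks.Column(ks.double(), "score")]
)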

property num_columns

The number of columns in this schema

to_knime_dict() → Dict

Convert this Schema into a dict, which can then be JSON encoded and sent to KNIME as the result of a node’s configure() method.

Because KNIME expects a row key column as the first column of the schema, but we don’t include this in the KNIME Python table schema, we insert a row key column here.

Raises

RuntimeError – if duplicate column names are detected

class knime_schema.Column(ktype: KnimeType, name: str, metadata=None)

A column inside a table schema consists of a KNIME data type, a column name, and optional metadata.

__init__(ktype: KnimeType, name: str, metadata=None)

Construct a Column from type, name and optional metadata.

Parameters
  • ktype – The knime type of the column

  • name – The name of the column. May not be empty.

Raises
  • TypeError – if the type is not a KNIME type

  • ValueError – if the name is empty

Data Types

These helper functions create KNIME-compatible data types, for instance when a new column is created.

knime_schema.int32()

Create a KNIME integer type with 32 bits

knime_schema.int64()

Create a KNIME integer type with 64 bits

knime_schema.double()

Create a KNIME floating point type with double precision (64 bits)

knime_schema.bool_()

Create a KNIME boolean type

knime_schema.blob(dict_encoding_key_type: Optional[DictEncodingKeyType] = None)

Create a KNIME blob type for binary data of variable length

Parameters

dict_encoding_key_type – The key type to use for dictionary encoding. If this is None (the default), no dictionary encoding will be used. Dictionary encoding helps to reduce storage space and improve read/write performance for columns with repeating values, such as categorical data.

knime_schema.list_(inner_type: KnimeType)

Create a KNIME type that is a list of the given inner type

Parameters

inner_type – The type of the elements in the list. Must be a KnimeType

knime_schema.struct(*inner_types)

Create a KNIME structured data type where each given argument represents a field of the struct.

Parameters

inner_types – The argument list of this method defines the fields in this structured data type. Each inner type must be a KNIME type

knime_schema.logical(value_type)

Create a KNIME logical data type of the given Python value type.

Parameters

value_type – The type of the values inside this column. A knime_types.PythonValueFactory must be registered for this type.

Raises

TypeError – if no PythonValueFactory has been registered for this value type with knime_types.register_python_value_factory
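
For illustration, a short sketch composing these helpers:

import knime_schema as ks

int_list = ks.list_(ks.int64())              # a list-of-int64 column type
point = ks.struct(ks.double(), ks.double())  # a struct with two double fields
flag = ks.bool_()                            # a boolean column type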