API Reference

This is the full API reference for all user-facing classes and functions in the pliers package.

Converters (pliers.converters)

The Converter hierarchy contains Transformer classes that take an object of arbitrary class (but almost always a Stim subclass) as input, and return a Stim instance (of a different class) as output.
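
For illustration, a minimal sketch of applying a Converter directly (the file name is hypothetical)::

    from pliers.stimuli import VideoStim
    from pliers.converters import VideoToAudioConverter

    video = VideoStim('movie.mp4')  # hypothetical local file

    # Converters follow the standard Transformer interface:
    # transform() takes a Stim and returns a Stim of another class
    audio = VideoToAudioConverter().transform(video)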

Classes:

ComplexTextIterator([name])

Iterates elements in a ComplexTextStim as TextStims.

ExtractorResultToSeriesConverter([name])

Converts an ExtractorResult instance to a list of SeriesStims.

IBMSpeechAPIConverter([username, password, ...])

Uses the IBM Watson Speech to Text API to run speech-to-text transcription on an audio file.

GoogleSpeechAPIConverter([language_code, ...])

Uses the Google Speech API to do speech-to-text transcription.

GoogleVisionAPITextConverter([...])

Detects text within images using the Google Cloud Vision API.

MicrosoftAPITextConverter([language, ...])

Detects text within images using the Microsoft Vision API.

RevAISpeechAPIConverter([access_token, ...])

Uses the Rev AI speech-to-text API to transcribe an audio file.

TesseractConverter([name])

Uses the Tesseract library to extract text from images.

VideoFrameCollectionIterator([name])

Iterates frames in a VideoFrameCollectionStim as ImageStims.

VideoFrameIterator([name])

Iterates frames in a VideoStim as ImageStims.

VideoToAudioConverter([name])

Converts a VideoStim to an AudioStim by extracting the audio track using moviepy.

VideoToComplexTextConverter([steps])

Converts a VideoStim directly to a ComplexTextStim.

VideoToTextConverter([steps])

Converts a VideoStim directly to a TextStim.

WitTranscriptionConverter([api_key, rate_limit])

Speech-to-text transcription via the Wit.ai API.

Functions:

get_converter(in_type, out_type, *args, **kwargs)

Scans the list of available Converters and returns an instantiation of the first one whose input and output types match those passed in.
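
A minimal usage sketch (the file name is hypothetical)::

    from pliers.stimuli import VideoStim, AudioStim
    from pliers.converters import get_converter

    # Returns an instance of the first registered Converter that
    # maps VideoStim inputs to AudioStim outputs
    conv = get_converter(VideoStim, AudioStim)
    audio = conv.transform(VideoStim('movie.mp4'))  # hypothetical file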

Dataset utilities (pliers.datasets)

The datasets module contains utilities for working with datasets (mostly remote text datasets).

Functions:

fetch_dictionary(name[, url, format, index, ...])

Retrieve a dictionary of text norms from the web or local storage.
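
A minimal sketch, assuming 'affect' is one of the predefined dictionary names::

    from pliers.datasets import fetch_dictionary

    # Downloads (or loads from the local cache) a pandas DataFrame
    # of per-word norms, indexed by word
    affect = fetch_dictionary('affect')  # dictionary name assumed available
    print(affect.head())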

Diagnostic utilities (pliers.diagnostics)

The diagnostics module contains functions for computing basic metrics that may be of use in determining the quality of Extractor results.

Classes:

Diagnostics(data[, columns])

Functions:

correlation_matrix(df)

Returns a pandas DataFrame with the pair-wise correlations of the columns.

eigenvalues(df)

Returns a pandas Series with eigenvalues of the correlation matrix.

condition_indices(df)

Returns a pandas Series with condition indices of the df columns.

variance_inflation_factors(df)

Computes the variance inflation factor (VIF) for each column in the df.

mahalanobis_distances(df[, axis])

Returns a pandas Series with Mahalanobis distances for each sample on the axis.

variances(df)

Returns a pandas Series with variances for each column.
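
A minimal sketch of the standalone diagnostic functions, applied to a toy feature DataFrame::

    import pandas as pd
    from pliers.diagnostics import (correlation_matrix,
                                    variance_inflation_factors,
                                    mahalanobis_distances)

    # Stand-in for a DataFrame of extracted features,
    # e.g. the output of merge_results()
    df = pd.DataFrame({'brightness': [0.2, 0.5, 0.9, 0.4],
                       'vibrance': [0.1, 0.6, 0.8, 0.3],
                       'sharpness': [0.7, 0.2, 0.5, 0.6]})

    print(correlation_matrix(df))          # pair-wise correlations
    print(variance_inflation_factors(df))  # per-column collinearity
    print(mahalanobis_distances(df))       # per-sample outlier scores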

Extractors (pliers.extractors)

The Extractor hierarchy contains Transformer classes that take a Stim of any type as input and return extracted feature information (rather than another Stim instance).
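
A minimal sketch of the common pattern::

    from pliers.stimuli import TextStim
    from pliers.extractors import LengthExtractor

    stim = TextStim(text='pliers makes feature extraction easy')

    # Extractors return an ExtractorResult rather than another Stim;
    # to_df() converts the result to a pandas DataFrame
    result = LengthExtractor().transform(stim)
    print(result.to_df())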

Classes:

Base extractors and associated objects

Extractor([name])

Base class for all pliers Extractors.

ExtractorResult(data, stim, extractor[, ...])

Stores feature data produced by an Extractor.

Audio feature extractors

AudiosetLabelExtractor([hop_size, top_n, ...])

Extracts the probability of 521 audio event classes, based on the AudioSet corpus, using a YAMNet architecture.

BeatTrackExtractor([feature, hop_length])

Extracts beats from audio using the Librosa library's dynamic programming beat tracker (beat_track).

ChromaCENSExtractor([n_chroma])

Extracts a chroma variant "Chroma Energy Normalized" (CENS) chromagram from audio (via Librosa).

ChromaCQTExtractor([n_chroma])

Extracts a constant-Q chromagram from audio using the Librosa library.

ChromaSTFTExtractor([n_chroma])

Extracts a chromagram from an audio's waveform using the Librosa library.

HarmonicExtractor([feature, hop_length])

Extracts the harmonic elements from an audio time-series using the Librosa library.

MeanAmplitudeExtractor([name])

Mean amplitude extractor for blocks of audio with transcription.

MelspectrogramExtractor([n_mels])

Extracts mel-scaled spectrogram from audio using the Librosa library.

MFCCExtractor([n_mfcc])

Extracts Mel Frequency Cepstral Coefficients from audio using the Librosa library.

OnsetDetectExtractor([feature, hop_length])

Detects the basic onset (onset_detect) from audio using the Librosa library.

OnsetStrengthMultiExtractor([feature, ...])

Computes the spectral flux onset strength envelope across multiple channels (onset_strength_multi) from audio using the Librosa library.

PercussiveExtractor([feature, hop_length])

Extracts the percussive elements from an audio time-series using the Librosa library.

PolyFeaturesExtractor([order])

Extracts the coefficients of fitting an nth-order polynomial to the columns of an audio's spectrogram (via Librosa).

RMSExtractor([feature, hop_length])

Extracts root mean square (RMS) from audio using the Librosa library.

SpectralCentroidExtractor([feature, hop_length])

Extracts the spectral centroids from audio using the Librosa library.

SpectralBandwidthExtractor([feature, hop_length])

Extracts the p'th-order spectral bandwidth from audio using the Librosa library.

SpectralContrastExtractor([n_bands])

Extracts the spectral contrast from audio using the Librosa library.

SpectralFlatnessExtractor([feature, hop_length])

Computes the spectral flatness from audio using the Librosa library.

SpectralRolloffExtractor([feature, hop_length])

Extracts the roll-off frequency from audio using the Librosa library.

STFTAudioExtractor([frame_size, hop_size, ...])

Short-time Fourier Transform extractor.

TempogramExtractor([win_length])

Extracts a tempogram from audio using the Librosa library.

TempoExtractor([feature, hop_length])

Detects the tempo (tempo) from audio using the Librosa library.

TonnetzExtractor([feature, hop_length])

Extracts the tonal centroids (tonnetz) from audio using the Librosa library.

ZeroCrossingRateExtractor([feature, hop_length])

Extracts the zero-crossing rate of audio using the Librosa library.

Image feature extractors

BrightnessExtractor([name])

Gets the average luminosity of the pixels in the image.

ClarifaiAPIImageExtractor([access_token, ...])

Uses the Clarifai API to extract tags of images.

ClarifaiAPIVideoExtractor([access_token, ...])

Uses the Clarifai API to extract tags from videos.

FaceRecognitionFaceEncodingsExtractor(...)

Uses the face_recognition package to extract a 128-dimensional encoding for every face detected in an image.

FaceRecognitionFaceLandmarksExtractor(...)

Uses the face_recognition package to extract the locations of named features of faces in the image.

FaceRecognitionFaceLocationsExtractor(...)

Uses the face_recognition package to extract bounding boxes for all faces in an image.

GoogleVisionAPIFaceExtractor([...])

Identifies faces in images using the Google Cloud Vision API.

GoogleVisionAPILabelExtractor([...])

Labels objects in images using the Google Cloud Vision API.

GoogleVisionAPIPropertyExtractor([...])

Extracts image properties using the Google Cloud Vision API.

GoogleVisionAPISafeSearchExtractor([...])

Extracts safe search detection using the Google Cloud Vision API.

GoogleVisionAPIWebEntitiesExtractor([...])

Extracts web entities using the Google Cloud Vision API.

MicrosoftAPIFaceExtractor([face_id, ...])

Extracts face features (location, emotion, accessories, etc.).

MicrosoftVisionAPIExtractor([features, ...])

Base MicrosoftVisionAPIExtractor class.

MicrosoftVisionAPITagExtractor([...])

Extracts image tags using the Microsoft API.

MicrosoftVisionAPICategoryExtractor([...])

Extracts image categories using the Microsoft API.

MicrosoftVisionAPIImageTypeExtractor([...])

Extracts image types (clipart, etc.) using the Microsoft API.

MicrosoftVisionAPIColorExtractor([...])

Extracts image color attributes using the Microsoft API.

MicrosoftVisionAPIAdultExtractor([...])

Detects the presence of adult content using the Microsoft API.

SaliencyExtractor([name])

Determines the saliency of the image using the Itti & Koch (1998) algorithm as implemented in pySaliencyMap.

SharpnessExtractor([name])

Gets the degree of blur/sharpness of the image.

VibranceExtractor([name])

Gets the variance of the color channels of the image.

Text feature extractors

BertExtractor([pretrained_model, tokenizer, ...])

Returns encodings from the last hidden layer of BERT or similar models (ALBERT, DistilBERT, RoBERTa, CamemBERT). Excludes special tokens. Base class for the other BERT extractors.

Parameters:
pretrained_model (str): which transformer model to use; any pretrained BERT or BERT-derived (ALBERT, DistilBERT, RoBERTa, CamemBERT, etc.) model listed at https://huggingface.co/transformers/pretrained_models.html, or a path to a custom model.
tokenizer (str): type of tokenization used in the tokenization step; if it differs from the model, out-of-vocabulary tokens may be treated as unknown tokens.
model_class (str): specifies the model type; must be 'AutoModel' (encoding extractor) or 'AutoModelWithLMHead' (language model). These generic classes use the value of pretrained_model to infer the model-specific transformers class (e.g., BertModel or BertForMaskedLM for BERT, RobertaModel or RobertaForMaskedLM for RoBERTa). Fixed by each subclass.
framework (str): name of the deep learning framework to use; must be 'pt' (PyTorch) or 'tf' (TensorFlow). Defaults to 'pt'.
return_input (bool): if True, the extractor returns the encoded token and the encoded word as features.
model_kwargs (dict): named arguments for the transformer model; see https://huggingface.co/transformers/main_classes/model.html.
tokenizer_kwargs (dict): named arguments for the tokenizer; see https://huggingface.co/transformers/main_classes/tokenizer.html.

BertSequenceEncodingExtractor([...])

Extracts contextualized sequence encodings using pretrained BERT.

BertLMExtractor([pretrained_model, ...])

Returns masked word predictions from BERT (or similar) models.

BertSentimentExtractor([pretrained_model, ...])

Extracts sentiment for sequences using BERT (or similar) models.

ComplexTextExtractor([name])

Base ComplexTextStim Extractor class; all subclasses can only be applied to ComplexTextStim instances.

DictionaryExtractor(dictionary[, variables, ...])

A generic dictionary-based extractor that supports extraction of arbitrary features contained in a lookup table.

LengthExtractor([name])

Extracts the length of the text in characters.

NumUniqueWordsExtractor([tokenizer])

Extracts the number of unique words used in the text.

PartOfSpeechExtractor([batch_size])

Tags parts of speech in text with nltk.

PredefinedDictionaryExtractor(variables[, ...])

A generic Extractor that maps words onto values via one or more pre-defined dictionaries accessed via the web.

SpaCyExtractor([extractor_type, features, model])

A generic class for spaCy text extractors.

TextVectorizerExtractor([vectorizer])

Uses a scikit-learn Vectorizer to extract bag-of-features from text.

VADERSentimentExtractor()

Uses nltk's VADER lexicon to extract (0.0-1.0) values for the positive, neutral, and negative sentiment of a TextStim.

WordCounterExtractor([case_sensitive, log_scale])

Extracts the number of times each unique word occurs within the text.

WordEmbeddingExtractor(embedding_file[, ...])

An extractor that uses a word embedding file to look up embedding vectors for text.

Video feature extractors

FarnebackOpticalFlowExtractor([pyr_scale, ...])

Extracts total amount of dense optical flow between every pair of video frames.

Deep learning models

TensorFlowKerasApplicationExtractor([...])

Labels objects in images using a pretrained Inception V3 architecture implemented in TensorFlow / Keras.

TFHubImageExtractor(url_or_path[, input_dtype])

TFHub Extractor class for image models.

TFHubTextExtractor(url_or_path[, features, ...])

TFHub extractor class for text models.

Parameters:
url_or_path (str): url or path to the TFHub model. You can browse models at https://tfhub.dev/.
features (optional): list of labels or other feature names. The number of items must match the number of features in the model output; for example, if a text encoder outputting a 768-dimensional encoding (e.g., base BERT) is passed, this must be a list containing 768 items. Each dimension in the model output is returned as a separate feature in the ExtractorResult. Alternatively, the model output can be packed into a single feature (i.e., a vector) by passing a single-element list (e.g., ['encoding']) or a string. If no value is passed, the extractor automatically computes the number of features in the model output and returns an equal number of features in pliers, labeling each feature with a generic prefix plus its positional index in the model output (feature_0, feature_1, ..., feature_n).
output_key (str): key to the desired embedding in the output dictionary (see the documentation at https://www.tensorflow.org/hub/common_saved_model_apis/text). Set to None if the output is not a dictionary, or to output all keys.
preprocessor_url_or_path (str): if the model requires preprocessing through another TFHub model, specifies the url or path to the preprocessing module. Information on required preprocessing and appropriate models is generally available on the TFHub model webpage.
preprocessor_kwargs (dict): dictionary of named arguments for the preprocessor model's hub.KerasLayer call.

TFHubExtractor(url_or_path[, features, ...])

A generic class for TensorFlow Hub extractors.

Parameters:
url_or_path (str): url or path to the TFHub model. You can browse models at https://tfhub.dev/.
features (optional): list of feature names matching the output dimensions.

Misc-type extractors

MetricExtractor([functions, var_names, ...])

Extracts summary metrics from a SeriesStim using numpy, scipy, or custom functions.

Functions:

merge_results(results[, format, timing, ...])

Merges a list of ExtractorResult instances and returns a pandas DataFrame.
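
A minimal sketch (file names are hypothetical)::

    from pliers.stimuli import ImageStim
    from pliers.extractors import (BrightnessExtractor,
                                   VibranceExtractor, merge_results)

    stims = [ImageStim('a.jpg'), ImageStim('b.jpg')]  # hypothetical files
    extractors = [BrightnessExtractor(), VibranceExtractor()]

    results = [ext.transform(s) for ext in extractors for s in stims]

    # Collapse the list of ExtractorResult objects into one DataFrame
    df = merge_results(results)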

Filters (pliers.filters)

The Filter hierarchy contains Transformer classes that take a Stim of one type as input and return a Stim of the same type as output (but with some changes to its data).
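
A minimal sketch of typical Filter use::

    from pliers.stimuli import TextStim
    from pliers.filters import PunctuationRemovalFilter

    stim = TextStim(text='Hello, world!')

    # Filters return a new Stim of the same class as the input
    clean = PunctuationRemovalFilter().transform(stim)
    print(clean.text)  # punctuation stripped, e.g. 'Hello world'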

Classes:

FrameSamplingFilter([every, hertz, top_n])

Samples frames from video stimuli to improve efficiency.

ImageCroppingFilter([box])

Crops an image.

PillowImageFilter([image_filter])

Uses the ImageFilter module from PIL to run a pre-defined image enhancement filter on an ImageStim.

PunctuationRemovalFilter([name])

Removes punctuation from a TextStim.

TokenizingFilter([tokenizer])

Tokenizes a TextStim into several word TextStims.

TokenRemovalFilter([tokens, language])

Removes tokens (e.g., stopwords, common words, punctuation) from a TextStim.

WordStemmingFilter([stemmer, tokenize, ...])

NLTK-based word stemming and lemmatization Filter.

AudioResamplingFilter([target_sr, resample_type])

Librosa-based audio resampling Filter.

Graph construction (pliers.graph)

The graph module contains tools for constructing and executing graphs of pliers Transformers.
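
A minimal sketch, assuming the (transformer, children) node form and a hypothetical video file::

    from pliers.graph import Graph
    from pliers.filters import FrameSamplingFilter

    # Sample one frame per second, then run two image extractors
    # on every sampled frame; children may be named as strings
    nodes = [(FrameSamplingFilter(hertz=1),
              ['BrightnessExtractor', 'VibranceExtractor'])]
    g = Graph(nodes=nodes)

    # run() executes the whole workflow and merges all results
    # into a single pandas DataFrame
    df = g.run('movie.mp4')  # hypothetical file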

Classes:

Graph([nodes, spec])

Graph-like structure that represents an entire pliers workflow.

Node(transformer[, name])

A graph node/vertex.

Stimuli (pliers.stimuli)

The Stim hierarchy contains pliers representations of any object from which features can potentially be extracted.
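
A minimal sketch of constructing Stims directly (the image file is hypothetical)::

    from pliers.stimuli import TextStim, ImageStim, CompoundStim

    text = TextStim(text='a single word or phrase')
    image = ImageStim('photo.jpg')  # hypothetical file

    # CompoundStim bundles heterogeneous elements into one container
    combo = CompoundStim([text, image])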

Classes:

AudioStim([filename, onset, sampling_rate, ...])

Represents an audio clip.

ComplexTextStim([filename, onset, duration, ...])

A collection of text stims (e.g., a story), typically ordered and with onsets and/or durations associated with each element.

CompoundStim(elements)

A container for an arbitrary set of Stim elements.

ImageStim([filename, onset, duration, data, url])

Represents a static image.

TextStim([filename, text, onset, duration, ...])

Any simple text stimulus--most commonly a single word.

TweetStimFactory([consumer_key, ...])

An object from which to generate TweetStims; creates an Api instance using the python-twitter library.

TweetStim(status)

Represents the text and associated media from a single tweet.

TranscribedAudioCompoundStim(audio, text)

An AudioStim with an associated text transcription.

VideoFrameCollectionStim([filename, ...])

A collection of video frames.

VideoFrameStim(video, frame_num[, duration, ...])

A single frame of video.

VideoStim([filename, onset, url, clip])

A video.

Functions:

load_stims(source[, dtype, fail_silently])

Load one or more stimuli directly from file, inferring/extracting metadata as needed.
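
A minimal sketch (file names are hypothetical)::

    from pliers.stimuli import load_stims

    # The Stim subclass for each file is inferred from its extension
    stims = load_stims(['movie.mp4', 'speech.wav'])  # hypothetical files
    for s in stims:
        print(type(s).__name__)  # VideoStim, AudioStim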

Transformers (pliers.transformers)

The transformers module contains the base Transformer class from which all other pliers transformers inherit, as well as Transformer subclasses that have multiple subclasses spanning different modules (e.g., Google Cloud extractors that span audio, image, etc.).

Classes:

BatchTransformerMixin([batch_size])

A mixin that overrides the default implicit iteration behavior.

GoogleAPITransformer([discovery_file, ...])

Base GoogleAPITransformer class.

GoogleVisionAPITransformer([discovery_file, ...])

Base class for transformers using the Google Vision API.

Transformer([name])

Base class for all pliers Transformers.

Functions:

get_transformer(name[, base])

Scans the list of currently available Transformer classes and returns an instantiation of the first one whose name perfectly matches (case-insensitively).
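
A minimal sketch::

    from pliers.transformers import get_transformer

    # Name matching is case-insensitive
    ext = get_transformer('brightnessextractor')
    print(type(ext).__name__)  # BrightnessExtractor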