Segment Feed

Overview

Segments are merchant-defined groups of users based on meaningful attributes such as age, gender, behavior, and spend. Both Recommend and Engage make use of segments in determining recs and content. Merchants can provide pre-defined segments via a feed. This document describes the feed format and the process for providing that data to Algonomy.

This feed does not need to be uploaded daily. Upload it only when there are updates to segment data. This is typically scheduled weekly, monthly, or at another suitable interval. Alternatively, segments can also be sent via API or JavaScript tags. Contact your Algonomy representative for more information.

A segment is defined as a combination of a segment ID and a segment name, and each combination must be unique. For more information about segments, refer to Segments.

Segment Feed Format

Unlike catalog and content feeds, which use pipe-delimited CSV files with .txt extensions, segment feeds are sent as JSON data.

Segments are passed in with user IDs as JSON objects, each consisting of two items: the user ID and its associated values. The associated values are held in a nested object of key:value pairs, where each key is a segment ID and each value is the corresponding segment name.

  • Segment ID: Unique identifier for the segment.

  • Segment name: Human-readable name associated with the segment ID. This is optional. If you don't have a segment name to pass in, use an empty string ("") in its place.

Note: Once a segment ID is mapped to a name, it cannot be reused with a different name.
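The two rules above can be checked before building a feed. The following is a minimal sketch (not Algonomy-provided code) that validates a set of (segment ID, segment name) pairs, rejecting any segment ID that reappears with a different name:

```python
def validate_segments(segments):
    """Validate (segment_id, segment_name) pairs.

    Returns a dict mapping segment ID -> segment name.
    Raises ValueError if a segment ID is reused with a different name,
    which the feed does not allow.
    """
    seen = {}
    for seg_id, seg_name in segments:
        if seg_id in seen and seen[seg_id] != seg_name:
            raise ValueError(
                f"Segment ID {seg_id!r} is already mapped to {seen[seg_id]!r}; "
                f"it cannot be reused as {seg_name!r}"
            )
        seen[seg_id] = seg_name
    return seen
```

Running the same (ID, name) combination twice is harmless; only a conflicting name for an existing ID is an error.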

Feed File

  • Feed file: Segment Feed

  • Details: A JSON-formatted data file of segment data for your site.

  • Filename pattern: segments_sitename_YYYY_MM_DD.txt

File Specifications

The Personalization Platform supports both full feeds (complete overwrite) and delta feeds (incremental updates) for the User Profile Service.

In addition to segments, the platform also supports the ingestion of user attributes, user linkings, and user preferences through the User Profile Batch Upload or User Profile Enrichment APIs. Using the batch upload process, you can load segment data for each user by associating user IDs with the segments they belong to. This data is used to update user profiles within the platform.

The feed file requires one JSON object per line. Each object contains two fields:

  • userId (ASCII, required): A unique identifier for the user.

  • value (text, required): A valid JSON object that gives the unique segment identifiers and the human-friendly name of each segment for the individual user as key:value pairs. The segment value can be a string or a numeric value.

Keys in this object may need to conform to a maximum character length, depending on the total size of the segment data (which is strongly correlated with the number of records). For example, if there is a large number of records to ingest, your Algonomy team may ask that keys be kept to three or four characters.

Sample

{"userId": "13579", "value": {"seg2134": "Dog Owner", "seg50plus": "High Spender", "Platinum": "Platinum Member"}}

Note: Each unique segment identifier is meant to have a unique context or value.
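A feed file in this format can be produced with a few lines of code. The following is an illustrative sketch (the helper name, user IDs, and segment names are assumptions, not platform code) that writes one JSON object per line, matching the sample above:

```python
import json

def write_segment_feed(path, users):
    """Write a segment feed: one JSON object per line.

    users: dict mapping user ID -> dict of {segment_id: segment_name}.
    Use "" as the name when a segment has no human-readable name.
    """
    with open(path, "w", encoding="ascii") as f:
        for user_id, segments in users.items():
            record = {"userId": user_id, "value": segments}
            # json.dumps escapes non-ASCII by default, so ascii encoding is safe
            f.write(json.dumps(record) + "\n")

# Illustrative data; the filename follows the segments_sitename_YYYY_MM_DD.txt pattern
write_segment_feed(
    "segments_sitename_2024_01_15.txt",
    {"13579": {"seg2134": "Dog Owner", "seg50plus": "High Spender"}},
)
```

Each line is an independent JSON object, so the file can be appended to or streamed without re-parsing earlier records.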

Upload Segment Feed Data

You can upload segment feed data using either FTP or SFTP, depending on your use case and scale.

Segment feeds can be uploaded using the standard FTP-based ingestion process, where the JSON file is transferred directly to the FTP server and then processed using custom commands such as HIVEUPLOAD. This method is suitable for simple, manual uploads or smaller datasets and follows the same approach as other feed uploads. For more details on connecting to the FTP server and uploading files, refer to Importing Data with FTP.
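The FTP path can be sketched as below. The hostname and credentials are placeholders (assumptions, not real Algonomy endpoints; see Importing Data with FTP for actual connection details), and the HIVEUPLOAD processing step happens server-side after the transfer:

```python
import ftplib
from datetime import date

def feed_filename(sitename, d):
    """Build a name matching the segments_sitename_YYYY_MM_DD.txt pattern."""
    return f"segments_{sitename}_{d:%Y_%m_%d}.txt"

def upload_feed(local_path, sitename, host, user, password):
    """Transfer the feed file to the FTP server in binary mode."""
    remote_name = feed_filename(sitename, date.today())
    with ftplib.FTP(host, user, password) as ftp:
        with open(local_path, "rb") as f:
            ftp.storbinary(f"STOR {remote_name}", f)
    return remote_name

# Example call (placeholder host and credentials):
# upload_feed("segments.txt", "sitename", "ftp.example.com", "user", "pass")
```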

Segment feeds can also be uploaded using SFTP-based offline ingestion, which is the recommended approach for secure and large-scale data processing. In this method, files are uploaded as compressed archives along with a metadata file that defines the ingestion command (such as UPSBATCH with the rr-segments attribute). SFTP ingestion provides better structure, automation, and error handling through directory-based processing and asynchronous execution. For more details, refer to SFTP Offline Data Ingestion.