Import Workflows
Import jobs are the primary mechanism for loading data into Farseer variables. This guide covers both configuration styles and real-world workflow patterns.
Creating an Import Job
Style 1: Simple Columns (Recommended)
The columns parameter provides a declarative way to define import structure:
const importJob = await client.createImportJob({
title: 'Revenue Import Q1',
description: 'Quarterly revenue data',
columns: [
{ type: 'DIMENSION_TABLE', dimensionTableName: 'OPEX KEY' },
{ type: 'DIMENSION_TABLE', dimensionTableName: 'Entities' },
{ type: 'DIMENSION_TABLE', dimensionTableName: 'Years' },
{ type: 'DIMENSION_TABLE', dimensionTableName: 'Months' },
{ type: 'DIMENSION_TABLE', dimensionTableName: 'Versions' },
{ type: 'VARIABLE', variableName: 'OPEX PLAN Rebooked' },
],
labels: ['auto', 'opex plan rebooked', '2024', 'Plan']
});
Style 2: Manual Metadata (Advanced)
For full control over import configuration, build metadata manually:
async function getImportMetadata(client: FarseerClient) {
// DIMENSIONS
const VersionsTable = await client.getDimensionTable('Versions');
const YearsTable = await client.getDimensionTable('Years');
const MonthsTable = await client.getDimensionTable('Months');
const EntitiesTable = await client.getDimensionTable('Entities');
const OpexKeyTable = await client.getDimensionTable('OPEX KEY');
// VARIABLES
const OpexPlanRebookedVar = await client.getVariable('OPEX PLAN Rebooked');
return {
handleMissingTags: HandleMissingTags.SKIP_ROW,
importZeros: true,
overwriteCells: true,
columnMetas: [
{
name: 'Versions',
dataContext: farseer.ColumnMetaDataContext.IDENTIFIER,
tableTagId: VersionsTable.id,
},
{
name: 'Years',
dataContext: farseer.ColumnMetaDataContext.IDENTIFIER,
tableTagId: YearsTable.id,
},
{
name: 'Months',
dataContext: farseer.ColumnMetaDataContext.IDENTIFIER,
tableTagId: MonthsTable.id,
},
{
name: 'Entities',
dataContext: farseer.ColumnMetaDataContext.IDENTIFIER,
tableTagId: EntitiesTable.id,
},
{
name: 'OPEX KEY',
dataContext: farseer.ColumnMetaDataContext.IDENTIFIER,
tableTagId: OpexKeyTable.id,
},
{
name: 'OPEX PLAN Rebooked',
dataContext: farseer.ColumnMetaDataContext.VALUES,
applyTagsIds: [OpexPlanRebookedVar.id],
},
],
};
}
const importJob = await client.createImportJob({
title: 'Rebook opex keys',
description: 'Rebook opex keys',
metadata: await getImportMetadata(client),
labels: ['auto', 'opex keys rebook']
});
The Import Flow
Every import follows this pattern:
// 1. Create the job
const job = await client.createImportJob({ title, columns, labels });
// 2. Add data rows
await job.addRows(rows);
// 3. Undo previous import with same labels (optional)
await job.undoPrevious();
// 4. Commit and process (automatically flushes remaining rows)
const report = await job.commit();
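For repeated use, the four steps can be wrapped in a small helper. This is a sketch only: ImportJobLike is a hypothetical interface naming just the methods used above, not a type exported by the client.

```typescript
// Hypothetical minimal interface over the job object; only the methods
// used in the four-step flow above.
interface ImportJobLike {
  addRows(rows: unknown[][]): Promise<void>;
  undoPrevious(): Promise<void>;
  commit(): Promise<unknown>;
}

// Run the standard flow: buffer rows, undo the previous labeled import,
// then commit (which flushes any remaining buffered rows).
async function runImport(job: ImportJobLike, rows: unknown[][]): Promise<unknown> {
  await job.addRows(rows);
  await job.undoPrevious();
  return job.commit();
}
```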
How row buffering works
addRows() buffers rows and sends them in batches of 500. For example:
- 300 rows — nothing is sent yet (all buffered)
- 750 rows — 500 are sent immediately, 250 remain buffered
- 1200 rows — 1000 are sent (2 batches), 200 remain buffered
commit() automatically calls flushRows() internally, so the remaining buffered rows are always sent before the import is processed.
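The arithmetic in the list above can be sketched as a pure function (batchSplit is illustrative only, not part of the client):

```typescript
// How many rows addRows() would have sent so far vs. still buffered,
// given the default batch size of 500.
function batchSplit(totalRows: number, batchSize = 500) {
  const sent = Math.floor(totalRows / batchSize) * batchSize;
  return { sent, buffered: totalRows - sent };
}
```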
Calling flushRows() explicitly
You only need to call flushRows() manually if you want to inspect the import job before committing, for example during development when you comment out commit() or set a breakpoint. Without an explicit flushRows(), the buffered rows won't be visible in the Farseer UI yet.
await job.addRows(rows);
await job.flushRows(); // Now all rows are visible in the UI
// await job.commit(); // Commented out for debugging
Using Labels for Idempotent Imports
Labels let undoPrevious() locate and reverse the most recent committed import that carries the same labels, which makes scheduled re-runs idempotent:
const labels = [
'auto', // Marks as automated import
'sales-import', // Workflow identifier
`sales-import_2024-01`, // Specific run identifier
'source:erp' // Data source tracking
];
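A small helper can keep this label convention consistent across workflows (buildLabels is an illustrative pattern, not a client API):

```typescript
// Build the label set used above: automation marker, workflow id,
// run-specific id, and data-source tag.
function buildLabels(workflow: string, period: string, source: string): string[] {
  return ['auto', workflow, `${workflow}_${period}`, `source:${source}`];
}
```

A job created with buildLabels('sales-import', '2024-01', 'erp') carries the same workflow labels on every run, so undoPrevious() can find and reverse the prior import before the new one is committed.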
Real-World Workflows
Workflow 1: Export, Transform, Import
A common pattern: export data, transform it with Arquero, then import back.
import * as aq from 'arquero';
import { ExportRequestFormatEnum } from 'farseer-client';
async function exportTransformImport(client: FarseerClient, year: string) {
// 1. Calculate on-demand cells first
console.log('Calculating on-demand cells...');
const calc = client.cells.calculateOnDemandCells();
const timeout = new Promise(r => setTimeout(r, 10 * 60 * 1000)); // cap the wait at 10 minutes
await Promise.race([calc, timeout]);
// 2. Export variable data
const blob = await client.export.exportFormula({
format: ExportRequestFormatEnum.Csv,
formula: `sum("Revenue", "Y${year}", "Actual")`
});
// 3. Parse and transform with Arquero
const data = aq.fromCSV(await blob.text());
const transformed = data
.filter(d => d?.['Amount'] > 0)
.derive({
Forecast: d => d?.['Amount'] * 1.05,
Year: aq.escape((d: any) => `Y${parseInt(year) + 1}`)
});
// 4. Import transformed data
const importJob = await client.createImportJob({
title: `Forecast Import ${year}`,
columns: [
{ type: 'DIMENSION_TABLE', dimensionTableName: 'Product' },
{ type: 'DIMENSION_TABLE', dimensionTableName: 'Years' },
{ type: 'VARIABLE', variableName: 'Revenue' },
],
labels: ['auto', 'forecast', `forecast_${year}`]
});
const rows = transformed.objects().map(r => [
r['Product'], r['Year'], r['Forecast']
]);
await importJob.addRows(rows);
await importJob.flushRows();
await importJob.undoPrevious();
await importJob.commit();
}
Workflow 2: Sync Dimensions from External Source
import { ModelTableColumnType } from 'farseer-client';
async function syncDimensionsFromDB(client: FarseerClient, sqlData: any[]) {
const rows = sqlData.map(item => [
item.ProductCode,
item.ProductName,
item.Category
]);
await client.model.load({
model: {
tables: [{
name: 'Product',
columns: [
{ name: 'Name', type: ModelTableColumnType.PrimaryKey },
{ name: 'Description', type: ModelTableColumnType.Description },
{ name: 'Category', type: ModelTableColumnType.ForeignKey, foreignKeyTableName: 'Category' }
],
rows
}],
variables: [],
dependencies: { tables: [], members: [], variables: [] }
},
options: {
clear: false,
deleteEnabled: { rows: false, tables: false, variables: false, columns: false, cells: false },
dryRun: false
}
});
console.log(`Synced ${rows.length} products`);
}
Workflow 3: File-Based Import with Validation
async function importFromExcel(client: FarseerClient, filePath: string[]) {
// 1. Load Excel file from Farseer
const fileItem = await client.getItemByPath(filePath);
const table = await client.data.loadXlsxFile(fileItem);
// 2. Validate against dimension members
await client.initTagMap();
const products = await client.getDimensionMembersForTable('Product');
const productNames = products.map(p => p.name);
const rows = table.objects();
const validRows = rows.filter((r: any) => productNames.includes(r.Product));
if (validRows.length < rows.length) {
console.log(`Warning: ${rows.length - validRows.length} rows with invalid products skipped`);
}
// 3. Import validated data
const importJob = await client.createImportJob({
title: 'Excel import',
columns: [
{ type: 'DIMENSION_TABLE', dimensionTableName: 'Product' },
{ type: 'DIMENSION_TABLE', dimensionTableName: 'Years' },
{ type: 'VARIABLE', variableName: 'Revenue' },
],
labels: ['auto', 'excel import']
});
const importRows = validRows.map((r: any) => [r['Product'], r['Year'], r['Revenue']]);
await importJob.addRows(importRows);
await importJob.flushRows();
await importJob.undoPrevious();
await importJob.commit();
}
Batch Size Configuration
The default batch size for addRows() is 500. You can change it globally:
FarseerClient.IMPORT_JOB_ROW_BATCH_SIZE = 1000;
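The trade-off is request count versus payload size. A rough sketch of the effect (requestCount is illustrative only):

```typescript
// Total number of batched requests addRows()/flushRows() would issue
// for a given row count and batch size.
function requestCount(totalRows: number, batchSize: number): number {
  return Math.ceil(totalRows / batchSize);
}
```

For 10,000 rows, the default batch size of 500 issues 20 requests; raising it to 1000 halves that to 10, at the cost of larger request bodies.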