Building an Event-Driven Agentic Table Application¶
This tutorial shows how to build an Agentic Table application that responds to user interactions through event handlers. An Agentic Table is an interactive spreadsheet component in the Unique platform that can be automated using Python event handlers.
Agentic Tables work through an event-driven architecture. When users interact with a table in the Unique platform, events are sent to your webhook server. Here are the main events you'll handle:
SHEET_CREATED
Triggered when a user creates a new table sheet
Use this to initialize your table structure (headers, column styles)
ADD_META_DATA
Triggered when a user adds question files or source files
Use this to process uploaded files and populate table data
Implement custom logic for handling different file types
UPDATE_CELL
Triggered when a user edits an editable cell
Use this to implement business rules and automation
React to specific column changes with custom workflows
GENERATE_ARTIFACT
Triggered when a user requests a document export
Use this to create exportable reports or documents from table data
SHEET_COMPLETED
Triggered when a user marks a sheet as completed
Use this for final validation, archival, or triggering downstream processes
Here's how the event flow works:
```mermaid
sequenceDiagram
    participant User
    participant UniquePlatform
    participant YourWebhook
    participant AgenticTableService

    User->>UniquePlatform: Interacts with table
    UniquePlatform->>YourWebhook: Sends event webhook
    YourWebhook->>UniquePlatform: Register agent (lock table)
    YourWebhook->>YourWebhook: Route to specific handler
    YourWebhook->>AgenticTableService: Process event (update cells, status, etc.)
    AgenticTableService->>UniquePlatform: Apply changes
    YourWebhook->>UniquePlatform: Deregister agent (unlock table)
    UniquePlatform->>User: Display updated table
```
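The critical part of this flow is the register/deregister pair: the table stays locked for exactly the duration of processing, even when a handler fails. A minimal sketch of that pattern, with a hypothetical `FakeTableService` standing in for the real `AgenticTableService`:

```python
class FakeTableService:
    """Stand-in for AgenticTableService, used only to illustrate the pattern."""

    def __init__(self):
        self.locked = False

    async def register_agent(self):
        self.locked = True  # platform locks the table for this agent

    async def deregister_agent(self):
        self.locked = False  # table becomes editable again


async def process_event(service, handler):
    await service.register_agent()
    try:
        await handler(service)
        return 0  # success
    except Exception:
        return 1  # failure
    finally:
        # Always unlock, even when the handler raises
        await service.deregister_agent()
```

Because the deregister call sits in `finally`, a crashing handler cannot leave the table locked.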
Essential Setup: The Event Handler and Application¶
Before diving into individual handlers, let's set up the core infrastructure. The main event handler receives all events and routes them to specialized handlers:
```python
import logging

from unique_sdk.api_resources._agentic_table import ActivityStatus
from unique_toolkit.agentic_table.service import AgenticTableService
from unique_toolkit.agentic_table.schemas import MagicTableAction, MagicTableEvent

logger = logging.getLogger(__name__)


async def agentic_table_event_handler(event: MagicTableEvent) -> int:
    """
    Main event handler that routes table events to specialized handlers.
    """
    # Initialize the service to interact with the table
    at_service = AgenticTableService(
        user_id=event.user_id,
        company_id=event.company_id,
        table_id=event.payload.table_id,
    )

    # Register agent - locks the table during processing
    await at_service.register_agent()

    try:
        # Route events based on action type
        if event.payload.action == MagicTableAction.SHEET_CREATED:
            await at_service.deregister_agent()
            await handle_sheet_created(at_service, event.payload)
        elif event.payload.action == MagicTableAction.ADD_META_DATA:
            downloader = get_downloader(
                event.user_id, event.company_id, event.payload.chat_id
            )
            file_content_getter = get_file_content_getter(
                event.user_id, event.company_id, event.payload.chat_id
            )
            reference_builder = get_augmented_text_with_references(
                event.user_id,
                event.company_id,
                event.payload.chat_id,
                event.payload.assistant_id,
            )
            await handle_metadata_added(
                at_service,
                event.payload,
                downloader,
                file_content_getter,
                reference_builder,
            )
        elif event.payload.action == MagicTableAction.UPDATE_CELL:
            await handle_cell_updated(at_service, event.payload)
        elif event.payload.action == MagicTableAction.GENERATE_ARTIFACT:
            uploader = get_uploader(
                event.user_id, event.company_id, event.payload.chat_id
            )
            await handle_artifact_generated(at_service, event.payload, uploader)
        elif event.payload.action == MagicTableAction.SHEET_COMPLETED:
            logger.info(f"Sheet completed: {event.payload.sheet_name}")
            # Implement your completion logic here
        else:
            await at_service.set_activity(
                activity=event.payload.action,
                status=ActivityStatus.FAILED,
                text=f"Unknown action: {event.payload.action}",
            )
            raise Exception(f"Unknown action: {event.payload.action}")
        return 0  # Success
    except Exception as e:
        logger.error(f"Error in handler: {e}")
        return 1  # Failure
    finally:
        # Always deregister - unlocks the table
        await at_service.deregister_agent()
```
```python
from pathlib import Path

from unique_toolkit.agentic_table.schemas import MagicTableEventTypes
from unique_toolkit.app.fast_api_factory import build_unique_custom_app
from unique_toolkit.app.unique_settings import UniqueSettings

# Initialize settings
_SETTINGS = UniqueSettings.from_env(env_file=Path(__file__).parent / "unique.env")
_SETTINGS.init_sdk()

# Create app with the event handler
_MINIMAL_APP = build_unique_custom_app(
    title="Unique Minimal Agentic Table App",
    settings=_SETTINGS,
    event_handler=agentic_table_event_handler,
    event_constructor=MagicTableEvent,
    subscribed_event_names=[ev.value for ev in MagicTableEventTypes],
)

# Run the application
if __name__ == "__main__":
    import uvicorn

    uvicorn.run(
        "fastapi_app_agentic_table:_MINIMAL_APP",
        host="0.0.0.0",
        port=5001,
        reload=True,
        log_level="debug",
    )
```
Before implementing the handlers, we need to define what our table looks like. This example demonstrates a configuration pattern for defining columns. While this is hardcoded here, you can easily initialize it from a configuration payload, database, or user settings.
The column definitions specify:
Order: Position of the column (0-indexed)
Name: Column header text
Width: Column width in pixels
Renderer: Type of cell renderer (dropdown, checkbox, collaborator selector, etc.)
```python
from enum import StrEnum

from pydantic import BaseModel
from unique_sdk import CellRendererTypes


class ExampleColumnNames(StrEnum):
    QUESTION = "Question"
    SECTION = "Section"
    ANSWER = "Answer"
    CRITICAL_CONSISTENCY = "Critical Consistency"
    STATUS = "Status"
    REVIEWER = "Reviewer"


class ColumnDefinition(BaseModel):
    """
    Defines a single table column's structure and styling.

    Attributes:
        order: Column position (0-indexed)
        name: Column header text
        width: Column width in pixels
        renderer: Optional cell renderer type (dropdown, checkbox, etc.)
        editable: Whether the column is editable
    """

    order: int
    name: ExampleColumnNames
    width: int
    renderer: CellRendererTypes | None = None
    editable: bool = True


class ColumnDefinitions(BaseModel):
    """
    Container for all column definitions in the table.

    Provides helper methods to access columns by name.
    """

    columns: list[ColumnDefinition]

    @property
    def column_map(self) -> dict[str, ColumnDefinition]:
        """Map of column names to their definitions."""
        return {column.name: column for column in self.columns}

    def get_column_by_name(self, name: str) -> ColumnDefinition:
        """Get column definition by name."""
        return self.column_map[name]

    def get_column_name_by_order(self, order: int) -> ExampleColumnNames:
        """Get column name by order."""
        return self.columns[order].name

    def get_column_names(self) -> list[str]:
        """Get list of all column names."""
        return list(self.column_map.keys())


example_configuration = {
    "columns": [
        {
            "order": 0,
            "name": ExampleColumnNames.QUESTION,
            "width": 300,
            "renderer": None,
            "editable": False,
        },
        {
            "order": 1,
            "name": ExampleColumnNames.SECTION,
            "width": 150,
            "renderer": None,
            "editable": False,
        },
        {
            "order": 2,
            "name": ExampleColumnNames.ANSWER,
            "width": 400,
            "renderer": CellRendererTypes.SELECTABLE_CELL_RENDERER,
            "editable": True,
        },
        {
            "order": 3,
            "name": ExampleColumnNames.CRITICAL_CONSISTENCY,
            "width": 200,
            "renderer": None,
            "editable": True,
        },
        {
            "order": 4,
            "name": ExampleColumnNames.STATUS,
            "width": 150,
            "renderer": CellRendererTypes.REVIEW_STATUS_DROPDOWN,
            "editable": True,
        },
        {
            "order": 5,
            "name": ExampleColumnNames.REVIEWER,
            "width": 150,
            "renderer": CellRendererTypes.COLLABORATOR_DROPDOWN,
            "editable": True,
        },
    ]
}

example_column_definitions = ColumnDefinitions.model_validate(example_configuration)
```
This creates a typed interface for your table that ensures consistency across all operations. You'll reference these definitions in your handlers to access columns by name.
When: Triggered when a user creates a new table sheet.
Goal: Initialize the table by setting up column headers and applying styling.
What we do with at_service:
set_activity() - Shows status messages to users (e.g., "Initializing table schema...")
set_cell() - Sets individual cell values (used here to write column headers in row 0)
set_column_style() - Applies styling to columns (width, renderer type, editability)
This handler prepares the empty table structure so it's ready to receive data. After this runs, users will see a properly formatted table with headers and the correct column configurations.
Sheet Created Handler Implementation
```python
from logging import getLogger

from unique_sdk.api_resources._agentic_table import ActivityStatus
from unique_toolkit.agentic_table.schemas import MagicTableSheetCreatedPayload
from unique_toolkit.agentic_table.service import AgenticTableService

from .agentic_table_example_column_definition import example_column_definitions

logger = getLogger(__name__)


async def handle_sheet_created(
    at_service: AgenticTableService,
    payload: MagicTableSheetCreatedPayload,
) -> None:
    """
    Example handler for the sheet creation event.

    This demo shows how to initialize a new table by:
    - Setting column headers in row 0
    - Applying column styles (width, renderer type, editability)

    The table is ready to receive data after initialization.

    Args:
        at_service: Service instance for table operations
        payload: Event payload with table_id and sheet_name
    """
    logger.info(f"Initializing table: {payload.sheet_name}")

    # Set activity status
    await at_service.set_activity(
        text="Initializing table schema...",
        activity=payload.action,
        status=ActivityStatus.IN_PROGRESS,
    )

    # Set headers in row 0
    for col_def in example_column_definitions.columns:
        await at_service.set_cell(row=0, column=col_def.order, text=col_def.name.value)
    logger.info(f"Set {len(example_column_definitions.columns)} column headers")

    # Apply column styles
    for col_def in example_column_definitions.columns:
        await at_service.set_column_style(
            column=col_def.order,
            width=col_def.width,
            cell_renderer=col_def.renderer,
            editable=col_def.editable,
        )
    logger.info("Applied column styles")

    # Set completion status
    await at_service.set_activity(
        text="Table schema initialized successfully",
        activity=payload.action,
        status=ActivityStatus.COMPLETED,
    )
```
```python
# Retrieve Content objects for uploaded files
content = file_content_getter_fn(file_id)

# Access metadata
if content.metadata:
    section = content.metadata.get("section")
    department = content.metadata.get("department")

# EXAMPLE: Use ContentRegistry to group files by metadata keys
# (This is just one approach - implement your own filtering logic!)
content_registry = ContentRegistry(
    keys=["Finance", "Legal", "Technical"],
    contents=all_contents,
)

# Retrieve all files with a specific metadata key
finance_files = content_registry.get_contents_by_metadata_key("Finance")

# Alternative: Implement your own filtering
finance_files = [
    c for c in all_contents
    if c.metadata and c.metadata.get("department") == "Finance"
]
```
Use Cases:
- Route different source files to different table rows based on categories
- Filter content by department, section, or custom tags
- Build conditional logic based on file properties
The framework provides a reference system that converts inline citations into clickable links in the UI. This is crucial when AI agents generate text with citations to source documents.
Why This Matters:
- Users can click references to view source documents
- Creates audit trails for AI-generated content
- Improves transparency and trust in automated workflows
- Enables verification of information
```python
# Step 1: Create temporary IDs for your content items
reference_registry = create_id_map(relevant_contents, prefix="chunk")
# Returns: {"chunk_a1b2c3d4": content1, "chunk_x9y8z7w6": content2, ...}

# Step 2: Generate text with inline citations (from AI or your logic)
text_with_citations = "Based on the analysis [chunk_a1b2c3d4], we conclude..."

# Step 3: Convert to clickable references
augmented_text = augmented_text_with_references_fn(
    text_with_citations,
    reference_registry,
    prefix="chunk",
    citation_pattern=r"\[chunk_([a-zA-Z0-9\-]+)\]",
)
# Returns: "Based on the analysis [1&message_123], we conclude..."
# Frontend renders as: "Based on the analysis [1]↗, we conclude..." (clickable)
```
Use Cases:
- Link AI-generated answers to source documents
- Create audit trails for compliance and review
- Enable fact-checking and verification workflows
- Build transparent, explainable AI systems
- Track provenance of information across your pipeline
What we do with at_service:
set_activity() - Provides progress updates during processing
get_sheet() - Retrieves table data to understand what's already populated
get_num_rows() - Gets the current number of rows
set_multiple_cells() - Batch updates cells with referenced content
The key takeaway is the batch operation pattern - when dealing with large datasets, always use set_multiple_cells() instead of individual set_cell() calls.
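The batching pattern can be sketched with stand-in types (`Cell` and `FakeService` below are hypothetical, not part of the toolkit): collect all cell writes in memory first, then apply them with a single call instead of one round-trip per cell.

```python
from dataclasses import dataclass


@dataclass
class Cell:
    row: int
    col: int
    text: str


class FakeService:
    """Stand-in service that counts how many update calls were made."""

    def __init__(self):
        self.calls = 0
        self.cells = []

    async def set_multiple_cells(self, cells):
        self.calls += 1  # one round-trip for the whole batch
        self.cells = list(cells)


async def populate(service, rows):
    # Build the full batch in memory ...
    cells = [
        Cell(row=r + 1, col=c, text=value)  # +1 skips the header row
        for r, row in enumerate(rows)
        for c, value in enumerate(row)
        if value
    ]
    # ... then apply it with a single call instead of len(cells) calls
    await service.set_multiple_cells(cells)
    return len(cells)
```

With a 1,000-row CSV this turns thousands of sequential network calls into one batched request.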
Metadata Added Handler Implementation
```python
import io
from logging import getLogger
from typing import Callable

import pandas as pd
from unique_sdk.api_resources._agentic_table import ActivityStatus
from unique_toolkit.agentic_table.schemas import (
    MagicTableAddMetadataPayload,
    MagicTableCell,
)
from unique_toolkit.agentic_table.service import AgenticTableService
from unique_toolkit.content import Content, ContentChunk

from .agentic_table_example_column_definition import (
    ExampleColumnNames,
    example_column_definitions,
)
from .agentic_table_helper_functions import ContentRegistry, create_id_map

logger = getLogger(__name__)


async def handle_question_files(
    at_service: AgenticTableService,
    payload: MagicTableAddMetadataPayload,
    downloader_fn: Callable[[str], bytes],
) -> int:
    """
    Handle question files by downloading and parsing CSV to populate the table.

    Args:
        at_service: Service instance for table operations
        payload: Event payload with metadata and file IDs
        downloader_fn: Function to download file contents

    Returns:
        Number of rows added to the table

    Raises:
        Exception: If CSV processing fails
    """
    # Check if question files were provided
    if not payload.metadata.question_file_ids:
        logger.warning("No question files provided in metadata")
        return 0

    await at_service.set_activity(
        text="Downloading CSV file...",
        activity=payload.action,
        status=ActivityStatus.IN_PROGRESS,
    )

    # Get the first question file (CSV)
    file_id = payload.metadata.question_file_ids[0]
    logger.info(f"Downloading file: {file_id}")

    # Download file content
    file_content = downloader_fn(file_id)

    await at_service.set_activity(
        text="Parsing CSV file...",
        activity=payload.action,
        status=ActivityStatus.IN_PROGRESS,
    )

    file_content_stream = io.BytesIO(file_content)

    # Parse CSV file
    df = pd.read_csv(file_content_stream)
    df = df.fillna("")  # Convert NA values to empty strings
    logger.info(f"Parsed CSV with {len(df)} rows and {len(df.columns)} columns")
    logger.info(df.head())

    await at_service.set_activity(
        text=f"Populating table with {len(df)} rows...",
        activity=payload.action,
        status=ActivityStatus.IN_PROGRESS,
    )

    # Create batch cells
    cells = []
    for row_idx, row_data in df.iterrows():
        for col_def in example_column_definitions.columns:
            cell_value = row_data.get(col_def.name.value, "")
            if not cell_value:
                continue
            cells.append(
                MagicTableCell(
                    row_order=int(row_idx) + 1,  # type: ignore[arg-type]  # +1 for header row
                    column_order=col_def.order,
                    text=str(cell_value),
                    sheet_id=payload.table_id,
                )
            )
    logger.info(f"Created {len(cells)} cells for batch upload")

    # Batch upload all cells
    await at_service.set_multiple_cells(cells=cells)
    logger.info(f"Successfully populated table with {len(df)} rows")
    return len(df)


async def handle_source_files(
    at_service: AgenticTableService,
    payload: MagicTableAddMetadataPayload,
    file_content_getter_fn: Callable[[str], Content | None],
    augmented_text_with_references_fn: Callable[
        [str, dict[str, Content | ContentChunk], str, str], str
    ],
) -> int:
    """
    Handle source files by retrieving content and organizing by metadata.

    This handler demonstrates two key framework capabilities:
    1. Retrieving file content and accessing metadata
    2. Creating clickable references that link table cells to source documents

    The example shows:
    - How to fetch Content objects for uploaded files
    - How to use ContentRegistry to group files by metadata keys
    - How to generate text with inline citations and convert them to
      clickable references
    - How to populate table cells with referenced content

    Args:
        at_service: Service instance for table operations
        payload: Event payload with metadata and file IDs
        file_content_getter_fn: Function to retrieve file content objects
        augmented_text_with_references_fn: Function to convert citations to references

    Returns:
        Number of content items processed
    """
    # Check if source files were provided
    if not payload.metadata.source_file_ids:
        logger.warning("No source files provided in metadata")
        return 0

    await at_service.set_activity(
        text="Processing source files metadata...",
        activity=payload.action,
        status=ActivityStatus.IN_PROGRESS,
    )

    cells_to_update: list[MagicTableCell] = []
    num_rows = await at_service.get_num_rows()

    # STEP 1: Retrieve Content objects for all source files
    # Each Content object contains:
    # - content.id: Unique identifier
    # - content.metadata: Custom key-value pairs (e.g., {"section": "Finance"})
    # - content.title: File name or title
    # - content.text: Extracted text content
    # - content.chunks: List of ContentChunk objects for chunked documents
    all_contents = []
    for file_id in payload.metadata.source_file_ids:
        content = file_content_getter_fn(file_id)
        if content is None:
            logger.warning(f"No content found for file: {file_id}")
            continue
        if content.metadata is None:
            logger.warning(f"No metadata found for file: {file_id}")
            continue
        all_contents.append(content)

    # STEP 2: Organize content by metadata keys
    # This example assumes source files have metadata like:
    # {"Team": "true"}, {"Finance": "true"}, {"Technical": "true"}, etc.
    sections_of_interest = [
        "Team",
        "Finance",
        "Technical",
        "Planning",
    ]

    # ContentRegistry groups files by metadata keys
    # You can later retrieve all files tagged with "Finance", "Team", etc.
    content_registry = ContentRegistry(
        keys=sections_of_interest,
        contents=all_contents,
    )

    # STEP 3: Process each row in the table
    # This demonstrates row-by-row processing where each row might need
    # different source files
    for row_index in range(1, num_rows + 1):
        # Retrieve the current row to check what data it has
        row_cells = await at_service.get_sheet(
            start_row=row_index, end_row=row_index + 1
        )
        retrieved_cells: dict[ExampleColumnNames, MagicTableCell] = {
            example_column_definitions.get_column_name_by_order(cell.column_order): cell
            for cell in row_cells.magic_table_cells
        }
        logger.info(f"Retrieved cells: {retrieved_cells}")

        answer_cell = retrieved_cells.get(ExampleColumnNames.ANSWER)

        # Check if the answer cell exists. This means that the answer was
        # already generated.
        if answer_cell is not None:
            logger.info(f"Answer found for row {row_index}: {answer_cell.text}")
        else:
            # Get the section for this row (e.g., "Finance", "Team")
            section_name = retrieved_cells.get(ExampleColumnNames.SECTION)
            if section_name is None:
                logger.warning(f"No section found for row {row_index}")
                continue

            # STEP 4: Retrieve relevant content based on row metadata
            # Use the ContentRegistry to get all files tagged with this section
            relevant_contents = content_registry.get_contents_by_metadata_key(
                section_name.text
            )
            if len(relevant_contents) == 0:
                logger.warning(f"No contents found for section '{section_name.text}'")
                continue
            logger.info(
                f"Found {len(relevant_contents)} content items for section '{section_name}'"
            )

            # STEP 5: Create a reference registry for citation mapping
            # This creates temporary IDs like "chunk_a1b2c3d4" for each content item
            # These IDs will be used in inline citations: [chunk_a1b2c3d4]
            chunk_prefix = "chunk"
            reference_registry = create_id_map(relevant_contents, chunk_prefix)
            logger.info(f"Reference registry: {reference_registry.keys()}")

            # STEP 6: Generate text with inline citations
            # In a real application, this would be AI-generated text with citations
            # Here we simulate it by listing the content titles with citation markers
            simulated_text_generation_with_references = (
                "The following are the contents of the section: \n"
            )
            for chunk_id, content in reference_registry.items():
                # Add inline citation in format [chunk_xxx]
                simulated_text_generation_with_references += (
                    f"{content.title} [{chunk_id}]\n"
                )

            # STEP 7: Convert inline citations to clickable references
            # This transforms [chunk_a1b2c3d4] into numbered references like [1], [2]
            # The frontend will render these as clickable links to the source files
            augmented_text = augmented_text_with_references_fn(
                simulated_text_generation_with_references,
                reference_registry,  # type: ignore[arg-type]
                chunk_prefix,
                r"\[chunk_([a-zA-Z0-9\-]+)\]",  # Citation pattern to match
            )

            # STEP 8: Update the table cell with referenced text
            cells_to_update.append(
                MagicTableCell(
                    row_order=row_index,
                    column_order=example_column_definitions.get_column_by_name(
                        ExampleColumnNames.ANSWER
                    ).order,
                    text=augmented_text,
                    sheet_id=payload.table_id,
                )
            )

    # Apply any cell updates
    if cells_to_update:
        await at_service.set_multiple_cells(cells=cells_to_update)

    await at_service.set_activity(
        text=f"Successfully processed {len(all_contents)} source files",
        activity=payload.action,
        status=ActivityStatus.COMPLETED,
    )
    return len(all_contents)


async def handle_metadata_added(
    at_service: AgenticTableService,
    payload: MagicTableAddMetadataPayload,
    downloader_fn: Callable[[str], bytes],
    file_content_getter_fn: Callable[[str], Content | None],
    augmented_text_with_references_fn: Callable[
        [str, dict[str, Content | ContentChunk], str, str], str
    ],
) -> None:
    """
    Example handler for the metadata addition event.

    This demo shows how to populate a table from uploaded files:
    - Process question files: Downloads CSV files and populates the table
    - Process source files: Retrieves content and groups by metadata

    Args:
        at_service: Service instance for table operations
        payload: Event payload with metadata and file IDs
        downloader_fn: Function to download file contents
        file_content_getter_fn: Function to retrieve file content objects
    """
    logger.info(f"Processing metadata for sheet: {payload.sheet_name}")

    try:
        # Handle question files (CSV processing)
        num_question_rows = await handle_question_files(
            at_service=at_service,
            payload=payload,
            downloader_fn=downloader_fn,
        )

        # Handle source files (content and metadata processing)
        num_source_rows = await handle_source_files(
            at_service=at_service,
            payload=payload,
            file_content_getter_fn=file_content_getter_fn,
            augmented_text_with_references_fn=augmented_text_with_references_fn,
        )

        # This is different from the LogEntry which shows in the cell history
        await at_service.set_activity(
            text=(
                f"Successfully loaded {num_question_rows} rows from CSV and "
                f"{num_source_rows} source file metadata rows"
            ),
            activity=payload.action,
            status=ActivityStatus.COMPLETED,
        )
    except Exception as e:
        logger.exception(f"Error processing files: {e}", exc_info=True)
        await at_service.set_activity(
            text=f"Failed to process files: {str(e)}",
            activity=payload.action,
            status=ActivityStatus.FAILED,
        )
        raise
```
When: Triggered when a user edits an editable cell.
Goal: Implement business rules that react to cell changes. In this example, we monitor a specific column and lock rows when they reach a certain state.
What we do with at_service:
set_cell() - Updates the cell with additional context (log entries)
update_row_verification_status() - Changes the verification status of entire rows (can be used to lock/unlock or mark as verified)
This handler demonstrates workflow automation. The example checks if the "Critical Consistency" column is changed to "Consistent" and then:
Adds a log entry documenting the change (creates an audit trail)
Marks the row as verified (which can trigger visual indicators or prevent further edits)
You could extend this pattern to:
Regenerate AI responses when questions are modified
Trigger validation workflows
Update dependent cells automatically
Send notifications or trigger external systems
Implement approval chains or review processes
The power here is in defining your business logic - the framework just provides the hooks.
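One way to keep that business logic organized as it grows is a small column-to-callback dispatch table, so new automations are added without touching the router. A sketch of the idea (all names here are illustrative, not toolkit API):

```python
# Registry mapping a column name to the rule that reacts to its changes
RULES = {}


def rule(column):
    """Decorator registering a callback for edits to the given column."""

    def register(fn):
        RULES[column] = fn
        return fn

    return register


@rule("Critical Consistency")
def on_consistency(value):
    # Lock/verify the row once it is marked consistent
    return "verify-row" if value.lower() == "consistent" else None


@rule("Question")
def on_question(value):
    # A changed question could invalidate the stored answer
    return "regenerate-answer"


def dispatch(column, value):
    """Run the rule for the edited column, if one is registered."""
    handler = RULES.get(column)
    return handler(value) if handler else None
```

In a real handler, `dispatch` would be called with the column name resolved from `payload.column_order`, and the returned action would drive the `at_service` calls.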
Cell Updated Handler Implementation
```python
from datetime import datetime
from logging import getLogger

from unique_sdk import RowVerificationStatus
from unique_toolkit.agentic_table.schemas import LogEntry, MagicTableUpdateCellPayload
from unique_toolkit.agentic_table.service import AgenticTableService
from unique_toolkit.language_model.schemas import LanguageModelMessageRole

from .agentic_table_example_column_definition import (
    ExampleColumnNames,
    example_column_definitions,
)

logger = getLogger(__name__)


async def handle_cell_updated(
    at_service: AgenticTableService,
    payload: MagicTableUpdateCellPayload,
) -> None:
    """
    Example handler for the cell update event.

    This demo shows a simple workflow automation: when the Critical
    Consistency column changes to "Consistent", it adds a log entry and
    updates the row verification status.

    Args:
        at_service: Service instance for table operations
        payload: Event payload with row, column, and new value
    """
    logger.info(
        f"Cell updated at row {payload.row_order}, "
        f"column {payload.column_order}: {payload.data}"
    )

    critical_consistency_col = example_column_definitions.get_column_by_name(
        ExampleColumnNames.CRITICAL_CONSISTENCY
    )

    # Check if the Critical Consistency column was updated
    if payload.column_order == critical_consistency_col.order:
        status_value = payload.data.strip()
        logger.info(f"Status changed to: {status_value}")

        # Check if the status indicates consistency (lock row)
        if status_value.lower() in ["consistent"]:
            logger.info(
                f"Locking row {payload.row_order} due to status: {status_value}"
            )

            # Note: Column-level locking affects all rows. In a production system,
            # you might track locked rows in metadata and validate edits server-side.
            # Here we demonstrate the pattern with a log entry.

            # Add log entry to document the status change and locking
            log_entries = [
                LogEntry(
                    text=(
                        f"Row {payload.row_order} marked as {status_value}. "
                        "Further edits should be restricted."
                    ),
                    created_at=datetime.now().isoformat(),
                    actor_type=LanguageModelMessageRole.ASSISTANT,
                )
            ]
            await at_service.set_cell(
                row=payload.row_order,
                column=payload.column_order,
                text=status_value,
                log_entries=log_entries,
            )

            # Update row verification status
            await at_service.update_row_verification_status(
                row_orders=[payload.row_order],
                status=RowVerificationStatus.VERIFIED,
            )
            logger.info(f"Row {payload.row_order} verified and logged")
```
```python
from datetime import datetime
from logging import getLogger
from typing import Callable

from unique_sdk.api_resources._agentic_table import ActivityStatus
from unique_toolkit._common.docx_generator import (
    DocxGeneratorConfig,
    DocxGeneratorService,
)
from unique_toolkit.agentic_table.schemas import (
    MagicTableGenerateArtifactPayload,
    MagicTableSheet,
)
from unique_toolkit.agentic_table.service import AgenticTableService
from unique_toolkit.content.schemas import Content

from .agentic_table_example_column_definition import (
    ExampleColumnNames,
    example_column_definitions,
)

logger = getLogger(__name__)


async def handle_artifact_generated(
    at_service: AgenticTableService,
    payload: MagicTableGenerateArtifactPayload,
    uploader_fn: Callable[[bytes, str, str], Content],
) -> None:
    """
    Example handler for the artifact generation event.

    This demo shows how to export table data as a Word document:
    - Fetches all table data
    - Organizes it by sections
    - Generates a markdown report
    - Converts to DOCX and uploads it
    - Links the artifact back to the table

    Args:
        at_service: Service instance for table operations
        payload: Event payload with artifact type
        uploader_fn: Function to upload the generated file
    """
    logger.info(f"Generating artifact of type: {payload.data.artifact_type}")

    await at_service.set_activity(
        text="Starting report generation...",
        activity=payload.action,
        status=ActivityStatus.IN_PROGRESS,
    )

    try:
        # Read and organize data
        sheet = await at_service.get_sheet(start_row=0, end_row=None)
        rows_data = organize_sheet_data(sheet)

        # Build markdown report
        markdown = build_markdown_report(rows_data)

        # Generate DOCX
        await at_service.set_activity(
            text="Generating document...",
            activity=payload.action,
            status=ActivityStatus.IN_PROGRESS,
        )
        docx_generator = DocxGeneratorService(
            config=DocxGeneratorConfig(
                template_content_id="content-template-generic",
            )
        )
        content_fields = docx_generator.parse_markdown_to_list_content_fields(markdown)
        docx_file = docx_generator.generate_from_template(content_fields)
        if not docx_file:
            raise Exception("Failed to generate DOCX file")

        # Upload to chat
        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        filename = f"Table_Report_{timestamp}.docx"
        content = uploader_fn(
            docx_file,
            "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
            filename,
        )

        # Set artifact reference
        await at_service.set_artifact(
            artifact_type=payload.data.artifact_type,
            content_id=content.id,
            mime_type="application/vnd.openxmlformats-officedocument.wordprocessingml.document",
            name=filename,
        )

        await at_service.set_activity(
            text=f"Report generated successfully: {filename}",
            activity=payload.action,
            status=ActivityStatus.COMPLETED,
        )
    except Exception as e:
        logger.error(f"Error generating artifact: {e}")
        await at_service.set_activity(
            text=f"Report generation failed: {str(e)}",
            activity=payload.action,
            status=ActivityStatus.FAILED,
        )
        raise


def organize_sheet_data(sheet: MagicTableSheet) -> dict[int, dict[int, str]]:
    """
    Convert flat cell list to nested dictionary structure.

    Returns:
        Dictionary with structure {row_order: {column_order: cell_text}}
    """
    rows_data: dict[int, dict[int, str]] = {}
    for cell in sheet.magic_table_cells:
        if cell.row_order not in rows_data:
            rows_data[cell.row_order] = {}
        rows_data[cell.row_order][cell.column_order] = cell.text
    return rows_data


def build_markdown_report(rows_data: dict[int, dict[int, str]]) -> str:
    """
    Build a markdown report grouped by sections.

    Returns:
        Markdown string with sections and question details
    """
    markdown_lines = [
        "# Table Report",
        "",
        f"**Generated:** {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
        "",
        "---",
        "",
    ]

    # Get column indices
    question_col = example_column_definitions.get_column_by_name(
        ExampleColumnNames.QUESTION
    ).order
    section_col = example_column_definitions.get_column_by_name(
        ExampleColumnNames.SECTION
    ).order
    answer_col = example_column_definitions.get_column_by_name(
        ExampleColumnNames.ANSWER
    ).order
    consistency_col = example_column_definitions.get_column_by_name(
        ExampleColumnNames.CRITICAL_CONSISTENCY
    ).order
    status_col = example_column_definitions.get_column_by_name(
        ExampleColumnNames.STATUS
    ).order
    reviewer_col = example_column_definitions.get_column_by_name(
        ExampleColumnNames.REVIEWER
    ).order

    # Get data rows (excluding header row 0)
    data_rows = {k: v for k, v in rows_data.items() if k > 0}

    # Group by section
    sections: dict[str, list[dict[int, str]]] = {}
    for row_data in data_rows.values():
        section = row_data.get(section_col, "General")
        if section not in sections:
            sections[section] = []
        sections[section].append(row_data)

    # Add each section
    for section_name, section_rows in sections.items():
        markdown_lines.extend([
            f"## {section_name}",
            "",
        ])
        for row_data in section_rows:
            question = row_data.get(question_col, "N/A")
            answer = row_data.get(answer_col, "N/A")
            consistency = row_data.get(consistency_col, "N/A")
            status = row_data.get(status_col, "N/A")
            reviewer = row_data.get(reviewer_col, "Unassigned")
            markdown_lines.extend([
                f"**Question:** {question}",
                "",
                f"**Answer:** {answer}",
                "",
                f"**Consistency:** {consistency}",
                "",
                f"**Status:** {status}",
                "",
                f"**Reviewer:** {reviewer}",
                "",
                "---",
                "",
            ])

    return "\n".join(markdown_lines)
```
The tutorial examples use several helper functions that encapsulate common patterns for working with files, metadata, and references. These are available in the toolkit and demonstrate best practices for your own applications.
```python
# Create the downloader with authentication context
downloader = get_downloader_fn(user_id, company_id, chat_id)

# Download file content
file_bytes = downloader(file_id)

# Process the bytes (e.g., parse CSV)
csv_data = pd.read_csv(io.BytesIO(file_bytes))
```
Use in your handlers: Pass this function to handlers that need to download and process file contents.
```python
# Create the uploader with authentication context
uploader = get_uploader_fn(user_id, company_id, chat_id)

# Upload a file
content = uploader(
    file_bytes,
    "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
    "report.docx",
)

# Use the returned content ID
content_id = content.id
```
Use in your handlers: Pass this function to artifact generation handlers.
```python
# Create the reference builder with authentication context
reference_builder = get_augmented_text_with_references_fn(
    user_id, company_id, chat_id, assistant_id
)

# Create a reference registry
reference_registry = create_id_map(content_items, prefix="chunk")

# Convert citations to references
text_with_citations = "According to the report [chunk_abc123]..."
augmented_text = reference_builder(
    text_with_citations,
    reference_registry,
    prefix="chunk",
    citation_pattern=r"\[chunk_([a-zA-Z0-9\-]+)\]",
)
# Result: "According to the report [1]..." (clickable in UI)
```
Use in your handlers: Pass this function to handlers that generate AI content with source citations.
An example utility class demonstrating how to organize Content objects by metadata keys. This is just one approach - you should implement your own filtering logic based on your specific needs.
```python
# Create a registry with metadata keys
content_registry = ContentRegistry(
    keys=["Finance", "Legal", "Technical"],
    contents=all_content_objects,
)

# Retrieve files by metadata key
finance_files = content_registry.get_contents_by_metadata_key("Finance")
legal_files = content_registry.get_contents_by_metadata_key("Legal")
```
This example shows: Grouping by metadata key existence (e.g., file has {"Finance": "true"})
Your implementation might:
- Filter by metadata values instead of keys (e.g., {"status": "approved"})
- Support complex queries (AND/OR conditions, ranges, regex)
- Combine multiple metadata attributes (e.g., section AND department)
- Implement scoring/ranking logic for content relevance
- Use caching strategies for large content sets
Key point: Build your own registry class that matches your business logic and metadata structure!
```python
# Create unique IDs for content items
id_map = create_id_map(content_list, prefix="chunk")
# Returns: {"chunk_a1b2c3d4": content1, "chunk_x9y8z7w6": content2, ...}

# Use in text generation
text = f"See source [{list(id_map.keys())[0]}]"
```
Pattern: Always use consistent prefixes and citation patterns throughout your application.
```python
from collections import defaultdict
from logging import getLogger
from typing import Callable, TypeVar

from unique_toolkit.chat import ChatMessageRole
from unique_toolkit.content import ContentChunk
from unique_toolkit.content.schemas import Content, ContentReference

logger = getLogger(__name__)


def get_file_content_getter_fn(
    user_id: str, company_id: str, chat_id: str
) -> Callable[[str], Content | None]:
    """
    Factory function to create a content retriever with authentication context.

    The returned function allows you to:
    - Retrieve full Content objects by file ID
    - Access file metadata (custom key-value pairs attached during upload)
    - Access file chunks (for chunked documents)
    - Access file text and other properties

    Returns:
        A function that retrieves Content by file_id, or None if not found
    """
    from unique_toolkit.content.functions import search_contents

    def get_content_fn(file_id: str) -> Content | None:
        # Search for content by exact ID match
        where = {"id": {"equals": file_id}}
        contents = search_contents(
            user_id=user_id, company_id=company_id, chat_id=chat_id, where=where
        )
        assert len(contents) <= 1
        if len(contents) == 0:
            logger.warning(f"No content info found for file: {file_id}")
            return None
        if contents[0].metadata is None:
            logger.warning(f"No metadata found for file: {file_id}")
        logger.info(f"Metadata for file {file_id}: {contents[0].metadata}")
        return contents[0]

    return get_content_fn


def get_downloader_fn(
    user_id: str, company_id: str, chat_id: str
) -> Callable[[str], bytes]:
    """
    Factory function to create a file downloader with authentication context.

    Returns a function that downloads files by content_id.
    """
    from unique_toolkit.content.functions import download_content_to_bytes

    return lambda file_id: download_content_to_bytes(
        user_id=user_id, company_id=company_id, chat_id=chat_id, content_id=file_id
    )


def get_uploader_fn(
    user_id: str, company_id: str, chat_id: str
) -> Callable[[bytes, str, str], Content]:
    """
    Factory function to create a file uploader with authentication context.

    Returns a function that uploads files to the chat.
    """
    from unique_toolkit.content.functions import upload_content_from_bytes

    def uploader(content: bytes, mime_type: str, content_name: str) -> Content:
        return upload_content_from_bytes(
            user_id=user_id,
            company_id=company_id,
            content=content,
            mime_type=mime_type,
            content_name=content_name,
            chat_id=chat_id,
            skip_ingestion=True,
        )

    return uploader


def convert_content_chunk_to_reference(
    *,
    message_id: str,
    content_or_chunk: Content | ContentChunk,
    sequence_number: int | None = None,
    start_page: int | None = None,
    end_page: int | None = None,
) -> ContentReference:
    title = content_or_chunk.title or content_or_chunk.key or content_or_chunk.id
    page_suffix = None
    if start_page:
        if end_page:
            page_suffix = f": {start_page} - {end_page}"
        else:
            page_suffix = f": {start_page}"
    title = f"{title}{page_suffix}" if page_suffix else title
    return ContentReference(
        message_id=message_id,
        url=f"unique://content/{content_or_chunk.id}",
        source_id=content_or_chunk.id,
        name=title,
        sequence_number=sequence_number or 0,
        source="agentic-table",
    )


def get_augmented_text_with_references_fn(
    user_id: str, company_id: str, chat_id: str, assistant_id: str
) -> Callable[[str, dict[str, Content | ContentChunk], str, str], str]:
    """
    Factory function to create a reference builder with authentication context.

    The returned function converts inline citations in text (e.g., [chunk_abc123])
    into clickable references in the Unique UI. These references:
    - Appear as numbered citations (e.g., [1], [2]) in the frontend
    - Are clickable and navigate to the source content
    - Include metadata like title, page numbers, and source ID

    This is useful when:
    - AI agents generate text with citations to source documents
    - You want to create audit trails linking table cells to source files
    - You need to show provenance of data in the table

    Returns:
        A function that converts inline citations to numbered references
    """
    import re

    from unique_toolkit.chat.functions import create_message, modify_message

    # Default pattern matches citations like [chunk_abc123] or [chunk_xyz-456]
    _DEFAULT_CITATION_PATTERN = r"\[chunk_([a-zA-Z0-9\-]+)\]"

    def reference_builder(
        text: str,
        reference_registry: dict[str, Content | ContentChunk],
        prefix: str = "chunk",
        citation_pattern: str = _DEFAULT_CITATION_PATTERN,
    ) -> str:
        """
        Converts inline citations in text to numbered references with full content metadata.

        This function:
        1. Extracts all citation IDs from the text (e.g., [chunk_abc123])
        2. Looks up each citation in the reference registry
        3. Converts them to numbered references (e.g., [1&message_id])
        4. Creates a message with the processed text and reference metadata

        Args:
            text: The text containing inline citations in format [chunk_xxx].
            reference_registry: Dictionary mapping citation IDs to their full
                Content or ContentChunk objects.
            citation_pattern: Regex pattern to extract citation IDs from text
                (default matches [chunk_xxx]).

        Returns:
            The processed text with inline citations converted to numbered references.
        """
        # Create a new assistant message to hold the references
        message = create_message(
            user_id=user_id,
            company_id=company_id,
            chat_id=chat_id,
            assistant_id=assistant_id,
            role=ChatMessageRole.ASSISTANT,
        )
        assert message.id is not None

        # Extract all citation IDs from the text (e.g., "abc123" from "[chunk_abc123]")
        chunk_ids = re.findall(citation_pattern, text)
        logger.info(f"Found {len(chunk_ids)} chunk IDs in text")
        logger.info(f"Chunk IDs: {chunk_ids}")

        # Track which citations we've already processed to avoid duplicates
        processed_citations = {}
        # Collect all reference metadata to attach to the message
        message_references = []

        # Process each citation found in the text
        for chunk_id in chunk_ids:
            # Check if we've already processed this citation
            if chunk_id in processed_citations:
                # Reuse the same reference notation for duplicate citations
                reference_notation = processed_citations[chunk_id]
            else:
                # Look up the full content/chunk object for this citation
                referenced_content = reference_registry.get(f"{prefix}_{chunk_id}")
                if referenced_content:
                    # This is a valid citation - create a numbered reference
                    sequence_number = len(processed_citations) + 1
                    # Add the reference metadata to the message
                    message_references.append(
                        convert_content_chunk_to_reference(
                            message_id=message.id,
                            content_or_chunk=referenced_content,
                            sequence_number=sequence_number,
                        )
                    )
                    # Format: [sequence_number&message_id] (e.g., [1&msg_123])
                    reference_notation = f"[{sequence_number}&{message.id}]"
                    processed_citations[chunk_id] = reference_notation
                else:
                    # Citation ID not found in registry - mark as invalid
                    reference_notation = "[???]"
            # Replace the inline citation with the reference notation
            text = text.replace(f"[chunk_{chunk_id}]", reference_notation)

        # Update the message with the processed text and all references
        modify_message(
            assistant_message_id=message.id,
            user_message_id=message.id,
            user_message_text=text,
            assistant=True,
            user_id=user_id,
            company_id=company_id,
            chat_id=chat_id,
            references=message_references,
            content=text,
        )
        return text

    return reference_builder


class ContentRegistry:
    """
    An EXAMPLE utility class for organizing Content objects by metadata keys.

    This demonstrates ONE WAY to manage content with metadata. You should
    implement your own filtering logic based on your specific requirements.

    Example use case: If your source files have metadata like
    {"section": "Finance"} or {"section": "Legal"}, this class groups them by
    those keys so you can retrieve all Finance-related files when processing a
    Finance row in your table.

    This is intentionally simple to show the pattern. For production use, consider:
    - Filtering by metadata VALUES, not just keys (e.g., {"status": "approved"})
    - Complex queries (AND/OR conditions, ranges, regex patterns)
    - Multiple metadata attributes (e.g., section AND department)
    - Caching strategies for large content sets
    - Custom scoring/ranking logic for content relevance

    Build your own registry class that fits your business logic!
    """

    def __init__(self, keys: list[str], contents: list[Content]):
        """
        Initialize with metadata keys and a list of Content objects.

        This example implementation groups content by checking if metadata keys
        exist. Your implementation might filter by metadata values, use complex
        queries, or implement completely different logic.

        Args:
            keys: List of metadata keys to group by (e.g., ["Finance", "Legal"])
            contents: List of Content objects to organize
        """
        self.keys = keys
        self.contents = contents
        grouped: dict[str, list[Content]] = defaultdict(list)

        # Group content by metadata keys
        for content in self.contents:
            if content.metadata is None:
                logger.warning(f"No metadata found for content: {content.id}")
                continue
            # Check if any of our target keys exist in this content's metadata
            for key in keys:
                if key in content.metadata:
                    logger.info(f"Found metadata key: {key} for content: {content.id}")
                    grouped[key].append(content)

        self.contents_by_key = dict(grouped)

    def get_contents_by_metadata_key(self, key: str) -> list[Content]:
        """
        Retrieve all content items that have the specified metadata key.

        Args:
            key: The metadata key to filter by

        Returns:
            List of Content objects with that metadata key, or empty list if none found
        """
        return self.contents_by_key.get(key, [])


T = TypeVar("T")


def create_id_map(items: list[T], prefix: str) -> dict[str, T]:
    """
    Create a mapping of generated IDs to items for use in reference systems.

    This helper generates unique IDs for a list of items (Content or
    ContentChunk objects) so they can be cited in text and later resolved back
    to their full objects.

    Args:
        items: List of items to create IDs for (typically Content or ContentChunk objects)
        prefix: Prefix for generated IDs (e.g., "chunk" creates IDs like "chunk_a1b2c3d4")

    Returns:
        Dictionary mapping generated IDs to items

    Example:
        >>> contents = [content1, content2, content3]
        >>> id_map = create_id_map(contents, "chunk")
        >>> # Returns: {"chunk_a1b2c3d4": content1, "chunk_x9y8z7w6": content2, ...}
    """
    from uuid import uuid4

    return {f"{prefix}_{uuid4().hex[:8]}": item for item in items}
```
Before processing any event, you must call at_service.register_agent(). This locks the table to prevent concurrent modifications. Always deregister in a finally block to ensure proper cleanup even if errors occur.
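The lock/unlock lifecycle can be sketched with a stub service; `StubTableService` here is a hypothetical stand-in for illustration, not the real `AgenticTableService`:

```python
import asyncio


class StubTableService:
    """Hypothetical stand-in for AgenticTableService (illustration only)."""

    def __init__(self) -> None:
        self.locked = False

    async def register_agent(self) -> None:
        self.locked = True  # table shows as "Updating" while locked

    async def deregister_agent(self) -> None:
        self.locked = False


async def handle_event(at_service: StubTableService) -> int:
    # Lock the table before doing any work
    await at_service.register_agent()
    try:
        # ... route the event to its specific handler here ...
        return 0  # success
    except Exception:
        return 1  # failure
    finally:
        # Runs on both success and failure, so the table is never left locked
        await at_service.deregister_agent()


svc = StubTableService()
result = asyncio.run(handle_event(svc))
```

Because the `deregister_agent()` call sits in `finally`, an exception inside a handler can never leave the table stuck in the "Updating" state.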
Use at_service.set_activity() liberally to communicate with users. These appear as status banners in the UI and are essential for long-running operations. Include:
IN_PROGRESS status with descriptive text during processing
COMPLETED status when successful
FAILED status with error messages if something goes wrong
The set_multiple_cells() method is crucial for performance when dealing with multiple cell updates. It's dramatically faster than individual set_cell() calls and reduces network overhead.
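The performance difference comes down to round-trips: one request for an entire batch versus one per cell. The sketch below uses hypothetical stubs that simply count simulated requests:

```python
import asyncio
from dataclasses import dataclass


@dataclass
class Cell:
    """Simplified stand-in for MagicTableCell (illustration only)."""

    row_order: int
    column_order: int
    text: str


class StubService:
    """Counts simulated network round-trips (illustration only)."""

    def __init__(self) -> None:
        self.round_trips = 0

    async def set_cell(self, cell: Cell) -> None:
        self.round_trips += 1  # one request per cell

    async def set_multiple_cells(self, cells: list[Cell]) -> None:
        self.round_trips += 1  # one request for the whole batch


values = ["alpha", "beta", "gamma"]
cells = [Cell(row_order=i + 1, column_order=0, text=v) for i, v in enumerate(values)]

naive = StubService()
for cell in cells:
    asyncio.run(naive.set_cell(cell))  # one round-trip per cell

batched = StubService()
asyncio.run(batched.set_multiple_cells(cells))  # single round-trip
```

For a table with hundreds of cells, collecting updates into a list and issuing one batched call turns hundreds of sequential requests into one.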
```python
reference_builder = get_augmented_text_with_references_fn(
    user_id, company_id, chat_id, assistant_id
)

# Create ID mapping for your content
id_map = create_id_map(content_items, prefix="chunk")

# Convert citations to references
augmented_text = reference_builder(
    text_with_citations,
    id_map,
    prefix="chunk",
    citation_pattern=r"\[chunk_([a-zA-Z0-9\-]+)\]",
)
```
Key Benefits:
- Citations become clickable links in the UI
- Users can verify source information
- Creates audit trails for AI-generated content
- Improves transparency and trust
Remember that all handlers in this tutorial are examples. The framework provides events and tools - how you use them depends on your business requirements. You can integrate with any external system, use AI models, implement complex workflows, or create custom validation rules.
All helper functions use the factory pattern with authentication context:
- Centralizes authentication logic
- Makes handlers testable
- Simplifies function signatures
- Enables dependency injection
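The pattern itself is plain closures: the factory binds the context once and returns a narrow, single-purpose function. The sketch below uses a hypothetical `make_downloader` with an injectable `transport` (not the toolkit's actual signature) to show why this makes handlers testable:

```python
from typing import Callable


def make_downloader(
    user_id: str,
    company_id: str,
    transport: Callable[[dict], bytes],
) -> Callable[[str], bytes]:
    """Bind the authentication context once; return a one-argument function."""

    def downloader(file_id: str) -> bytes:
        # Auth context is captured in the closure, so handlers never see it
        return transport(
            {"user_id": user_id, "company_id": company_id, "file_id": file_id}
        )

    return downloader


# In tests, inject a fake transport instead of a real API client:
def fake_transport(params: dict) -> bytes:
    return f"bytes-for-{params['file_id']}".encode()


downloader = make_downloader("user_1", "company_1", fake_transport)
```

Handlers receive only `Callable[[str], bytes]`, so swapping the real API client for `fake_transport` in tests requires no changes to handler code.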
Key Takeaway: The framework provides the infrastructure (events, table operations, content management) while you implement the business logic (AI agents, validation rules, custom workflows).
```python
import logging
from pathlib import Path

from unique_sdk.api_resources._agentic_table import ActivityStatus

from docs.examples_from_docs.agentic_table_example_artifact_generated_event_handler import (
    handle_artifact_generated,
)
from docs.examples_from_docs.agentic_table_example_cell_updated_event_handler import (
    handle_cell_updated,
)
from docs.examples_from_docs.agentic_table_example_metadata_added_event_handler import (
    handle_metadata_added,
)
from docs.examples_from_docs.agentic_table_example_sheet_created_event_handler import (
    handle_sheet_created,
)
from docs.examples_from_docs.agentic_table_helper_functions import (
    get_augmented_text_with_references_fn,
    get_downloader_fn,
    get_file_content_getter_fn,
    get_uploader_fn,
)
from unique_toolkit.agentic_table.schemas import (
    MagicTableAction,
    MagicTableEvent,
    MagicTableEventTypes,
)
from unique_toolkit.agentic_table.service import AgenticTableService
from unique_toolkit.app.fast_api_factory import build_unique_custom_app
from unique_toolkit.app.unique_settings import UniqueSettings

# Configure logging at module level so it works regardless of how the app is started
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
)
logger = logging.getLogger(__name__)


# Default event handler
async def agentic_table_event_handler(event: MagicTableEvent) -> int:
    """
    Default event handler that serves as a controller for the Agentic Table.
    """
    # Initialize Agentic Table Service to interact with the agentic table
    at_service = AgenticTableService(
        user_id=event.user_id,
        company_id=event.company_id,
        table_id=event.payload.table_id,
    )

    # Initialize the configuration from the event for your custom application
    # config = YourConfigClass.model_validate(event.payload.configuration)
    # You can now use this configuration to initialize any other services
    # needed for your application.

    # Register the agent.
    # This locks the table from any modifications until the agent is completed.
    # Once registered, the sheet status is shown as "Updating".
    await at_service.register_agent()

    try:
        # We use if-else statements here instead of match/case because it enables
        # more precise typing and type narrowing for the payload based on the
        # action (which functions as a discriminator).
        # Depending on the event/action received, run the corresponding functionality.
        if event.payload.action == MagicTableAction.SHEET_CREATED:
            # This event is triggered when a new sheet is created.
            # You can use this for housekeeping tasks like displaying the table headers, etc.
            #
            # Payload type (MagicTableSheetCreatedPayload):
            logger.info(f"Sheet created: {event.payload.sheet_name}")

            # Deregister the agent to unblock upcoming events.
            # In a standard workflow the user will select sources and a question file,
            # which triggers the ADD_META_DATA event. That event can arrive quite
            # quickly after SHEET_CREATED, so we deregister early here.
            await at_service.deregister_agent()

            # Handle the sheet creation event (usually to set up headers and column styles)
            await handle_sheet_created(at_service, event.payload)

        elif event.payload.action == MagicTableAction.ADD_META_DATA:
            # This event is triggered when a new question, question file, or source file is added.
            #
            # Payload type (MagicTableAddMetadataPayload):
            logger.info(f"Metadata added: {event.payload.metadata}")
            downloader_fn = get_downloader_fn(
                event.user_id, event.company_id, event.payload.chat_id
            )
            file_content_getter_fn = get_file_content_getter_fn(
                event.user_id, event.company_id, event.payload.chat_id
            )
            augmented_text_with_references_fn = get_augmented_text_with_references_fn(
                event.user_id,
                event.company_id,
                event.payload.chat_id,
                event.payload.assistant_id,
            )
            await handle_metadata_added(
                at_service,
                event.payload,
                downloader_fn,
                file_content_getter_fn,
                augmented_text_with_references_fn,
            )

        elif event.payload.action == MagicTableAction.UPDATE_CELL:
            # This event is triggered when a cell is updated.
            #
            # Payload type (MagicTableUpdateCellPayload):
            logger.info(
                f"Cell updated: {event.payload.column_order}, "
                f"{event.payload.row_order}, {event.payload.data}"
            )
            await handle_cell_updated(at_service, event.payload)

        elif event.payload.action == MagicTableAction.GENERATE_ARTIFACT:
            # This event is triggered when a report generation button is clicked.
            #
            # Payload type (MagicTableGenerateArtifactPayload):
            logger.info(f"Artifact generated: {event.payload.data}")
            uploader_fn = get_uploader_fn(
                event.user_id, event.company_id, event.payload.chat_id
            )
            await handle_artifact_generated(at_service, event.payload, uploader_fn)

        elif event.payload.action == MagicTableAction.SHEET_COMPLETED:
            # This event is triggered when the sheet is marked as completed.
            #
            # Payload type (MagicTableSheetCompletedPayload):
            logger.info(f"Sheet completed: {event.payload.sheet_name}")
            # Here you can call a handler function that will handle the sheet completion event.

        elif event.payload.action == MagicTableAction.LIBRARY_SHEET_ROW_VERIFIED:
            # This event is triggered when a row in a "Library" sheet is verified.
            # This is a special sheet type and is only relevant within the context
            # of the RFP Agent. You can ignore this event/block if you are not
            # working with the library feature.
            #
            # Payload type (MagicTableLibrarySheetRowVerifiedPayload):
            logger.info(
                f"Library sheet row verified: {event.payload.metadata.row_order}"
            )
            # Here you can call a handler function that will handle the library
            # sheet row verified event.

        else:
            logger.error(f"Unknown action: {event.payload.action}")
            await at_service.set_activity(
                activity=event.payload.action,
                status=ActivityStatus.FAILED,
                text=f"Unknown action: {event.payload.action}",
            )
            raise Exception(f"Unknown action: {event.payload.action}")

        return 0  # Success
    except Exception as e:
        logger.error(f"Error in agentic table event handler: {e}")
        return 1  # Failure
    finally:
        # De-register the agent
        await at_service.deregister_agent()


# Create the default app instance at module level.
# This MUST be at module level so uvicorn can find it when importing.
_SETTINGS = UniqueSettings.from_env(env_file=Path(__file__).parent / "unique.env")
_SETTINGS.init_sdk()

# Create app using factory
_MINIMAL_APP = build_unique_custom_app(
    title="Unique Minimal Agentic Table App",
    settings=_SETTINGS,
    event_handler=agentic_table_event_handler,
    event_constructor=MagicTableEvent,
    subscribed_event_names=[ev.value for ev in MagicTableEventTypes],
)

if __name__ == "__main__":
    import uvicorn

    # Enable debug logging
    logging.basicConfig(
        level=logging.DEBUG,
        format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
    )

    # Run the server
    uvicorn.run(
        "fastapi_app_agentic_table:_MINIMAL_APP",
        host="0.0.0.0",
        port=5001,
        reload=True,
        log_level="debug",
    )
```
```python
from enum import StrEnum

from pydantic import BaseModel
from unique_sdk import CellRendererTypes


class ExampleColumnNames(StrEnum):
    QUESTION = "Question"
    SECTION = "Section"
    ANSWER = "Answer"
    CRITICAL_CONSISTENCY = "Critical Consistency"
    STATUS = "Status"
    REVIEWER = "Reviewer"


class ColumnDefinition(BaseModel):
    """
    Defines a single table column's structure and styling.

    Attributes:
        order: Column position (0-indexed)
        name: Column header text
        width: Column width in pixels
        renderer: Optional cell renderer type (dropdown, checkbox, etc.)
        editable: Whether the column is editable
    """

    order: int
    name: ExampleColumnNames
    width: int
    renderer: CellRendererTypes | None = None
    editable: bool = True


class ColumnDefinitions(BaseModel):
    """
    Container for all column definitions in the table.

    Provides helper methods to access columns by name.
    """

    columns: list[ColumnDefinition]

    @property
    def column_map(self) -> dict[str, ColumnDefinition]:
        """Map of column names to their definitions."""
        return {column.name: column for column in self.columns}

    def get_column_by_name(self, name: str) -> ColumnDefinition:
        """Get column definition by name."""
        return self.column_map[name]

    def get_column_name_by_order(self, order: int) -> ExampleColumnNames:
        """Get column name by order."""
        return self.columns[order].name

    def get_column_names(self) -> list[str]:
        """Get list of all column names."""
        return list(self.column_map.keys())


example_configuration = {
    "columns": [
        {
            "order": 0,
            "name": ExampleColumnNames.QUESTION,
            "width": 300,
            "renderer": None,
            "editable": False,
        },
        {
            "order": 1,
            "name": ExampleColumnNames.SECTION,
            "width": 150,
            "renderer": None,
            "editable": False,
        },
        {
            "order": 2,
            "name": ExampleColumnNames.ANSWER,
            "width": 400,
            "renderer": CellRendererTypes.SELECTABLE_CELL_RENDERER,
            "editable": True,
        },
        {
            "order": 3,
            "name": ExampleColumnNames.CRITICAL_CONSISTENCY,
            "width": 200,
            "renderer": None,
            "editable": True,
        },
        {
            "order": 4,
            "name": ExampleColumnNames.STATUS,
            "width": 150,
            "renderer": CellRendererTypes.REVIEW_STATUS_DROPDOWN,
            "editable": True,
        },
        {
            "order": 5,
            "name": ExampleColumnNames.REVIEWER,
            "width": 150,
            "renderer": CellRendererTypes.COLLABORATOR_DROPDOWN,
            "editable": True,
        },
    ]
}

example_column_definitions = ColumnDefinitions.model_validate(example_configuration)
```
```python
from logging import getLogger

from unique_sdk.api_resources._agentic_table import ActivityStatus
from unique_toolkit.agentic_table.schemas import MagicTableSheetCreatedPayload
from unique_toolkit.agentic_table.service import AgenticTableService

from .agentic_table_example_column_definition import example_column_definitions

logger = getLogger(__name__)


async def handle_sheet_created(
    at_service: AgenticTableService, payload: MagicTableSheetCreatedPayload
) -> None:
    """
    Example handler for the sheet creation event.

    This demo shows how to initialize a new table by:
    - Setting column headers in row 0
    - Applying column styles (width, renderer type, editability)

    The table is ready to receive data after initialization.

    Args:
        at_service: Service instance for table operations
        payload: Event payload with table_id and sheet_name
    """
    logger.info(f"Initializing Source of Wealth table: {payload.sheet_name}")

    # Set activity status
    await at_service.set_activity(
        text="Initializing table schema...",
        activity=payload.action,
        status=ActivityStatus.IN_PROGRESS,
    )

    # Set headers in row 0
    for col_def in example_column_definitions.columns:
        await at_service.set_cell(row=0, column=col_def.order, text=col_def.name.value)
    logger.info(f"Set {len(example_column_definitions.columns)} column headers")

    # Apply column styles
    for col_def in example_column_definitions.columns:
        await at_service.set_column_style(
            column=col_def.order,
            width=col_def.width,
            cell_renderer=col_def.renderer,
            editable=col_def.editable,
        )
    logger.info("Applied column styles with all CellRendererTypes")

    # Set completion status
    await at_service.set_activity(
        text="Table schema initialized successfully",
        activity=payload.action,
        status=ActivityStatus.COMPLETED,
    )
```
importiofromloggingimportgetLoggerfromtypingimportCallableimportpandasaspdfromunique_sdk.api_resources._agentic_tableimportActivityStatusfromunique_toolkit.agentic_table.schemasimport(MagicTableAddMetadataPayload,MagicTableCell,)fromunique_toolkit.agentic_table.serviceimportAgenticTableServicefromunique_toolkit.contentimportContent,ContentChunkfrom.agentic_table_example_column_definitionimport(ExampleColumnNames,example_column_definitions,)from.agentic_table_helper_functionsimportContentRegistry,create_id_maplogger=getLogger(__name__)asyncdefhandle_question_files(at_service:AgenticTableService,payload:MagicTableAddMetadataPayload,downloader_fn:Callable[[str],bytes],)->int:""" Handle question files by downloading and parsing CSV to populate the table. Args: at_service: Service instance for table operations payload: Event payload with metadata and file IDs downloader_fn: Function to download file contents Returns: Number of rows added to the table Raises: Exception: If CSV processing fails """# Check if question files were providedifnotpayload.metadata.question_file_ids:logger.warning("No question files provided in metadata")return0awaitat_service.set_activity(text="Downloading CSV file...",activity=payload.action,status=ActivityStatus.IN_PROGRESS,)# Get the first question file (CSV)file_id=payload.metadata.question_file_ids[0]logger.info(f"Downloading file: {file_id}")# Download file contentfile_content=downloader_fn(file_id)awaitat_service.set_activity(text="Parsing CSV file...",activity=payload.action,status=ActivityStatus.IN_PROGRESS,)file_content_stream=io.BytesIO(file_content)# Parse CSV filedf=pd.read_csv(file_content_stream)df=df.fillna("")# Convert NA values to empty stringslogger.info(f"Parsed CSV with {len(df)} rows and {len(df.columns)} columns")logger.info(df.head())awaitat_service.set_activity(text=f"Populating table with {len(df)} rows...",activity=payload.action,status=ActivityStatus.IN_PROGRESS,)# Create batch 
cellscells=[]forrow_idx,row_dataindf.iterrows():forcol_definexample_column_definitions.columns:cell_value=row_data.get(col_def.name.value,"")ifnotcell_value:continuecells.append(MagicTableCell(row_order=int(row_idx)# type: ignore[arg-type]+1,# +1 for header rowcolumn_order=col_def.order,text=str(cell_value),sheet_id=payload.table_id,))logger.info(f"Created {len(cells)} cells for batch upload")# Batch upload all cellsawaitat_service.set_multiple_cells(cells=cells)logger.info(f"Successfully populated table with {len(df)} rows")returnlen(df)asyncdefhandle_source_files(at_service:AgenticTableService,payload:MagicTableAddMetadataPayload,file_content_getter_fn:Callable[[str],Content|None],augmented_text_with_references_fn:Callable[[str,dict[str,Content|ContentChunk],str,str],str],)->int:""" Handle source files by retrieving content and organizing by metadata. This handler demonstrates two key framework capabilities: 1. Retrieving file content and accessing metadata 2. Creating clickable references that link table cells to source documents The example shows: - How to fetch Content objects for uploaded files - How to use ContentRegistry to group files by metadata keys - How to generate text with inline citations and convert them to clickable references - How to populate table cells with referenced content Args: at_service: Service instance for table operations payload: Event payload with metadata and file IDs file_content_getter_fn: Function to retrieve file content objects augmented_text_with_references_fn: Function to convert citations to references Returns: Number of content items processed """# Check if source files were providedifnotpayload.metadata.source_file_ids:logger.warning("No source files provided in metadata")return0awaitat_service.set_activity(text="Processing source files metadata...",activity=payload.action,status=ActivityStatus.IN_PROGRESS,)cells_to_update:list[MagicTableCell]=[]num_rows=awaitat_service.get_num_rows()# STEP 1: Retrieve Content objects for 
all source files# Each Content object contains:# - content.id: Unique identifier# - content.metadata: Custom key-value pairs (e.g., {"section": "Finance"})# - content.title: File name or title# - content.text: Extracted text content# - content.chunks: List of ContentChunk objects for chunked documentsall_contents=[]forfile_idinpayload.metadata.source_file_ids:content=file_content_getter_fn(file_id)ifcontentisNone:logger.warning(f"No content found for file: {file_id}")continueifcontent.metadataisNone:logger.warning(f"No metadata found for file: {file_id}")continueall_contents.append(content)# STEP 2: Organize content by metadata keys# This example assumes source files have metadata like:# {"Team": "true"}, {"Finance": "true"}, {"Technical": "true"}, etc.sections_of_interest=["Team","Finance","Technical","Planning",]# ContentRegistry groups files by metadata keys# You can later retrieve all files tagged with "Finance", "Team", etc.content_registry=ContentRegistry(keys=sections_of_interest,contents=all_contents)# STEP 3: Process each row in the table# This demonstrates row-by-row processing where each row might need different source filesforrow_indexinrange(1,num_rows+1):# Retrieve the current row to check what data it hasrow_cells=awaitat_service.get_sheet(start_row=row_index,end_row=row_index+1)retrieved_cells:dict[ExampleColumnNames,MagicTableCell]={example_column_definitions.get_column_name_by_order(cell.column_order):cellforcellinrow_cells.magic_table_cells}logger.info(f"Retrieved cells: {retrieved_cells}")answer_cell=retrieved_cells.get(ExampleColumnNames.ANSWER)# Check if the answer cell exists. 
```python
        # This means that the answer was already generated.
        if answer_cell is not None:
            logger.info(f"Answer found for row {row_index}: {answer_cell.text}")
        else:
            # Get the section for this row (e.g., "Finance", "Team")
            section_name = retrieved_cells.get(ExampleColumnNames.SECTION)
            if section_name is None:
                logger.warning(f"No section found for row {row_index}")
                continue

            # STEP 4: Retrieve relevant content based on row metadata
            # Use the ContentRegistry to get all files tagged with this section
            relevant_contents = content_registry.get_contents_by_metadata_key(
                section_name.text
            )
            if len(relevant_contents) == 0:
                logger.warning(f"No contents found for section '{section_name.text}'")
                continue
            logger.info(
                f"Found {len(relevant_contents)} content items "
                f"for section '{section_name.text}'"
            )

            # STEP 5: Create a reference registry for citation mapping
            # This creates temporary IDs like "chunk_a1b2c3d4" for each content item
            # These IDs will be used in inline citations: [chunk_a1b2c3d4]
            chunk_prefix = "chunk"
            reference_registry = create_id_map(relevant_contents, chunk_prefix)
            logger.info(f"Reference registry: {reference_registry.keys()}")

            # STEP 6: Generate text with inline citations
            # In a real application, this would be AI-generated text with citations.
            # Here we simulate it by listing the content titles with citation markers.
            simulated_text_generation_with_references = (
                "The following are the contents of the section: \n"
            )
            for chunk_id, content in reference_registry.items():
                # Add inline citation in format [chunk_xxx]
                simulated_text_generation_with_references += (
                    f"{content.title} [{chunk_id}]\n"
                )

            # STEP 7: Convert inline citations to clickable references
            # This transforms [chunk_a1b2c3d4] into numbered references like [1], [2]
            # The frontend will render these as clickable links to the source files
            augmented_text = augmented_text_with_references_fn(
                simulated_text_generation_with_references,
                reference_registry,  # type: ignore[arg-type]
                chunk_prefix,
                r"\[chunk_([a-zA-Z0-9\-]+)\]",  # Citation pattern to match
            )

            # STEP 8: Update the table cell with referenced text
            cells_to_update.append(
                MagicTableCell(
                    row_order=row_index,
                    column_order=example_column_definitions.get_column_by_name(
                        ExampleColumnNames.ANSWER
                    ).order,
                    text=augmented_text,
                    sheet_id=payload.table_id,
                )
            )

    # Apply any cell updates
    if cells_to_update:
        await at_service.set_multiple_cells(cells=cells_to_update)

    await at_service.set_activity(
        text=f"Successfully processed {len(all_contents)} source files",
        activity=payload.action,
        status=ActivityStatus.COMPLETED,
    )
    return len(all_contents)


async def handle_metadata_added(
    at_service: AgenticTableService,
    payload: MagicTableAddMetadataPayload,
    downloader_fn: Callable[[str], bytes],
    file_content_getter_fn: Callable[[str], Content | None],
    augmented_text_with_references_fn: Callable[
        [str, dict[str, Content | ContentChunk], str, str], str
    ],
) -> None:
    """
    Example handler for the metadata addition event.

    This demo shows how to populate a table from uploaded files:
    - Process question files: Downloads CSV files and populates the table
    - Process source files: Retrieves content and groups by metadata

    Args:
        at_service: Service instance for table operations
        payload: Event payload with metadata and file IDs
        downloader_fn: Function to download file contents
        file_content_getter_fn: Function to retrieve file content objects
        augmented_text_with_references_fn: Function that converts inline
            citations into numbered, clickable references
    """
    logger.info(f"Processing metadata for sheet: {payload.sheet_name}")
    try:
        # Handle question files (CSV processing)
        num_question_rows = await handle_question_files(
            at_service=at_service,
            payload=payload,
            downloader_fn=downloader_fn,
        )
        # Handle source files (content and metadata processing)
        num_source_rows = await handle_source_files(
            at_service=at_service,
            payload=payload,
            file_content_getter_fn=file_content_getter_fn,
            augmented_text_with_references_fn=augmented_text_with_references_fn,
        )
        # This is different from the LogEntry which shows in the cell history
        await at_service.set_activity(
            text=(
                f"Successfully loaded {num_question_rows} rows from CSV "
                f"and {num_source_rows} source file metadata rows"
            ),
            activity=payload.action,
            status=ActivityStatus.COMPLETED,
        )
    except Exception as e:
        logger.exception(f"Error processing files: {e}")
        await at_service.set_activity(
            text=f"Failed to process files: {str(e)}",
            activity=payload.action,
            status=ActivityStatus.FAILED,
        )
        raise
```
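The citation post-processing in STEP 7 is worth understanding on its own. Here is a minimal, self-contained sketch of what a function like `augmented_text_with_references_fn` does; `numbered_references` is a hypothetical simplification (the toolkit's real helper also wires each number to a clickable source link, which is omitted here):

```python
import re


def numbered_references(
    text: str,
    reference_registry: dict[str, str],
    chunk_prefix: str = "chunk",
) -> str:
    """Replace inline [chunk_xxx] citations with numbered references like [1].

    Simplified stand-in for the toolkit helper: only the numbering is shown.
    """
    # Citation numbers follow the insertion order of the registry keys
    numbering = {chunk_id: i + 1 for i, chunk_id in enumerate(reference_registry)}
    pattern = rf"\[({chunk_prefix}_[a-zA-Z0-9\-]+)\]"

    def _replace(match: re.Match) -> str:
        number = numbering.get(match.group(1))
        # Leave unknown citation IDs untouched rather than dropping them
        return f"[{number}]" if number is not None else match.group(0)

    return re.sub(pattern, _replace, text)
```

For example, `numbered_references("See [chunk_a1] and [chunk_b2]", {"chunk_a1": "q3.pdf", "chunk_b2": "team.pdf"})` yields `"See [1] and [2]"`, while a citation ID missing from the registry is left as-is.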
```python
from datetime import datetime
from logging import getLogger

from unique_sdk import RowVerificationStatus
from unique_toolkit.agentic_table.schemas import (
    LogEntry,
    MagicTableUpdateCellPayload,
)
from unique_toolkit.agentic_table.service import AgenticTableService
from unique_toolkit.language_model.schemas import LanguageModelMessageRole

from .agentic_table_example_column_definition import (
    ExampleColumnNames,
    example_column_definitions,
)

logger = getLogger(__name__)


async def handle_cell_updated(
    at_service: AgenticTableService,
    payload: MagicTableUpdateCellPayload,
) -> None:
    """
    Example handler for the cell update event.

    This demo shows a simple workflow automation: when the Critical
    Consistency column changes to "Consistent", it adds a log entry and
    updates the row verification status.

    Args:
        at_service: Service instance for table operations
        payload: Event payload with row, column, and new value
    """
    logger.info(
        f"Cell updated at row {payload.row_order}, "
        f"column {payload.column_order}: {payload.data}"
    )
    critical_consistency_col = example_column_definitions.get_column_by_name(
        ExampleColumnNames.CRITICAL_CONSISTENCY
    )

    # Check if the Critical Consistency column was updated
    if payload.column_order == critical_consistency_col.order:
        status_value = payload.data.strip()
        logger.info(f"Status changed to: {status_value}")

        # Lock the row once the status becomes "Consistent"
        if status_value.lower() == "consistent":
            logger.info(
                f"Locking row {payload.row_order} due to status: {status_value}"
            )
            # Note: Column-level locking affects all rows. In a production system,
            # you might track locked rows in metadata and validate edits server-side.
            # Here we demonstrate the pattern with a log entry.

            # Add log entry to document the status change and locking
            log_entries = [
                LogEntry(
                    text=(
                        f"Row {payload.row_order} marked as {status_value}. "
                        "Further edits should be restricted."
                    ),
                    created_at=datetime.now().isoformat(),
                    actor_type=LanguageModelMessageRole.ASSISTANT,
                )
            ]
            await at_service.set_cell(
                row=payload.row_order,
                column=payload.column_order,
                text=status_value,
                log_entries=log_entries,
            )
            # Update row verification status
            await at_service.update_row_verification_status(
                row_orders=[payload.row_order],
                status=RowVerificationStatus.VERIFIED,
            )
            logger.info(f"Row {payload.row_order} verified and logged")
```
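The guard in `handle_cell_updated` can be factored into a pure predicate, which makes the business rule easy to unit-test without a running `AgenticTableService`. The function name and parameters below are illustrative, not part of the toolkit:

```python
def should_verify_row(
    column_order: int,
    new_value: str,
    consistency_column_order: int,
) -> bool:
    """Return True when an edit sets the Critical Consistency column to
    "Consistent" (whitespace- and case-insensitive), i.e. when the handler
    should lock the row and mark it verified."""
    return (
        column_order == consistency_column_order
        and new_value.strip().lower() == "consistent"
    )
```

Isolating the condition like this keeps the async handler a thin shell of service calls, so the rule itself can be covered by fast, synchronous tests.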
```python
from datetime import datetime
from logging import getLogger
from typing import Callable

from unique_sdk.api_resources._agentic_table import ActivityStatus
from unique_toolkit._common.docx_generator import (
    DocxGeneratorConfig,
    DocxGeneratorService,
)
from unique_toolkit.agentic_table.schemas import (
    MagicTableGenerateArtifactPayload,
    MagicTableSheet,
)
from unique_toolkit.agentic_table.service import AgenticTableService
from unique_toolkit.content.schemas import Content

from .agentic_table_example_column_definition import (
    ExampleColumnNames,
    example_column_definitions,
)

logger = getLogger(__name__)


async def handle_artifact_generated(
    at_service: AgenticTableService,
    payload: MagicTableGenerateArtifactPayload,
    uploader_fn: Callable[[bytes, str, str], Content],
) -> None:
    """
    Example handler for the artifact generation event.

    This demo shows how to export table data as a Word document:
    - Fetches all table data
    - Organizes it by sections
    - Generates a markdown report
    - Converts to DOCX and uploads it
    - Links the artifact back to the table

    Args:
        at_service: Service instance for table operations
        payload: Event payload with artifact type
        uploader_fn: Function to upload the generated file
    """
    logger.info(f"Generating artifact of type: {payload.data.artifact_type}")
    await at_service.set_activity(
        text="Starting report generation...",
        activity=payload.action,
        status=ActivityStatus.IN_PROGRESS,
    )
    try:
        # Read and organize data
        sheet = await at_service.get_sheet(start_row=0, end_row=None)
        rows_data = organize_sheet_data(sheet)

        # Build markdown report
        markdown = build_markdown_report(rows_data)

        # Generate DOCX
        await at_service.set_activity(
            text="Generating document...",
            activity=payload.action,
            status=ActivityStatus.IN_PROGRESS,
        )
        docx_generator = DocxGeneratorService(
            config=DocxGeneratorConfig(
                template_content_id="content-template-generic",
            )
        )
        content_fields = docx_generator.parse_markdown_to_list_content_fields(markdown)
        docx_file = docx_generator.generate_from_template(content_fields)
        if not docx_file:
            raise Exception("Failed to generate DOCX file")

        # Upload to chat
        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        filename = f"Table_Report_{timestamp}.docx"
        content = uploader_fn(
            docx_file,
            "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
            filename,
        )

        # Set artifact reference
        await at_service.set_artifact(
            artifact_type=payload.data.artifact_type,
            content_id=content.id,
            mime_type="application/vnd.openxmlformats-officedocument.wordprocessingml.document",
            name=filename,
        )
        await at_service.set_activity(
            text=f"Report generated successfully: {filename}",
            activity=payload.action,
            status=ActivityStatus.COMPLETED,
        )
    except Exception as e:
        logger.error(f"Error generating artifact: {e}")
        await at_service.set_activity(
            text=f"Report generation failed: {str(e)}",
            activity=payload.action,
            status=ActivityStatus.FAILED,
        )
        raise


def organize_sheet_data(sheet: MagicTableSheet) -> dict[int, dict[int, str]]:
    """
    Convert flat cell list to nested dictionary structure.

    Returns:
        Dictionary with structure {row_order: {column_order: cell_text}}
    """
    rows_data: dict[int, dict[int, str]] = {}
    for cell in sheet.magic_table_cells:
        if cell.row_order not in rows_data:
            rows_data[cell.row_order] = {}
        rows_data[cell.row_order][cell.column_order] = cell.text
    return rows_data


def build_markdown_report(rows_data: dict[int, dict[int, str]]) -> str:
    """
    Build a markdown report grouped by sections.

    Returns:
        Markdown string with sections and question details
    """
    markdown_lines = [
        "# Table Report",
        "",
        f"**Generated:** {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}",
        "",
        "---",
        "",
    ]

    # Get column indices
    question_col = example_column_definitions.get_column_by_name(
        ExampleColumnNames.QUESTION
    ).order
    section_col = example_column_definitions.get_column_by_name(
        ExampleColumnNames.SECTION
    ).order
    answer_col = example_column_definitions.get_column_by_name(
        ExampleColumnNames.ANSWER
    ).order
    consistency_col = example_column_definitions.get_column_by_name(
        ExampleColumnNames.CRITICAL_CONSISTENCY
    ).order
    status_col = example_column_definitions.get_column_by_name(
        ExampleColumnNames.STATUS
    ).order
    reviewer_col = example_column_definitions.get_column_by_name(
        ExampleColumnNames.REVIEWER
    ).order

    # Get data rows (excluding header row 0)
    data_rows = {k: v for k, v in rows_data.items() if k > 0}

    # Group by section
    sections: dict[str, list[dict[int, str]]] = {}
    for row_data in data_rows.values():
        section = row_data.get(section_col, "General")
        if section not in sections:
            sections[section] = []
        sections[section].append(row_data)

    # Add each section
    for section_name, section_rows in sections.items():
        markdown_lines.extend([f"## {section_name}", ""])
        for row_data in section_rows:
            question = row_data.get(question_col, "N/A")
            answer = row_data.get(answer_col, "N/A")
            consistency = row_data.get(consistency_col, "N/A")
            status = row_data.get(status_col, "N/A")
            reviewer = row_data.get(reviewer_col, "Unassigned")
            markdown_lines.extend(
                [
                    f"**Question:** {question}",
                    "",
                    f"**Answer:** {answer}",
                    "",
                    f"**Consistency:** {consistency}",
                    "",
                    f"**Status:** {status}",
                    "",
                    f"**Reviewer:** {reviewer}",
                    "",
                    "---",
                    "",
                ]
            )
    return "\n".join(markdown_lines)
```
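The reshaping in `organize_sheet_data` and the section grouping in `build_markdown_report` can be exercised in isolation by substituting plain `(row_order, column_order, text)` tuples for `MagicTableCell` objects. The sketch below assumes nothing from the toolkit; `organize_cells` and `group_by_section` are hypothetical test doubles for the two helpers above:

```python
from collections import defaultdict

Row = dict[int, str]


def organize_cells(cells: list[tuple[int, int, str]]) -> dict[int, Row]:
    """Same reshaping as organize_sheet_data, but on plain tuples."""
    rows: dict[int, Row] = defaultdict(dict)
    for row_order, column_order, text in cells:
        rows[row_order][column_order] = text
    return dict(rows)


def group_by_section(rows: dict[int, Row], section_col: int) -> dict[str, list[Row]]:
    """Group data rows by their section column, skipping the header row."""
    sections: dict[str, list[Row]] = {}
    for row_order in sorted(rows):
        if row_order == 0:  # row 0 holds the column headers
            continue
        row = rows[row_order]
        sections.setdefault(row.get(section_col, "General"), []).append(row)
    return sections
```

Given cells for two "Finance" rows and one "Team" row, `group_by_section(organize_cells(cells), section_col=1)` returns a dict with keys `"Finance"` and `"Team"`, which is exactly the grouping the report builder iterates over when emitting `## <section>` headings.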