
Content Module

Deprecated

This module is deprecated. Use KnowledgeBaseService instead.

The Content module provides low-level functionality for interacting with content stored in the knowledge base.

Overview

The unique_toolkit.content module encompasses all content-related functionality. Content is any textual data stored in the knowledge base on the Unique platform. During ingestion, content is parsed, split into chunks, indexed, and stored in the database.
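As a rough illustration of the chunking step, a minimal overlapping splitter might look like the sketch below. This is illustrative only; the platform's actual parsing and chunking logic is more involved and not shown here.

```python
def split_into_chunks(text: str, chunk_size: int = 200, overlap: int = 20) -> list[str]:
    """Split text into overlapping chunks (illustrative sketch, not the platform's implementation)."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks
```

Overlap between consecutive chunks helps a similarity search match passages that would otherwise be cut at a chunk boundary.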

Note: This module is deprecated. Please use KnowledgeBaseService from unique_toolkit.services.knowledge_base for all knowledge base operations.

Components

Service

unique_toolkit.content.service.ContentService

Provides methods for searching, downloading and uploading content in the knowledge base.

Source code in unique_toolkit/unique_toolkit/content/service.py
@deprecated("Use KnowledgeBaseService instead")
class ContentService:
    """
    Provides methods for searching, downloading and uploading content in the knowledge base.
    """

    @deprecated(
        "Use __init__ with company_id, user_id and chat_id instead or use the classmethod `from_event`"
    )
    @overload
    def __init__(self, event: Event | ChatEvent | BaseEvent): ...

    """
        Initialize the ContentService with an event (deprecated)
    """

    @overload
    def __init__(
        self,
        *,
        company_id: str,
        user_id: str,
        chat_id: str | None = None,
        metadata_filter: dict | None = None,
    ): ...

    """
        Initialize the ContentService with a company_id, user_id and chat_id and metadata_filter.
    """

    def __init__(
        self,
        event: Event | BaseEvent | None = None,
        company_id: str | None = None,
        user_id: str | None = None,
        chat_id: str | None = None,
        metadata_filter: dict | None = None,
    ):
        """
        Initialize the ContentService with a company_id, user_id and chat_id.
        """

        self._event = event  # Changed to protected attribute
        self._metadata_filter = None
        if event:
            self._company_id: str = event.company_id
            self._user_id: str = event.user_id
            if isinstance(event, (ChatEvent, Event)):
                self._metadata_filter = event.payload.metadata_filter
                self._chat_id: str | None = event.payload.chat_id
        else:
            [company_id, user_id] = validate_required_values([company_id, user_id])
            self._company_id: str = company_id
            self._user_id: str = user_id
            self._chat_id: str | None = chat_id
            self._metadata_filter = metadata_filter

    @classmethod
    def from_event(cls, event: Event | ChatEvent | BaseEvent):
        """Initialize the ContentService with an event.

        When the event has a correlation (e.g. subagent run), delegates to
        from_correlation so content operations are scoped to the parent chat
        and files uploaded in the primary session are accessible. Otherwise
        uses the event's chat_id.

        Args:
            event: The event (e.g. from the webhook payload).

        Returns:
            ContentService: Instance scoped to the event's chat or, when
                correlation is present, the parent chat.
        """
        if (
            isinstance(event, (ChatEvent, Event))
            and getattr(event.payload, "correlation", None) is not None
        ):
            if event.payload.correlation is None:
                raise ValueError(
                    "correlation attribute is not defined in the event payload"
                )
            return cls.from_correlation(
                event.company_id,
                event.user_id,
                event.payload.correlation,
                metadata_filter=getattr(event.payload, "metadata_filter", None),
            )
        chat_id = None
        metadata_filter = None

        if isinstance(event, (ChatEvent | Event)):
            chat_id = event.payload.chat_id
            metadata_filter = event.payload.metadata_filter

        return cls(
            company_id=event.company_id,
            user_id=event.user_id,
            chat_id=chat_id,
            metadata_filter=metadata_filter,
        )

    @classmethod
    def from_correlation(
        cls,
        company_id: str,
        user_id: str,
        correlation: Correlation,
        metadata_filter: dict | None = None,
    ):
        """Initialize the ContentService from a correlation (e.g. when running as a subagent).

        Content operations (search, download, upload context) are scoped to
        the parent chat so files uploaded in the primary session are
        accessible.

        Args:
            company_id: Company id (from event).
            user_id: User id (from event).
            correlation: Parent chat/message/assistant ids.
            metadata_filter: Optional metadata filter (e.g. from event.payload).

        Returns:
            ContentService: Instance with chat_id set to correlation.parent_chat_id.
        """
        return cls(
            company_id=company_id,
            user_id=user_id,
            chat_id=correlation.parent_chat_id,
            metadata_filter=metadata_filter,
        )

    @classmethod
    def from_settings(
        cls,
        settings: UniqueSettings | str | None = None,
        metadata_filter: dict | None = None,
    ):
        """
        Initialize the ContentService with a settings object and metadata filter.
        """

        if settings is None:
            settings = UniqueSettings.from_env_auto_with_sdk_init()
        elif isinstance(settings, str):
            settings = UniqueSettings.from_env_auto_with_sdk_init(filename=settings)

        return cls(
            company_id=settings.auth.company_id.get_secret_value(),
            user_id=settings.auth.user_id.get_secret_value(),
            metadata_filter=metadata_filter,
        )

    @property
    @deprecated(
        "The event property is deprecated and will be removed in a future version."
    )
    def event(self) -> Event | BaseEvent | None:
        """
        Get the event object (deprecated).

        Returns:
            Event | BaseEvent | None: The event object.
        """
        return self._event

    @property
    @deprecated(
        "The company_id property is deprecated and will be removed in a future version."
    )
    def company_id(self) -> str | None:
        """
        Get the company identifier (deprecated).

        Returns:
            str | None: The company identifier.
        """
        return self._company_id

    @company_id.setter
    @deprecated(
        "The company_id setter is deprecated and will be removed in a future version."
    )
    def company_id(self, value: str) -> None:
        """
        Set the company identifier (deprecated).

        Args:
            value (str | None): The company identifier.
        """
        self._company_id = value

    @property
    @deprecated(
        "The user_id property is deprecated and will be removed in a future version."
    )
    def user_id(self) -> str | None:
        """
        Get the user identifier (deprecated).

        Returns:
            str | None: The user identifier.
        """
        return self._user_id

    @user_id.setter
    @deprecated(
        "The user_id setter is deprecated and will be removed in a future version."
    )
    def user_id(self, value: str) -> None:
        """
        Set the user identifier (deprecated).

        Args:
            value (str | None): The user identifier.
        """
        self._user_id = value

    @property
    @deprecated(
        "The chat_id property is deprecated and will be removed in a future version."
    )
    def chat_id(self) -> str | None:
        """
        Get the chat identifier (deprecated).

        Returns:
            str | None: The chat identifier.
        """
        return self._chat_id

    @chat_id.setter
    @deprecated(
        "The chat_id setter is deprecated and will be removed in a future version."
    )
    def chat_id(self, value: str | None) -> None:
        """
        Set the chat identifier (deprecated).

        Args:
            value (str | None): The chat identifier.
        """
        self._chat_id = value

    @property
    @deprecated(
        "The metadata_filter property is deprecated and will be removed in a future version."
    )
    def metadata_filter(self) -> dict | None:
        """
        Get the metadata filter (deprecated).

        Returns:
            dict | None: The metadata filter.
        """
        return self._metadata_filter

    @metadata_filter.setter
    @deprecated(
        "The metadata_filter setter is deprecated and will be removed in a future version."
    )
    def metadata_filter(self, value: dict | None) -> None:
        """
        Set the metadata filter (deprecated).

        Args:
            value (dict | None): The metadata filter.
        """
        self._metadata_filter = value

    def search_content_chunks(
        self,
        search_string: str,
        search_type: ContentSearchType,
        limit: int,
        search_language: str = DEFAULT_SEARCH_LANGUAGE,
        chat_id: str = "",
        reranker_config: ContentRerankerConfig | None = None,
        scope_ids: list[str] | None = None,
        chat_only: bool | None = None,
        metadata_filter: dict | None = None,
        content_ids: list[str] | None = None,
        score_threshold: float | None = None,
    ) -> list[ContentChunk]:
        """
        Performs a synchronous search for content chunks in the knowledge base.

        Args:
            search_string (str): The search string.
            search_type (ContentSearchType): The type of search to perform.
            limit (int): The maximum number of results to return.
            search_language (str, optional): The language for the full-text search. Defaults to "english".
            chat_id (str, optional): The chat ID for context. Defaults to empty string.
            reranker_config (ContentRerankerConfig | None, optional): The reranker configuration. Defaults to None.
            scope_ids (list[str] | None, optional): The scope IDs to filter by. Defaults to None.
            chat_only (bool | None, optional): Whether to search only in the current chat. Defaults to None.
            metadata_filter (dict | None, optional): UniqueQL metadata filter. If unspecified/None, it tries to use the metadata filter from the event. Defaults to None.
            content_ids (list[str] | None, optional): The content IDs to search within. Defaults to None.
            score_threshold (float | None, optional): Sets the minimum similarity score for search results to be considered. Defaults to 0.

        Returns:
            list[ContentChunk]: The search results.

        Raises:
            Exception: If there's an error during the search operation.
        """

        if metadata_filter is None:
            metadata_filter = self._metadata_filter

        chat_id = chat_id or self._chat_id  # type: ignore

        if chat_only and not chat_id:
            raise ValueError("Please provide chat_id when limiting with chat_only")

        try:
            searches = search_content_chunks(
                user_id=self._user_id,
                company_id=self._company_id,
                chat_id=chat_id,
                search_string=search_string,
                search_type=search_type,
                limit=limit,
                search_language=search_language,
                reranker_config=reranker_config,
                scope_ids=scope_ids,
                chat_only=chat_only,
                metadata_filter=metadata_filter,
                content_ids=content_ids,
                score_threshold=score_threshold,
            )
            return searches
        except Exception as e:
            logger.error(f"Error while searching content chunks: {e}")
            raise e

    @deprecated("Use search_chunks_async instead")
    async def search_content_chunks_async(
        self,
        search_string: str,
        search_type: ContentSearchType,
        limit: int,
        search_language: str = DEFAULT_SEARCH_LANGUAGE,
        chat_id: str = "",
        reranker_config: ContentRerankerConfig | None = None,
        scope_ids: list[str] | None = None,
        chat_only: bool | None = None,
        metadata_filter: dict | None = None,
        content_ids: list[str] | None = None,
        score_threshold: float | None = None,
    ):
        """
        Performs an asynchronous search for content chunks in the knowledge base.

        Args:
            search_string (str): The search string.
            search_type (ContentSearchType): The type of search to perform.
            limit (int): The maximum number of results to return.
            search_language (str, optional): The language for the full-text search. Defaults to "english".
            chat_id (str, optional): The chat ID for context. Defaults to empty string.
            reranker_config (ContentRerankerConfig | None, optional): The reranker configuration. Defaults to None.
            scope_ids (list[str] | None, optional): The scope IDs to filter by. Defaults to None.
            chat_only (bool | None, optional): Whether to search only in the current chat. Defaults to None.
            metadata_filter (dict | None, optional): UniqueQL metadata filter. If unspecified/None, it tries to use the metadata filter from the event. Defaults to None.
            content_ids (list[str] | None, optional): The content IDs to search within. Defaults to None.
            score_threshold (float | None, optional): Sets the minimum similarity score for search results to be considered. Defaults to 0.

        Returns:
            list[ContentChunk]: The search results.

        Raises:
            Exception: If there's an error during the search operation.
        """
        if metadata_filter is None:
            metadata_filter = self._metadata_filter

        chat_id = chat_id or self._chat_id  # type: ignore

        if chat_only and not chat_id:
            raise ValueError("Please provide chat_id when limiting with chat_only.")

        try:
            searches = await search_content_chunks_async(
                user_id=self._user_id,
                company_id=self._company_id,
                chat_id=chat_id,
                search_string=search_string,
                search_type=search_type,
                limit=limit,
                search_language=search_language,
                reranker_config=reranker_config,
                scope_ids=scope_ids,
                chat_only=chat_only,
                metadata_filter=metadata_filter,
                content_ids=content_ids,
                score_threshold=score_threshold,
            )
            return searches
        except Exception as e:
            logger.error(f"Error while searching content chunks: {e}")
            raise e

    def search_contents(
        self,
        where: dict,
        chat_id: str = "",
    ) -> list[Content]:
        """
        Performs a search in the knowledge base by filter (not a similarity search).
        Unlike search_content_chunks, this loads the complete content of the matching files from the knowledge base.

        Args:
            where (dict): The search criteria.

        Returns:
            list[Content]: The search results.
        """
        chat_id = chat_id or self._chat_id  # type: ignore

        return search_contents(
            user_id=self._user_id,
            company_id=self._company_id,
            chat_id=chat_id,
            where=where,
        )

    async def search_contents_async(
        self,
        where: dict,
        chat_id: str = "",
    ) -> list[Content]:
        """
        Performs an asynchronous search for content files in the knowledge base by filter.

        Args:
            where (dict): The search criteria.

        Returns:
            list[Content]: The search results.
        """
        chat_id = chat_id or self._chat_id  # type: ignore

        return await search_contents_async(
            user_id=self._user_id,
            company_id=self._company_id,
            chat_id=chat_id,
            where=where,
        )

    def search_content_on_chat(self, chat_id: str) -> list[Content]:
        where = {"ownerId": {"equals": chat_id}}

        return self.search_contents(where, chat_id=chat_id)

    def upload_content_from_bytes(
        self,
        content: bytes,
        content_name: str,
        mime_type: str,
        scope_id: str | None = None,
        chat_id: str | None = None,
        skip_ingestion: bool = False,
        skip_excel_ingestion: bool = False,
        ingestion_config: unique_sdk.Content.IngestionConfig | None = None,
        metadata: dict | None = None,
    ) -> Content:
        """
        Uploads content to the knowledge base.

        Args:
            content (bytes): The content to upload.
            content_name (str): The name of the content.
            mime_type (str): The MIME type of the content.
            scope_id (str | None): The scope ID. Defaults to None.
            chat_id (str | None): The chat ID. Defaults to None.
            skip_ingestion (bool): Whether to skip ingestion. Defaults to False.
            skip_excel_ingestion (bool): Whether to skip excel ingestion. Defaults to False.
            ingestion_config (unique_sdk.Content.IngestionConfig | None): The ingestion configuration. Defaults to None.
            metadata (dict | None): The metadata to associate with the content. Defaults to None.

        Returns:
            Content: The uploaded content.
        """

        return upload_content_from_bytes(
            user_id=self._user_id,
            company_id=self._company_id,
            content=content,
            content_name=content_name,
            mime_type=mime_type,
            scope_id=scope_id,
            chat_id=chat_id,
            skip_ingestion=skip_ingestion,
            ingestion_config=ingestion_config,
            metadata=metadata,
        )

    async def upload_content_from_bytes_async(
        self,
        content: bytes,
        content_name: str,
        mime_type: str,
        scope_id: str | None = None,
        chat_id: str | None = None,
        skip_ingestion: bool = False,
        ingestion_config: unique_sdk.Content.IngestionConfig | None = None,
        metadata: dict | None = None,
    ) -> Content:
        """
        Uploads content to the knowledge base.

        Args:
            content (bytes): The content to upload.
            content_name (str): The name of the content.
            mime_type (str): The MIME type of the content.
            scope_id (str | None): The scope ID. Defaults to None.
            chat_id (str | None): The chat ID. Defaults to None.
            skip_ingestion (bool): Whether to skip ingestion. Defaults to False.
            ingestion_config (unique_sdk.Content.IngestionConfig | None): The ingestion configuration. Defaults to None.
            metadata (dict | None): The metadata to associate with the content. Defaults to None.

        Returns:
            Content: The uploaded content.
        """

        return await upload_content_from_bytes_async(
            user_id=self._user_id,
            company_id=self._company_id,
            content=content,
            content_name=content_name,
            mime_type=mime_type,
            scope_id=scope_id,
            chat_id=chat_id,
            skip_ingestion=skip_ingestion,
            ingestion_config=ingestion_config,
            metadata=metadata,
        )

    def upload_content(
        self,
        path_to_content: str,
        content_name: str,
        mime_type: str,
        scope_id: str | None = None,
        chat_id: str | None = None,
        skip_ingestion: bool = False,
        skip_excel_ingestion: bool = False,
        ingestion_config: unique_sdk.Content.IngestionConfig | None = None,
        metadata: dict[str, Any] | None = None,
    ) -> Content:
        """
        Uploads content to the knowledge base.

        Args:
            path_to_content (str): The path to the content to upload.
            content_name (str): The name of the content.
            mime_type (str): The MIME type of the content.
            scope_id (str | None): The scope ID. Defaults to None.
            chat_id (str | None): The chat ID. Defaults to None.
            skip_ingestion (bool): Whether to skip ingestion. Defaults to False.
            skip_excel_ingestion (bool): Whether to skip excel ingestion. Defaults to False.
            ingestion_config (unique_sdk.Content.IngestionConfig | None): The ingestion configuration. Defaults to None.
            metadata (dict[str, Any] | None): The metadata to associate with the content. Defaults to None.

        Returns:
            Content: The uploaded content.
        """

        return upload_content(
            user_id=self._user_id,
            company_id=self._company_id,
            path_to_content=path_to_content,
            content_name=content_name,
            mime_type=mime_type,
            scope_id=scope_id,
            chat_id=chat_id,
            skip_ingestion=skip_ingestion,
            skip_excel_ingestion=skip_excel_ingestion,
            ingestion_config=ingestion_config,
            metadata=metadata,
        )

    def request_content_by_id(
        self,
        content_id: str,
        chat_id: str | None = None,
    ) -> Response:
        """
        Sends a request to download content from a chat.

        Args:
            content_id (str): The ID of the content to download.
            chat_id (str): The ID of the chat from which to download the content. Defaults to None to download from knowledge base.

        Returns:
            requests.Response: The response object containing the downloaded content.

        """
        chat_id = chat_id or self._chat_id  # type: ignore

        return request_content_by_id(
            user_id=self._user_id,
            company_id=self._company_id,
            content_id=content_id,
            chat_id=chat_id,
        )

    def download_content_to_file_by_id(
        self,
        content_id: str,
        chat_id: str | None = None,
        filename: str | None = None,
        tmp_dir_path: str | Path | None = "/tmp",
    ):
        """
        Downloads content from a chat and saves it to a file.

        Args:
            content_id (str): The ID of the content to download.
            chat_id (str | None): The ID of the chat to download from. Defaults to None and the file is downloaded from the knowledge base.
            filename (str | None): The name of the file to save the content as. If not provided, the original filename will be used. Defaults to None.
            tmp_dir_path (str | Path | None): The path to the temporary directory where the content will be saved. Defaults to "/tmp".

        Returns:
            Path: The path to the downloaded file.

        Raises:
            Exception: If the download fails or the filename cannot be determined.
        """

        chat_id = chat_id or self._chat_id  # type: ignore

        return download_content_to_file_by_id(
            user_id=self._user_id,
            company_id=self._company_id,
            content_id=content_id,
            chat_id=chat_id,
            filename=filename,
            tmp_dir_path=tmp_dir_path,
        )

    # TODO: Discuss if we should deprecate this method due to unclear use by content_name
    def download_content(
        self,
        content_id: str,
        content_name: str,
        chat_id: str | None = None,
        dir_path: str | Path | None = "/tmp",
    ) -> Path:
        """
        Downloads content to temporary directory

        Args:
            content_id (str): The id of the uploaded content.
            content_name (str): The name of the uploaded content.
            chat_id (Optional[str]): The chat_id, defaults to None.
            dir_path (Optional[Union[str, Path]]): The directory path to download the content to, defaults to "/tmp". If not provided, the content will be downloaded to a random directory inside /tmp. Be aware that this directory won't be cleaned up automatically.

        Returns:
            content_path: The path to the downloaded content in the temporary directory.

        Raises:
            Exception: If the download fails.
        """

        chat_id = chat_id or self._chat_id  # type: ignore

        return download_content(
            user_id=self._user_id,
            company_id=self._company_id,
            content_id=content_id,
            content_name=content_name,
            chat_id=chat_id,
            dir_path=dir_path,
        )

    def download_content_to_bytes(
        self,
        content_id: str,
        chat_id: str | None = None,
    ) -> bytes:
        """
        Downloads content to memory

        Args:
            content_id (str): The id of the uploaded content.
            chat_id (Optional[str]): The chat_id, defaults to None.

        Returns:
            bytes: The downloaded content.

        Raises:
            Exception: If the download fails.
        """
        chat_id = chat_id or self._chat_id  # type: ignore
        return download_content_to_bytes(
            user_id=self._user_id,
            company_id=self._company_id,
            content_id=content_id,
            chat_id=chat_id,
        )

    async def download_content_to_bytes_async(
        self,
        content_id: str,
        chat_id: str | None = None,
    ) -> bytes:
        """
        Asynchronously downloads content to memory.

        Args:
            content_id (str): The id of the uploaded content.
            chat_id (Optional[str]): The chat_id, defaults to None.

        Returns:
            bytes: The downloaded content.

        Raises:
            Exception: If the download fails.
        """
        chat_id = chat_id or self._chat_id  # type: ignore
        return await download_content_to_bytes_async(
            user_id=self._user_id,
            company_id=self._company_id,
            content_id=content_id,
            chat_id=chat_id,
        )

    def get_documents_uploaded_to_chat(self) -> list[Content]:
        chat_contents = self.search_contents(
            where={
                "ownerId": {
                    "equals": self._chat_id,
                },
            },
        )

        content: list[Content] = []
        for c in chat_contents:
            if self.is_file_content(c.key):
                content.append(c)

        return content

    def get_images_uploaded_to_chat(self) -> list[Content]:
        chat_contents = self.search_contents(
            where={
                "ownerId": {
                    "equals": self._chat_id,
                },
            },
        )

        content: list[Content] = []
        for c in chat_contents:
            if self.is_image_content(c.key):
                content.append(c)

        return content

    def is_file_content(self, filename: str) -> bool:
        return is_file_content(filename=filename)

    def is_image_content(self, filename: str) -> bool:
        return is_image_content(filename=filename)

chat_id property writable

Get the chat identifier (deprecated).

Returns:
    str | None: The chat identifier.

company_id property writable

Get the company identifier (deprecated).

Returns:
    str | None: The company identifier.

event property

Get the event object (deprecated).

Returns:
    Event | BaseEvent | None: The event object.

metadata_filter property writable

Get the metadata filter (deprecated).

Returns:
    dict | None: The metadata filter.

user_id property writable

Get the user identifier (deprecated).

Returns:
    str | None: The user identifier.

__init__(event=None, company_id=None, user_id=None, chat_id=None, metadata_filter=None)

__init__(event: Event | ChatEvent | BaseEvent)
__init__(
    *,
    company_id: str,
    user_id: str,
    chat_id: str | None = None,
    metadata_filter: dict | None = None,
)

Initialize the ContentService with a company_id, user_id and chat_id.
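The two overloads describe the deprecated event-based form and the keyword-only form; both funnel into a single implementation. A stripped-down sketch of this dispatch, using a hypothetical `Service` class with a plain dict standing in for the event object:

```python
from typing import overload


class Service:
    # The overloads only declare the two accepted call shapes for type
    # checkers; the runtime logic lives in the single implementation below.
    @overload
    def __init__(self, event: dict): ...  # deprecated event-based form

    @overload
    def __init__(self, *, company_id: str, user_id: str): ...

    def __init__(self, event=None, company_id=None, user_id=None):
        if event is not None:
            # Event form: identifiers come from the event.
            self.company_id = event["company_id"]
            self.user_id = event["user_id"]
        else:
            # Keyword form: both identifiers must be supplied explicitly.
            if company_id is None or user_id is None:
                raise ValueError("company_id and user_id are required")
            self.company_id = company_id
            self.user_id = user_id
```

The real ContentService follows the same shape, with validate_required_values performing the required-argument check.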

Source code in unique_toolkit/unique_toolkit/content/service.py
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
def __init__(
    self,
    event: Event | BaseEvent | None = None,
    company_id: str | None = None,
    user_id: str | None = None,
    chat_id: str | None = None,
    metadata_filter: dict | None = None,
):
    """
    Initialize the ContentService with a company_id, user_id and chat_id.
    """

    self._event = event  # Changed to protected attribute
    self._metadata_filter = None
    if event:
        self._company_id: str = event.company_id
        self._user_id: str = event.user_id
        if isinstance(event, (ChatEvent, Event)):
            self._metadata_filter = event.payload.metadata_filter
            self._chat_id: str | None = event.payload.chat_id
    else:
        [company_id, user_id] = validate_required_values([company_id, user_id])
        self._company_id: str = company_id
        self._user_id: str = user_id
        self._chat_id: str | None = chat_id
        self._metadata_filter = metadata_filter

download_content(content_id, content_name, chat_id=None, dir_path='/tmp')

Downloads content to a temporary directory.

Parameters:

Name Type Description Default
content_id str

The id of the uploaded content.

required
content_name str

The name of the uploaded content.

required
chat_id Optional[str]

The chat_id, defaults to None.

None
dir_path Optional[Union[str, Path]]

The directory path to download the content to, defaults to "/tmp". If not provided, the content will be downloaded to a random directory inside /tmp. Be aware that this directory won't be cleaned up automatically.

'/tmp'

Returns:

Name Type Description
content_path Path

The path to the downloaded content in the temporary directory.

Raises:

Type Description
Exception

If the download fails.

Source code in unique_toolkit/unique_toolkit/content/service.py
def download_content(
    self,
    content_id: str,
    content_name: str,
    chat_id: str | None = None,
    dir_path: str | Path | None = "/tmp",
) -> Path:
    """
    Downloads content to a temporary directory.

    Args:
        content_id (str): The id of the uploaded content.
        content_name (str): The name of the uploaded content.
        chat_id (Optional[str]): The chat_id, defaults to None.
        dir_path (Optional[Union[str, Path]]): The directory path to download the content to, defaults to "/tmp". If not provided, the content will be downloaded to a random directory inside /tmp. Be aware that this directory won't be cleaned up automatically.

    Returns:
        content_path: The path to the downloaded content in the temporary directory.

    Raises:
        Exception: If the download fails.
    """

    chat_id = chat_id or self._chat_id  # type: ignore

    return download_content(
        user_id=self._user_id,
        company_id=self._company_id,
        content_id=content_id,
        content_name=content_name,
        chat_id=chat_id,
        dir_path=dir_path,
    )

download_content_to_bytes(content_id, chat_id=None)

Downloads content to memory

Parameters:

Name Type Description Default
content_id str

The id of the uploaded content.

required
chat_id Optional[str]

The chat_id, defaults to None.

None

Returns:

Name Type Description
bytes bytes

The downloaded content.

Raises:

Type Description
Exception

If the download fails.

Source code in unique_toolkit/unique_toolkit/content/service.py
def download_content_to_bytes(
    self,
    content_id: str,
    chat_id: str | None = None,
) -> bytes:
    """
    Downloads content to memory

    Args:
        content_id (str): The id of the uploaded content.
        chat_id (Optional[str]): The chat_id, defaults to None.

    Returns:
        bytes: The downloaded content.

    Raises:
        Exception: If the download fails.
    """
    chat_id = chat_id or self._chat_id  # type: ignore
    return download_content_to_bytes(
        user_id=self._user_id,
        company_id=self._company_id,
        content_id=content_id,
        chat_id=chat_id,
    )

download_content_to_bytes_async(content_id, chat_id=None) async

Asynchronously downloads content to memory.

Parameters:

Name Type Description Default
content_id str

The id of the uploaded content.

required
chat_id Optional[str]

The chat_id, defaults to None.

None

Returns:

Name Type Description
bytes bytes

The downloaded content.

Raises:

Type Description
Exception

If the download fails.

Source code in unique_toolkit/unique_toolkit/content/service.py
async def download_content_to_bytes_async(
    self,
    content_id: str,
    chat_id: str | None = None,
) -> bytes:
    """
    Asynchronously downloads content to memory.

    Args:
        content_id (str): The id of the uploaded content.
        chat_id (Optional[str]): The chat_id, defaults to None.

    Returns:
        bytes: The downloaded content.

    Raises:
        Exception: If the download fails.
    """
    chat_id = chat_id or self._chat_id  # type: ignore
    return await download_content_to_bytes_async(
        user_id=self._user_id,
        company_id=self._company_id,
        content_id=content_id,
        chat_id=chat_id,
    )

download_content_to_file_by_id(content_id, chat_id=None, filename=None, tmp_dir_path='/tmp')

Downloads content from a chat and saves it to a file.

Parameters:

Name Type Description Default
content_id str

The ID of the content to download.

required
chat_id str | None

The ID of the chat to download from. Defaults to None and the file is downloaded from the knowledge base.

None
filename str | None

The name of the file to save the content as. If not provided, the original filename will be used. Defaults to None.

None
tmp_dir_path str | Path | None

The path to the temporary directory where the content will be saved. Defaults to "/tmp".

'/tmp'

Returns:

Name Type Description
Path

The path to the downloaded file.

Raises:

Type Description
Exception

If the download fails or the filename cannot be determined.

Source code in unique_toolkit/unique_toolkit/content/service.py
def download_content_to_file_by_id(
    self,
    content_id: str,
    chat_id: str | None = None,
    filename: str | None = None,
    tmp_dir_path: str | Path | None = "/tmp",
):
    """
    Downloads content from a chat and saves it to a file.

    Args:
        content_id (str): The ID of the content to download.
        chat_id (str | None): The ID of the chat to download from. Defaults to None and the file is downloaded from the knowledge base.
        filename (str | None): The name of the file to save the content as. If not provided, the original filename will be used. Defaults to None.
        tmp_dir_path (str | Path | None): The path to the temporary directory where the content will be saved. Defaults to "/tmp".

    Returns:
        Path: The path to the downloaded file.

    Raises:
        Exception: If the download fails or the filename cannot be determined.
    """

    chat_id = chat_id or self._chat_id  # type: ignore

    return download_content_to_file_by_id(
        user_id=self._user_id,
        company_id=self._company_id,
        content_id=content_id,
        chat_id=chat_id,
        filename=filename,
        tmp_dir_path=tmp_dir_path,
    )

from_correlation(company_id, user_id, correlation, metadata_filter=None) classmethod

Initialize the ContentService from a correlation (e.g. when running as a subagent).

Content operations (search, download, upload context) are scoped to the parent chat so files uploaded in the primary session are accessible.

Parameters:

Name Type Description Default
company_id str

Company id (from event).

required
user_id str

User id (from event).

required
correlation Correlation

Parent chat/message/assistant ids.

required
metadata_filter dict | None

Optional metadata filter (e.g. from event.payload).

None

Returns:

Name Type Description
ContentService

Instance with chat_id set to correlation.parent_chat_id.

Source code in unique_toolkit/unique_toolkit/content/service.py
@classmethod
def from_correlation(
    cls,
    company_id: str,
    user_id: str,
    correlation: Correlation,
    metadata_filter: dict | None = None,
):
    """Initialize the ContentService from a correlation (e.g. when running as a subagent).

    Content operations (search, download, upload context) are scoped to
    the parent chat so files uploaded in the primary session are
    accessible.

    Args:
        company_id: Company id (from event).
        user_id: User id (from event).
        correlation: Parent chat/message/assistant ids.
        metadata_filter: Optional metadata filter (e.g. from event.payload).

    Returns:
        ContentService: Instance with chat_id set to correlation.parent_chat_id.
    """
    return cls(
        company_id=company_id,
        user_id=user_id,
        chat_id=correlation.parent_chat_id,
        metadata_filter=metadata_filter,
    )

from_event(event) classmethod

Initialize the ContentService with an event.

When the event has a correlation (e.g. subagent run), delegates to from_correlation so content operations are scoped to the parent chat and files uploaded in the primary session are accessible. Otherwise uses the event's chat_id.

Parameters:

Name Type Description Default
event Event | ChatEvent | BaseEvent

The event (e.g. from the webhook payload).

required

Returns:

Name Type Description
ContentService

Instance scoped to the event's chat or, when correlation is present, the parent chat.

Source code in unique_toolkit/unique_toolkit/content/service.py
@classmethod
def from_event(cls, event: Event | ChatEvent | BaseEvent):
    """Initialize the ContentService with an event.

    When the event has a correlation (e.g. subagent run), delegates to
    from_correlation so content operations are scoped to the parent chat
    and files uploaded in the primary session are accessible. Otherwise
    uses the event's chat_id.

    Args:
        event: The event (e.g. from the webhook payload).

    Returns:
        ContentService: Instance scoped to the event's chat or, when
            correlation is present, the parent chat.
    """
    if (
        isinstance(event, (ChatEvent, Event))
        and getattr(event.payload, "correlation", None) is not None
    ):
        if event.payload.correlation is None:
            raise ValueError(
                "correlation attribute is not defined in the event payload"
            )
        return cls.from_correlation(
            event.company_id,
            event.user_id,
            event.payload.correlation,
            metadata_filter=getattr(event.payload, "metadata_filter", None),
        )
    chat_id = None
    metadata_filter = None

    if isinstance(event, (ChatEvent | Event)):
        chat_id = event.payload.chat_id
        metadata_filter = event.payload.metadata_filter

    return cls(
        company_id=event.company_id,
        user_id=event.user_id,
        chat_id=chat_id,
        metadata_filter=metadata_filter,
    )

from_settings(settings=None, metadata_filter=None) classmethod

Initialize the ContentService with a settings object and metadata filter.

Source code in unique_toolkit/unique_toolkit/content/service.py
@classmethod
def from_settings(
    cls,
    settings: UniqueSettings | str | None = None,
    metadata_filter: dict | None = None,
):
    """
    Initialize the ContentService with a settings object and metadata filter.
    """

    if settings is None:
        settings = UniqueSettings.from_env_auto_with_sdk_init()
    elif isinstance(settings, str):
        settings = UniqueSettings.from_env_auto_with_sdk_init(filename=settings)

    return cls(
        company_id=settings.auth.company_id.get_secret_value(),
        user_id=settings.auth.user_id.get_secret_value(),
        metadata_filter=metadata_filter,
    )

request_content_by_id(content_id, chat_id=None)

Sends a request to download content from a chat.

Parameters:

Name Type Description Default
content_id str

The ID of the content to download.

required
chat_id str

The ID of the chat from which to download the content. Defaults to None to download from knowledge base.

None

Returns:

Type Description
Response

requests.Response: The response object containing the downloaded content.

Source code in unique_toolkit/unique_toolkit/content/service.py
def request_content_by_id(
    self,
    content_id: str,
    chat_id: str | None = None,
) -> Response:
    """
    Sends a request to download content from a chat.

    Args:
        content_id (str): The ID of the content to download.
        chat_id (str): The ID of the chat from which to download the content. Defaults to None to download from knowledge base.

    Returns:
        requests.Response: The response object containing the downloaded content.

    """
    chat_id = chat_id or self._chat_id  # type: ignore

    return request_content_by_id(
        user_id=self._user_id,
        company_id=self._company_id,
        content_id=content_id,
        chat_id=chat_id,
    )

search_content_chunks(search_string, search_type, limit, search_language=DEFAULT_SEARCH_LANGUAGE, chat_id='', reranker_config=None, scope_ids=None, chat_only=None, metadata_filter=None, content_ids=None, score_threshold=None)

Performs a synchronous search for content chunks in the knowledge base.

Parameters:

Name Type Description Default
search_string str

The search string.

required
search_type ContentSearchType

The type of search to perform.

required
limit int

The maximum number of results to return.

required
search_language str

The language for the full-text search. Defaults to "english".

DEFAULT_SEARCH_LANGUAGE
chat_id str

The chat ID for context. Defaults to empty string.

''
reranker_config ContentRerankerConfig | None

The reranker configuration. Defaults to None.

None
scope_ids list[str] | None

The scope IDs to filter by. Defaults to None.

None
chat_only bool | None

Whether to search only in the current chat. Defaults to None.

None
metadata_filter dict | None

UniqueQL metadata filter. If unspecified/None, it tries to use the metadata filter from the event. Defaults to None.

None
content_ids list[str] | None

The content IDs to search within. Defaults to None.

None
score_threshold float | None

Sets the minimum similarity score for search results to be considered. Defaults to None.

None

Returns:

Type Description
list[ContentChunk]

list[ContentChunk]: The search results.

Raises:

Type Description
Exception

If there's an error during the search operation.

Source code in unique_toolkit/unique_toolkit/content/service.py
def search_content_chunks(
    self,
    search_string: str,
    search_type: ContentSearchType,
    limit: int,
    search_language: str = DEFAULT_SEARCH_LANGUAGE,
    chat_id: str = "",
    reranker_config: ContentRerankerConfig | None = None,
    scope_ids: list[str] | None = None,
    chat_only: bool | None = None,
    metadata_filter: dict | None = None,
    content_ids: list[str] | None = None,
    score_threshold: float | None = None,
) -> list[ContentChunk]:
    """
    Performs a synchronous search for content chunks in the knowledge base.

    Args:
        search_string (str): The search string.
        search_type (ContentSearchType): The type of search to perform.
        limit (int): The maximum number of results to return.
        search_language (str, optional): The language for the full-text search. Defaults to "english".
        chat_id (str, optional): The chat ID for context. Defaults to empty string.
        reranker_config (ContentRerankerConfig | None, optional): The reranker configuration. Defaults to None.
        scope_ids (list[str] | None, optional): The scope IDs to filter by. Defaults to None.
        chat_only (bool | None, optional): Whether to search only in the current chat. Defaults to None.
        metadata_filter (dict | None, optional): UniqueQL metadata filter. If unspecified/None, it tries to use the metadata filter from the event. Defaults to None.
        content_ids (list[str] | None, optional): The content IDs to search within. Defaults to None.
        score_threshold (float | None, optional): Sets the minimum similarity score for search results to be considered. Defaults to None.

    Returns:
        list[ContentChunk]: The search results.

    Raises:
        Exception: If there's an error during the search operation.
    """

    if metadata_filter is None:
        metadata_filter = self._metadata_filter

    chat_id = chat_id or self._chat_id  # type: ignore

    if chat_only and not chat_id:
        raise ValueError("Please provide chat_id when limiting with chat_only")

    try:
        searches = search_content_chunks(
            user_id=self._user_id,
            company_id=self._company_id,
            chat_id=chat_id,
            search_string=search_string,
            search_type=search_type,
            limit=limit,
            search_language=search_language,
            reranker_config=reranker_config,
            scope_ids=scope_ids,
            chat_only=chat_only,
            metadata_filter=metadata_filter,
            content_ids=content_ids,
            score_threshold=score_threshold,
        )
        return searches
    except Exception as e:
        logger.error(f"Error while searching content chunks: {e}")
        raise e

search_content_chunks_async(search_string, search_type, limit, search_language=DEFAULT_SEARCH_LANGUAGE, chat_id='', reranker_config=None, scope_ids=None, chat_only=None, metadata_filter=None, content_ids=None, score_threshold=None) async

Performs an asynchronous search for content chunks in the knowledge base.

Parameters:

Name Type Description Default
search_string str

The search string.

required
search_type ContentSearchType

The type of search to perform.

required
limit int

The maximum number of results to return.

required
search_language str

The language for the full-text search. Defaults to "english".

DEFAULT_SEARCH_LANGUAGE
chat_id str

The chat ID for context. Defaults to empty string.

''
reranker_config ContentRerankerConfig | None

The reranker configuration. Defaults to None.

None
scope_ids list[str] | None

The scope IDs to filter by. Defaults to None.

None
chat_only bool | None

Whether to search only in the current chat. Defaults to None.

None
metadata_filter dict | None

UniqueQL metadata filter. If unspecified/None, it tries to use the metadata filter from the event. Defaults to None.

None
content_ids list[str] | None

The content IDs to search within. Defaults to None.

None
score_threshold float | None

Sets the minimum similarity score for search results to be considered. Defaults to None.

None

Returns:

Type Description

list[ContentChunk]: The search results.

Raises:

Type Description
Exception

If there's an error during the search operation.

Source code in unique_toolkit/unique_toolkit/content/service.py
@deprecated("Use search_chunks_async instead")
async def search_content_chunks_async(
    self,
    search_string: str,
    search_type: ContentSearchType,
    limit: int,
    search_language: str = DEFAULT_SEARCH_LANGUAGE,
    chat_id: str = "",
    reranker_config: ContentRerankerConfig | None = None,
    scope_ids: list[str] | None = None,
    chat_only: bool | None = None,
    metadata_filter: dict | None = None,
    content_ids: list[str] | None = None,
    score_threshold: float | None = None,
):
    """
    Performs an asynchronous search for content chunks in the knowledge base.

    Args:
        search_string (str): The search string.
        search_type (ContentSearchType): The type of search to perform.
        limit (int): The maximum number of results to return.
        search_language (str, optional): The language for the full-text search. Defaults to "english".
        chat_id (str, optional): The chat ID for context. Defaults to empty string.
        reranker_config (ContentRerankerConfig | None, optional): The reranker configuration. Defaults to None.
        scope_ids (list[str] | None, optional): The scope IDs to filter by. Defaults to None.
        chat_only (bool | None, optional): Whether to search only in the current chat. Defaults to None.
        metadata_filter (dict | None, optional): UniqueQL metadata filter. If unspecified/None, it tries to use the metadata filter from the event. Defaults to None.
        content_ids (list[str] | None, optional): The content IDs to search within. Defaults to None.
        score_threshold (float | None, optional): Sets the minimum similarity score for search results to be considered. Defaults to None.

    Returns:
        list[ContentChunk]: The search results.

    Raises:
        Exception: If there's an error during the search operation.
    """
    if metadata_filter is None:
        metadata_filter = self._metadata_filter

    chat_id = chat_id or self._chat_id  # type: ignore

    if chat_only and not chat_id:
        raise ValueError("Please provide chat_id when limiting with chat_only.")

    try:
        searches = await search_content_chunks_async(
            user_id=self._user_id,
            company_id=self._company_id,
            chat_id=chat_id,
            search_string=search_string,
            search_type=search_type,
            limit=limit,
            search_language=search_language,
            reranker_config=reranker_config,
            scope_ids=scope_ids,
            chat_only=chat_only,
            metadata_filter=metadata_filter,
            content_ids=content_ids,
            score_threshold=score_threshold,
        )
        return searches
    except Exception as e:
        logger.error(f"Error while searching content chunks: {e}")
        raise e

search_contents(where, chat_id='')

Performs a search in the knowledge base by filter (not a similarity search). In contrast to search_content_chunks, this function loads the complete content of the matching files from the knowledge base.

Parameters:

Name Type Description Default
where dict

The search criteria.

required

Returns:

Type Description
list[Content]

list[Content]: The search results.

Source code in unique_toolkit/unique_toolkit/content/service.py
def search_contents(
    self,
    where: dict,
    chat_id: str = "",
) -> list[Content]:
    """
    Performs a search in the knowledge base by filter (not a similarity search).
    In contrast to search_content_chunks, this loads the complete content of the matching files.

    Args:
        where (dict): The search criteria.

    Returns:
        list[Content]: The search results.
    """
    chat_id = chat_id or self._chat_id  # type: ignore

    return search_contents(
        user_id=self._user_id,
        company_id=self._company_id,
        chat_id=chat_id,
        where=where,
    )
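The `where` argument is a nested filter dict. The snippet at the top of this page uses an `ownerId` equality filter to list chat-scoped content; a minimal example of the same shape (the chat id value is a placeholder):

```python
# Example `where` filter matching content owned by one chat
# (the id value is a placeholder).
where = {
    "ownerId": {
        "equals": "chat_abc123",
    },
}
# service.search_contents(where=where) would return the full Content
# objects (not chunks) matching this filter.
```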

search_contents_async(where, chat_id='') async

Performs an asynchronous search for content files in the knowledge base by filter.

Parameters:

Name Type Description Default
where dict

The search criteria.

required

Returns:

Type Description
list[Content]

list[Content]: The search results.

Source code in unique_toolkit/unique_toolkit/content/service.py
async def search_contents_async(
    self,
    where: dict,
    chat_id: str = "",
) -> list[Content]:
    """
    Performs an asynchronous search for content files in the knowledge base by filter.

    Args:
        where (dict): The search criteria.

    Returns:
        list[Content]: The search results.
    """
    chat_id = chat_id or self._chat_id  # type: ignore

    return await search_contents_async(
        user_id=self._user_id,
        company_id=self._company_id,
        chat_id=chat_id,
        where=where,
    )

upload_content(path_to_content, content_name, mime_type, scope_id=None, chat_id=None, skip_ingestion=False, skip_excel_ingestion=False, ingestion_config=None, metadata=None)

Uploads content to the knowledge base.

Parameters:

Name Type Description Default
path_to_content str

The path to the content to upload.

required
content_name str

The name of the content.

required
mime_type str

The MIME type of the content.

required
scope_id str | None

The scope ID. Defaults to None.

None
chat_id str | None

The chat ID. Defaults to None.

None
skip_ingestion bool

Whether to skip ingestion. Defaults to False.

False
skip_excel_ingestion bool

Whether to skip excel ingestion. Defaults to False.

False
ingestion_config IngestionConfig | None

The ingestion configuration. Defaults to None.

None
metadata dict[str, Any] | None

The metadata to associate with the content. Defaults to None.

None

Returns:

Name Type Description
Content Content

The uploaded content.

Source code in unique_toolkit/unique_toolkit/content/service.py
def upload_content(
    self,
    path_to_content: str,
    content_name: str,
    mime_type: str,
    scope_id: str | None = None,
    chat_id: str | None = None,
    skip_ingestion: bool = False,
    skip_excel_ingestion: bool = False,
    ingestion_config: unique_sdk.Content.IngestionConfig | None = None,
    metadata: dict[str, Any] | None = None,
) -> Content:
    """
    Uploads content to the knowledge base.

    Args:
        path_to_content (str): The path to the content to upload.
        content_name (str): The name of the content.
        mime_type (str): The MIME type of the content.
        scope_id (str | None): The scope ID. Defaults to None.
        chat_id (str | None): The chat ID. Defaults to None.
        skip_ingestion (bool): Whether to skip ingestion. Defaults to False.
        skip_excel_ingestion (bool): Whether to skip excel ingestion. Defaults to False.
        ingestion_config (unique_sdk.Content.IngestionConfig | None): The ingestion configuration. Defaults to None.
        metadata (dict[str, Any] | None): The metadata to associate with the content. Defaults to None.

    Returns:
        Content: The uploaded content.
    """

    return upload_content(
        user_id=self._user_id,
        company_id=self._company_id,
        path_to_content=path_to_content,
        content_name=content_name,
        mime_type=mime_type,
        scope_id=scope_id,
        chat_id=chat_id,
        skip_ingestion=skip_ingestion,
        skip_excel_ingestion=skip_excel_ingestion,
        ingestion_config=ingestion_config,
        metadata=metadata,
    )

upload_content_from_bytes(content, content_name, mime_type, scope_id=None, chat_id=None, skip_ingestion=False, skip_excel_ingestion=False, ingestion_config=None, metadata=None)

Uploads content to the knowledge base.

Parameters:

Name Type Description Default
content bytes

The content to upload.

required
content_name str

The name of the content.

required
mime_type str

The MIME type of the content.

required
scope_id str | None

The scope ID. Defaults to None.

None
chat_id str | None

The chat ID. Defaults to None.

None
skip_ingestion bool

Whether to skip ingestion. Defaults to False.

False
skip_excel_ingestion bool

Whether to skip excel ingestion. Defaults to False.

False
ingestion_config IngestionConfig | None

The ingestion configuration. Defaults to None.

None
metadata dict | None

The metadata to associate with the content. Defaults to None.

None

Returns:

Name Type Description
Content Content

The uploaded content.

Source code in unique_toolkit/unique_toolkit/content/service.py
def upload_content_from_bytes(
    self,
    content: bytes,
    content_name: str,
    mime_type: str,
    scope_id: str | None = None,
    chat_id: str | None = None,
    skip_ingestion: bool = False,
    skip_excel_ingestion: bool = False,
    ingestion_config: unique_sdk.Content.IngestionConfig | None = None,
    metadata: dict | None = None,
) -> Content:
    """
    Uploads content to the knowledge base.

    Args:
        content (bytes): The content to upload.
        content_name (str): The name of the content.
        mime_type (str): The MIME type of the content.
        scope_id (str | None): The scope ID. Defaults to None.
        chat_id (str | None): The chat ID. Defaults to None.
        skip_ingestion (bool): Whether to skip ingestion. Defaults to False.
        skip_excel_ingestion (bool): Whether to skip excel ingestion. Defaults to False.
        ingestion_config (unique_sdk.Content.IngestionConfig | None): The ingestion configuration. Defaults to None.
        metadata (dict | None): The metadata to associate with the content. Defaults to None.

    Returns:
        Content: The uploaded content.
    """

    return upload_content_from_bytes(
        user_id=self._user_id,
        company_id=self._company_id,
        content=content,
        content_name=content_name,
        mime_type=mime_type,
        scope_id=scope_id,
        chat_id=chat_id,
        skip_ingestion=skip_ingestion,
        skip_excel_ingestion=skip_excel_ingestion,
        ingestion_config=ingestion_config,
        metadata=metadata,
    )

upload_content_from_bytes_async(content, content_name, mime_type, scope_id=None, chat_id=None, skip_ingestion=False, ingestion_config=None, metadata=None) async

Uploads content to the knowledge base.

Parameters:

Name Type Description Default
content bytes

The content to upload.

required
content_name str

The name of the content.

required
mime_type str

The MIME type of the content.

required
scope_id str | None

The scope ID. Defaults to None.

None
chat_id str | None

The chat ID. Defaults to None.

None
skip_ingestion bool

Whether to skip ingestion. Defaults to False.

False
ingestion_config IngestionConfig | None

The ingestion configuration. Defaults to None.

None
metadata dict | None

The metadata to associate with the content. Defaults to None.

None

Returns:

Name Type Description
Content Content

The uploaded content.

Source code in unique_toolkit/unique_toolkit/content/service.py
async def upload_content_from_bytes_async(
    self,
    content: bytes,
    content_name: str,
    mime_type: str,
    scope_id: str | None = None,
    chat_id: str | None = None,
    skip_ingestion: bool = False,
    ingestion_config: unique_sdk.Content.IngestionConfig | None = None,
    metadata: dict | None = None,
) -> Content:
    """
    Uploads content to the knowledge base.

    Args:
        content (bytes): The content to upload.
        content_name (str): The name of the content.
        mime_type (str): The MIME type of the content.
        scope_id (str | None): The scope ID. Defaults to None.
        chat_id (str | None): The chat ID. Defaults to None.
        skip_ingestion (bool): Whether to skip ingestion. Defaults to False.
        ingestion_config (unique_sdk.Content.IngestionConfig | None): The ingestion configuration. Defaults to None.
        metadata (dict | None): The metadata to associate with the content. Defaults to None.

    Returns:
        Content: The uploaded content.
    """

    return await upload_content_from_bytes_async(
        user_id=self._user_id,
        company_id=self._company_id,
        content=content,
        content_name=content_name,
        mime_type=mime_type,
        scope_id=scope_id,
        chat_id=chat_id,
        skip_ingestion=skip_ingestion,
        ingestion_config=ingestion_config,
        metadata=metadata,
    )

Schemas

unique_toolkit.content.schemas.Content

Bases: BaseModel

Source code in unique_toolkit/unique_toolkit/content/schemas.py
class Content(BaseModel):
    model_config = model_config
    id: str = Field(
        default="",
        description="The id of the content. The id starts with 'cont_' followed by an alphanumeric string of length 24.",
        examples=["cont_abcdefgehijklmnopqrstuvwx"],
    )
    key: str = Field(
        default="",
        description="The key of the content. For documents, this is the filename.",
    )
    title: str | None = Field(
        default=None,
        description="The title of the content. For documents this is the title of the document.",
    )
    url: str | None = None
    chunks: list[ContentChunk] = []
    write_url: str | None = None
    read_url: str | None = None
    created_at: datetime | None = None
    updated_at: datetime | None = None
    expired_at: datetime | None = None
    metadata: dict[str, Any] | None = None
    ingestion_config: dict | None = None
    applied_ingestion_config: dict | None = None
    ingestion_state: str | None = None

__class_vars__ class-attribute

The names of the class variables defined on the model.

__private_attributes__ class-attribute

Metadata about the private attributes of the model.

__pydantic_complete__ = False class-attribute

Whether model building is completed, or if there are still undefined fields.

__pydantic_computed_fields__ class-attribute

A dictionary of computed field names and their corresponding [ComputedFieldInfo][pydantic.fields.ComputedFieldInfo] objects.

__pydantic_core_schema__ class-attribute

The core schema of the model.

__pydantic_custom_init__ class-attribute

Whether the model has a custom __init__ method.

__pydantic_decorators__ = _decorators.DecoratorInfos() class-attribute

Metadata containing the decorators defined on the model. This replaces Model.__validators__ and Model.__root_validators__ from Pydantic V1.

__pydantic_extra__ = _model_construction.NoInitField(init=False) class-attribute instance-attribute

A dictionary containing extra values, if [extra][pydantic.config.ConfigDict.extra] is set to 'allow'.

__pydantic_fields__ class-attribute

A dictionary of field names and their corresponding [FieldInfo][pydantic.fields.FieldInfo] objects. This replaces Model.__fields__ from Pydantic V1.

__pydantic_fields_set__ = _model_construction.NoInitField(init=False) class-attribute instance-attribute

The names of fields explicitly set during instantiation.

__pydantic_generic_metadata__ class-attribute

Metadata for generic models; contains data used for a similar purpose to args, origin, parameters in typing-module generics. May eventually be replaced by these.

__pydantic_parent_namespace__ = None class-attribute

Parent namespace of the model, used for automatic rebuilding of models.

__pydantic_post_init__ class-attribute

The name of the post-init method for the model, if defined.

__pydantic_private__ = _model_construction.NoInitField(init=False) class-attribute instance-attribute

Values of private attributes set on the model instance.

__pydantic_root_model__ = False class-attribute

Whether the model is a [RootModel][pydantic.root_model.RootModel].

__pydantic_serializer__ class-attribute

The pydantic-core SchemaSerializer used to dump instances of the model.

__pydantic_setattr_handlers__ class-attribute

__setattr__ handlers. Memoizing the handlers leads to a dramatic performance improvement in __setattr__.

__pydantic_validator__ class-attribute

The pydantic-core SchemaValidator used to validate instances of the model.

__signature__ class-attribute

The synthesized __init__ [Signature][inspect.Signature] of the model.

model_extra property

Get extra fields set during validation.

Returns:

Type Description
dict[str, Any] | None

A dictionary of extra fields, or None if config.extra is not set to "allow".

model_fields_set property

Returns the set of fields that have been explicitly set on this model instance.

Returns:

Type Description
set[str]

A set of strings representing the fields that have been set, i.e. that were not filled from defaults.

__copy__()

Returns a shallow copy of the model.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
def __copy__(self) -> Self:
    """Returns a shallow copy of the model."""
    cls = type(self)
    m = cls.__new__(cls)
    _object_setattr(m, '__dict__', copy(self.__dict__))
    _object_setattr(m, '__pydantic_extra__', copy(self.__pydantic_extra__))
    _object_setattr(m, '__pydantic_fields_set__', copy(self.__pydantic_fields_set__))

    if not hasattr(self, '__pydantic_private__') or self.__pydantic_private__ is None:
        _object_setattr(m, '__pydantic_private__', None)
    else:
        _object_setattr(
            m,
            '__pydantic_private__',
            {k: v for k, v in self.__pydantic_private__.items() if v is not PydanticUndefined},
        )

    return m

__deepcopy__(memo=None)

Returns a deep copy of the model.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
def __deepcopy__(self, memo: dict[int, Any] | None = None) -> Self:
    """Returns a deep copy of the model."""
    cls = type(self)
    m = cls.__new__(cls)
    _object_setattr(m, '__dict__', deepcopy(self.__dict__, memo=memo))
    _object_setattr(m, '__pydantic_extra__', deepcopy(self.__pydantic_extra__, memo=memo))
    # This next line doesn't need a deepcopy because __pydantic_fields_set__ is a set[str],
    # and attempting a deepcopy would be marginally slower.
    _object_setattr(m, '__pydantic_fields_set__', copy(self.__pydantic_fields_set__))

    if not hasattr(self, '__pydantic_private__') or self.__pydantic_private__ is None:
        _object_setattr(m, '__pydantic_private__', None)
    else:
        _object_setattr(
            m,
            '__pydantic_private__',
            deepcopy({k: v for k, v in self.__pydantic_private__.items() if v is not PydanticUndefined}, memo=memo),
        )

    return m

__get_pydantic_json_schema__(core_schema, handler) classmethod

Hook into generating the model's JSON schema.

Parameters:

Name Type Description Default
core_schema CoreSchema

A pydantic-core CoreSchema. You can ignore this argument and call the handler with a new CoreSchema, wrap this CoreSchema ({'type': 'nullable', 'schema': current_schema}), or just call the handler with the original schema.

required
handler GetJsonSchemaHandler

Call into Pydantic's internal JSON schema generation. This will raise a pydantic.errors.PydanticInvalidForJsonSchema if JSON schema generation fails. Since this gets called by BaseModel.model_json_schema you can override the schema_generator argument to that function to change JSON schema generation globally for a type.

required

Returns:

Type Description
JsonSchemaValue

A JSON schema, as a Python object.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@classmethod
def __get_pydantic_json_schema__(
    cls,
    core_schema: CoreSchema,
    handler: GetJsonSchemaHandler,
    /,
) -> JsonSchemaValue:
    """Hook into generating the model's JSON schema.

    Args:
        core_schema: A `pydantic-core` CoreSchema.
            You can ignore this argument and call the handler with a new CoreSchema,
            wrap this CoreSchema (`{'type': 'nullable', 'schema': current_schema}`),
            or just call the handler with the original schema.
        handler: Call into Pydantic's internal JSON schema generation.
            This will raise a `pydantic.errors.PydanticInvalidForJsonSchema` if JSON schema
            generation fails.
            Since this gets called by `BaseModel.model_json_schema` you can override the
            `schema_generator` argument to that function to change JSON schema generation globally
            for a type.

    Returns:
        A JSON schema, as a Python object.
    """
    return handler(core_schema)

__init__(**data)

Create a new model by parsing and validating input data from keyword arguments.

Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
def __init__(self, /, **data: Any) -> None:
    """Create a new model by parsing and validating input data from keyword arguments.

    Raises [`ValidationError`][pydantic_core.ValidationError] if the input data cannot be
    validated to form a valid model.

    `self` is explicitly positional-only to allow `self` as a field name.
    """
    # `__tracebackhide__` tells pytest and some other tools to omit this function from tracebacks
    __tracebackhide__ = True
    validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
    if self is not validated_self:
        warnings.warn(
            'A custom validator is returning a value other than `self`.\n'
            "Returning anything other than `self` from a top level model validator isn't supported when validating via `__init__`.\n"
            'See the `model_validator` docs (https://docs.pydantic.dev/latest/concepts/validators/#model-validators) for more details.',
            stacklevel=2,
        )

__init_subclass__(**kwargs)

This signature is included purely to help type-checkers check arguments to class declaration, which provides a way to conveniently set model_config key/value pairs.

from pydantic import BaseModel

class MyModel(BaseModel, extra='allow'): ...

However, this may be deceiving, since the actual calls to __init_subclass__ will not receive any of the config arguments, and will only receive any keyword arguments passed during class initialization that are not expected keys in ConfigDict. (This is due to the way ModelMetaclass.__new__ works.)

Parameters:

Name Type Description Default
**kwargs Unpack[ConfigDict]

Keyword arguments passed to the class definition, which set model_config

{}
Note

You may want to override __pydantic_init_subclass__ instead, which behaves similarly but is called after the class is fully initialized.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
def __init_subclass__(cls, **kwargs: Unpack[ConfigDict]):
    """This signature is included purely to help type-checkers check arguments to class declaration, which
    provides a way to conveniently set model_config key/value pairs.

    ```python
    from pydantic import BaseModel

    class MyModel(BaseModel, extra='allow'): ...
    ```

    However, this may be deceiving, since the _actual_ calls to `__init_subclass__` will not receive any
    of the config arguments, and will only receive any keyword arguments passed during class initialization
    that are _not_ expected keys in ConfigDict. (This is due to the way `ModelMetaclass.__new__` works.)

    Args:
        **kwargs: Keyword arguments passed to the class definition, which set model_config

    Note:
        You may want to override `__pydantic_init_subclass__` instead, which behaves similarly but is called
        *after* the class is fully initialized.
    """

__iter__()

So dict(model) works.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
def __iter__(self) -> TupleGenerator:
    """So `dict(model)` works."""
    yield from [(k, v) for (k, v) in self.__dict__.items() if not k.startswith('_')]
    extra = self.__pydantic_extra__
    if extra:
        yield from extra.items()

__pydantic_init_subclass__(**kwargs) classmethod

This is intended to behave just like __init_subclass__, but is called by ModelMetaclass only after basic class initialization is complete. In particular, attributes like model_fields will be present when this is called, but forward annotations are not guaranteed to be resolved yet, meaning that creating an instance of the class may fail.

This is necessary because __init_subclass__ will always be called by type.__new__, and it would require a prohibitively large refactor to the ModelMetaclass to ensure that type.__new__ was called in such a manner that the class would already be sufficiently initialized.

This will receive the same kwargs that would be passed to the standard __init_subclass__, namely, any kwargs passed to the class definition that aren't used internally by Pydantic.

Parameters:

Name Type Description Default
**kwargs Any

Any keyword arguments passed to the class definition that aren't used internally by Pydantic.

{}
Note

You may want to override __pydantic_on_complete__() instead, which is called once the class and its fields are fully initialized and ready for validation.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@classmethod
def __pydantic_init_subclass__(cls, **kwargs: Any) -> None:
    """This is intended to behave just like `__init_subclass__`, but is called by `ModelMetaclass`
    only after basic class initialization is complete. In particular, attributes like `model_fields` will
    be present when this is called, but forward annotations are not guaranteed to be resolved yet,
    meaning that creating an instance of the class may fail.

    This is necessary because `__init_subclass__` will always be called by `type.__new__`,
    and it would require a prohibitively large refactor to the `ModelMetaclass` to ensure that
    `type.__new__` was called in such a manner that the class would already be sufficiently initialized.

    This will receive the same `kwargs` that would be passed to the standard `__init_subclass__`, namely,
    any kwargs passed to the class definition that aren't used internally by Pydantic.

    Args:
        **kwargs: Any keyword arguments passed to the class definition that aren't used internally
            by Pydantic.

    Note:
        You may want to override [`__pydantic_on_complete__()`][pydantic.main.BaseModel.__pydantic_on_complete__]
        instead, which is called once the class and its fields are fully initialized and ready for validation.
    """

__pydantic_on_complete__() classmethod

This is called once the class and its fields are fully initialized and ready to be used.

This typically happens when the class is created (just before __pydantic_init_subclass__() is called on the superclass), except when forward annotations are used that could not immediately be resolved. In that case, it will be called later, when the model is rebuilt automatically or explicitly using model_rebuild().

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@classmethod
def __pydantic_on_complete__(cls) -> None:
    """This is called once the class and its fields are fully initialized and ready to be used.

    This typically happens when the class is created (just before
    [`__pydantic_init_subclass__()`][pydantic.main.BaseModel.__pydantic_init_subclass__] is called on the superclass),
    except when forward annotations are used that could not immediately be resolved.
    In that case, it will be called later, when the model is rebuilt automatically or explicitly using
    [`model_rebuild()`][pydantic.main.BaseModel.model_rebuild].
    """

copy(*, include=None, exclude=None, update=None, deep=False)

Returns a copy of the model.

Deprecated

This method is now deprecated; use model_copy instead.

If you need include or exclude, use:

data = self.model_dump(include=include, exclude=exclude, round_trip=True)
data = {**data, **(update or {})}
copied = self.model_validate(data)

Parameters:

Name Type Description Default
include AbstractSetIntStr | MappingIntStrAny | None

Optional set or mapping specifying which fields to include in the copied model.

None
exclude AbstractSetIntStr | MappingIntStrAny | None

Optional set or mapping specifying which fields to exclude in the copied model.

None
update Dict[str, Any] | None

Optional dictionary of field-value pairs to override field values in the copied model.

None
deep bool

If True, the values of fields that are Pydantic models will be deep-copied.

False

Returns:

Type Description
Self

A copy of the model with included, excluded and updated fields as specified.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@typing_extensions.deprecated(
    'The `copy` method is deprecated; use `model_copy` instead. '
    'See the docstring of `BaseModel.copy` for details about how to handle `include` and `exclude`.',
    category=None,
)
def copy(
    self,
    *,
    include: AbstractSetIntStr | MappingIntStrAny | None = None,
    exclude: AbstractSetIntStr | MappingIntStrAny | None = None,
    update: Dict[str, Any] | None = None,  # noqa UP006
    deep: bool = False,
) -> Self:  # pragma: no cover
    """Returns a copy of the model.

    !!! warning "Deprecated"
        This method is now deprecated; use `model_copy` instead.

    If you need `include` or `exclude`, use:

    ```python {test="skip" lint="skip"}
    data = self.model_dump(include=include, exclude=exclude, round_trip=True)
    data = {**data, **(update or {})}
    copied = self.model_validate(data)
    ```

    Args:
        include: Optional set or mapping specifying which fields to include in the copied model.
        exclude: Optional set or mapping specifying which fields to exclude in the copied model.
        update: Optional dictionary of field-value pairs to override field values in the copied model.
        deep: If True, the values of fields that are Pydantic models will be deep-copied.

    Returns:
        A copy of the model with included, excluded and updated fields as specified.
    """
    warnings.warn(
        'The `copy` method is deprecated; use `model_copy` instead. '
        'See the docstring of `BaseModel.copy` for details about how to handle `include` and `exclude`.',
        category=PydanticDeprecatedSince20,
        stacklevel=2,
    )
    from .deprecated import copy_internals

    values = dict(
        copy_internals._iter(
            self, to_dict=False, by_alias=False, include=include, exclude=exclude, exclude_unset=False
        ),
        **(update or {}),
    )
    if self.__pydantic_private__ is None:
        private = None
    else:
        private = {k: v for k, v in self.__pydantic_private__.items() if v is not PydanticUndefined}

    if self.__pydantic_extra__ is None:
        extra: dict[str, Any] | None = None
    else:
        extra = self.__pydantic_extra__.copy()
        for k in list(self.__pydantic_extra__):
            if k not in values:  # k was in the exclude
                extra.pop(k)
        for k in list(values):
            if k in self.__pydantic_extra__:  # k must have come from extra
                extra[k] = values.pop(k)

    # new `__pydantic_fields_set__` can have unset optional fields with a set value in `update` kwarg
    if update:
        fields_set = self.__pydantic_fields_set__ | update.keys()
    else:
        fields_set = set(self.__pydantic_fields_set__)

    # removing excluded fields from `__pydantic_fields_set__`
    if exclude:
        fields_set -= set(exclude)

    return copy_internals._copy_and_set_values(self, values, fields_set, extra, private, deep=deep)

model_computed_fields() classmethod

A mapping of computed field names to their respective [ComputedFieldInfo][pydantic.fields.ComputedFieldInfo] instances.

Warning

Accessing this attribute from a model instance is deprecated, and will not work in Pydantic V3. Instead, you should access this attribute from the model class.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@_utils.deprecated_instance_property
@classmethod
def model_computed_fields(cls) -> dict[str, ComputedFieldInfo]:
    """A mapping of computed field names to their respective [`ComputedFieldInfo`][pydantic.fields.ComputedFieldInfo] instances.

    !!! warning
        Accessing this attribute from a model instance is deprecated, and will not work in Pydantic V3.
        Instead, you should access this attribute from the model class.
    """
    return getattr(cls, '__pydantic_computed_fields__', {})

model_construct(_fields_set=None, **values) classmethod

Creates a new instance of the Model class with validated data.

Creates a new model setting __dict__ and __pydantic_fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed.

Note

model_construct() generally respects the model_config.extra setting on the provided model. That is, if model_config.extra == 'allow', then all extra passed values are added to the model instance's __dict__ and __pydantic_extra__ fields. If model_config.extra == 'ignore' (the default), then all extra passed values are ignored. Because no validation is performed with a call to model_construct(), having model_config.extra == 'forbid' does not result in an error if extra values are passed, but they will be ignored.

Parameters:

Name Type Description Default
_fields_set set[str] | None

A set of field names that were originally explicitly set during instantiation. If provided, this is directly used for the [model_fields_set][pydantic.BaseModel.model_fields_set] attribute. Otherwise, the field names from the values argument will be used.

None
values Any

Trusted or pre-validated data dictionary.

{}

Returns:

Type Description
Self

A new instance of the Model class with validated data.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@classmethod
def model_construct(cls, _fields_set: set[str] | None = None, **values: Any) -> Self:  # noqa: C901
    """Creates a new instance of the `Model` class with validated data.

    Creates a new model setting `__dict__` and `__pydantic_fields_set__` from trusted or pre-validated data.
    Default values are respected, but no other validation is performed.

    !!! note
        `model_construct()` generally respects the `model_config.extra` setting on the provided model.
        That is, if `model_config.extra == 'allow'`, then all extra passed values are added to the model instance's `__dict__`
        and `__pydantic_extra__` fields. If `model_config.extra == 'ignore'` (the default), then all extra passed values are ignored.
        Because no validation is performed with a call to `model_construct()`, having `model_config.extra == 'forbid'` does not result in
        an error if extra values are passed, but they will be ignored.

    Args:
        _fields_set: A set of field names that were originally explicitly set during instantiation. If provided,
            this is directly used for the [`model_fields_set`][pydantic.BaseModel.model_fields_set] attribute.
            Otherwise, the field names from the `values` argument will be used.
        values: Trusted or pre-validated data dictionary.

    Returns:
        A new instance of the `Model` class with validated data.
    """
    m = cls.__new__(cls)
    fields_values: dict[str, Any] = {}
    fields_set = set()

    for name, field in cls.__pydantic_fields__.items():
        if field.alias is not None and field.alias in values:
            fields_values[name] = values.pop(field.alias)
            fields_set.add(name)

        if (name not in fields_set) and (field.validation_alias is not None):
            validation_aliases: list[str | AliasPath] = (
                field.validation_alias.choices
                if isinstance(field.validation_alias, AliasChoices)
                else [field.validation_alias]
            )

            for alias in validation_aliases:
                if isinstance(alias, str) and alias in values:
                    fields_values[name] = values.pop(alias)
                    fields_set.add(name)
                    break
                elif isinstance(alias, AliasPath):
                    value = alias.search_dict_for_path(values)
                    if value is not PydanticUndefined:
                        fields_values[name] = value
                        fields_set.add(name)
                        break

        if name not in fields_set:
            if name in values:
                fields_values[name] = values.pop(name)
                fields_set.add(name)
            elif not field.is_required():
                fields_values[name] = field.get_default(call_default_factory=True, validated_data=fields_values)
    if _fields_set is None:
        _fields_set = fields_set

    _extra: dict[str, Any] | None = values if cls.model_config.get('extra') == 'allow' else None
    _object_setattr(m, '__dict__', fields_values)
    _object_setattr(m, '__pydantic_fields_set__', _fields_set)
    if not cls.__pydantic_root_model__:
        _object_setattr(m, '__pydantic_extra__', _extra)

    if cls.__pydantic_post_init__:
        m.model_post_init(None)
        # update private attributes with values set
        if hasattr(m, '__pydantic_private__') and m.__pydantic_private__ is not None:
            for k, v in values.items():
                if k in m.__private_attributes__:
                    m.__pydantic_private__[k] = v

    elif not cls.__pydantic_root_model__:
        # Note: if there are any private attributes, cls.__pydantic_post_init__ would exist
        # Since it doesn't, that means that `__pydantic_private__` should be set to None
        _object_setattr(m, '__pydantic_private__', None)

    return m
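The core move of `model_construct` — allocate the instance without calling `__init__`, then install trusted values directly — can be shown with the stdlib alone. The `User` class and `construct` helper below are hypothetical illustrations of that trick, not part of pydantic or this toolkit:

```python
class User:
    def __init__(self, name: str) -> None:
        # Stand-in for validation work that model_construct deliberately skips.
        raise RuntimeError("validation would run here")


def construct(cls, **values):
    # Same core move as model_construct: allocate without calling __init__,
    # then set the trusted values on __dict__ directly.
    obj = cls.__new__(cls)
    object.__setattr__(obj, "__dict__", dict(values))
    return obj


u = construct(User, name="alice")
print(u.name)  # alice  (User.__init__ never ran)
```

This is why `model_construct` should only receive trusted or pre-validated data: nothing checks the values on the way in.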

model_copy(*, update=None, deep=False)

Usage Documentation

model_copy

Returns a copy of the model.

Note

The underlying instance's [__dict__][object.__dict__] attribute is copied. This might have unexpected side effects if you store anything in it, on top of the model fields (e.g. the value of [cached properties][functools.cached_property]).

Parameters:

Name Type Description Default
update Mapping[str, Any] | None

Values to change/add in the new model. Note: the data is not validated before creating the new model. You should trust this data.

None
deep bool

Set to True to make a deep copy of the model.

False

Returns:

Type Description
Self

New model instance.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
def model_copy(self, *, update: Mapping[str, Any] | None = None, deep: bool = False) -> Self:
    """!!! abstract "Usage Documentation"
        [`model_copy`](../concepts/models.md#model-copy)

    Returns a copy of the model.

    !!! note
        The underlying instance's [`__dict__`][object.__dict__] attribute is copied. This
        might have unexpected side effects if you store anything in it, on top of the model
        fields (e.g. the value of [cached properties][functools.cached_property]).

    Args:
        update: Values to change/add in the new model. Note: the data is not validated
            before creating the new model. You should trust this data.
        deep: Set to `True` to make a deep copy of the model.

    Returns:
        New model instance.
    """
    copied = self.__deepcopy__() if deep else self.__copy__()
    if update:
        if self.model_config.get('extra') == 'allow':
            for k, v in update.items():
                if k in self.__pydantic_fields__:
                    copied.__dict__[k] = v
                else:
                    if copied.__pydantic_extra__ is None:
                        copied.__pydantic_extra__ = {}
                    copied.__pydantic_extra__[k] = v
        else:
            copied.__dict__.update(update)
        copied.__pydantic_fields_set__.update(update.keys())
    return copied
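The practical difference between `deep=False` and `deep=True` is the standard shallow-vs-deep copy distinction: a shallow copy shares nested mutable values with the original. A minimal stdlib illustration of that semantics (not pydantic code):

```python
from copy import copy, deepcopy

original = {"tags": ["a", "b"]}

shallow = copy(original)    # roughly what model_copy(deep=False) does
deep = deepcopy(original)   # roughly what model_copy(deep=True) does

# Mutating the original's nested list leaks into the shallow copy only.
original["tags"].append("c")
print(shallow["tags"], deep["tags"])  # ['a', 'b', 'c'] ['a', 'b']
```

So reach for `deep=True` whenever the copied model's mutable fields (lists, dicts, nested models) must be independent of the original.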

model_dump(*, mode='python', include=None, exclude=None, context=None, by_alias=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, exclude_computed_fields=False, round_trip=False, warnings=True, fallback=None, serialize_as_any=False)

Usage Documentation

model_dump

Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

Parameters:

  mode: Literal['json', 'python'] | str = 'python'
      The mode in which to_python should run. If mode is 'json', the output will only contain JSON serializable types. If mode is 'python', the output may contain non-JSON-serializable Python objects.
  include: IncEx | None = None
      A set of fields to include in the output.
  exclude: IncEx | None = None
      A set of fields to exclude from the output.
  context: Any | None = None
      Additional context to pass to the serializer.
  by_alias: bool | None = None
      Whether to use the field's alias in the dictionary key if defined.
  exclude_unset: bool = False
      Whether to exclude fields that have not been explicitly set.
  exclude_defaults: bool = False
      Whether to exclude fields that are set to their default value.
  exclude_none: bool = False
      Whether to exclude fields that have a value of None.
  exclude_computed_fields: bool = False
      Whether to exclude computed fields. While this can be useful for round-tripping, it is usually recommended to use the dedicated round_trip parameter instead.
  round_trip: bool = False
      If True, dumped values should be valid as input for non-idempotent types such as Json[T].
  warnings: bool | Literal['none', 'warn', 'error'] = True
      How to handle serialization errors. False/"none" ignores them, True/"warn" logs errors, "error" raises a PydanticSerializationError.
  fallback: Callable[[Any], Any] | None = None
      A function to call when an unknown value is encountered. If not provided, a PydanticSerializationError is raised.
  serialize_as_any: bool = False
      Whether to serialize fields with duck-typing serialization behavior.

Returns:

  dict[str, Any]
      A dictionary representation of the model.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
def model_dump(
    self,
    *,
    mode: Literal['json', 'python'] | str = 'python',
    include: IncEx | None = None,
    exclude: IncEx | None = None,
    context: Any | None = None,
    by_alias: bool | None = None,
    exclude_unset: bool = False,
    exclude_defaults: bool = False,
    exclude_none: bool = False,
    exclude_computed_fields: bool = False,
    round_trip: bool = False,
    warnings: bool | Literal['none', 'warn', 'error'] = True,
    fallback: Callable[[Any], Any] | None = None,
    serialize_as_any: bool = False,
) -> dict[str, Any]:
    """!!! abstract "Usage Documentation"
        [`model_dump`](../concepts/serialization.md#python-mode)

    Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

    Args:
        mode: The mode in which `to_python` should run.
            If mode is 'json', the output will only contain JSON serializable types.
            If mode is 'python', the output may contain non-JSON-serializable Python objects.
        include: A set of fields to include in the output.
        exclude: A set of fields to exclude from the output.
        context: Additional context to pass to the serializer.
        by_alias: Whether to use the field's alias in the dictionary key if defined.
        exclude_unset: Whether to exclude fields that have not been explicitly set.
        exclude_defaults: Whether to exclude fields that are set to their default value.
        exclude_none: Whether to exclude fields that have a value of `None`.
        exclude_computed_fields: Whether to exclude computed fields.
            While this can be useful for round-tripping, it is usually recommended to use the dedicated
            `round_trip` parameter instead.
        round_trip: If True, dumped values should be valid as input for non-idempotent types such as Json[T].
        warnings: How to handle serialization errors. False/"none" ignores them, True/"warn" logs errors,
            "error" raises a [`PydanticSerializationError`][pydantic_core.PydanticSerializationError].
        fallback: A function to call when an unknown value is encountered. If not provided,
            a [`PydanticSerializationError`][pydantic_core.PydanticSerializationError] error is raised.
        serialize_as_any: Whether to serialize fields with duck-typing serialization behavior.

    Returns:
        A dictionary representation of the model.
    """
    return self.__pydantic_serializer__.to_python(
        self,
        mode=mode,
        by_alias=by_alias,
        include=include,
        exclude=exclude,
        context=context,
        exclude_unset=exclude_unset,
        exclude_defaults=exclude_defaults,
        exclude_none=exclude_none,
        exclude_computed_fields=exclude_computed_fields,
        round_trip=round_trip,
        warnings=warnings,
        fallback=fallback,
        serialize_as_any=serialize_as_any,
    )

model_dump_json(*, indent=None, ensure_ascii=False, include=None, exclude=None, context=None, by_alias=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, exclude_computed_fields=False, round_trip=False, warnings=True, fallback=None, serialize_as_any=False)

Usage Documentation: model_dump_json

Generates a JSON representation of the model using Pydantic's to_json method.

Parameters:

  indent: int | None = None
      Indentation to use in the JSON output. If None is passed, the output will be compact.
  ensure_ascii: bool = False
      If True, the output is guaranteed to have all incoming non-ASCII characters escaped. If False (the default), these characters will be output as-is.
  include: IncEx | None = None
      Field(s) to include in the JSON output.
  exclude: IncEx | None = None
      Field(s) to exclude from the JSON output.
  context: Any | None = None
      Additional context to pass to the serializer.
  by_alias: bool | None = None
      Whether to serialize using field aliases.
  exclude_unset: bool = False
      Whether to exclude fields that have not been explicitly set.
  exclude_defaults: bool = False
      Whether to exclude fields that are set to their default value.
  exclude_none: bool = False
      Whether to exclude fields that have a value of None.
  exclude_computed_fields: bool = False
      Whether to exclude computed fields. While this can be useful for round-tripping, it is usually recommended to use the dedicated round_trip parameter instead.
  round_trip: bool = False
      If True, dumped values should be valid as input for non-idempotent types such as Json[T].
  warnings: bool | Literal['none', 'warn', 'error'] = True
      How to handle serialization errors. False/"none" ignores them, True/"warn" logs errors, "error" raises a PydanticSerializationError.
  fallback: Callable[[Any], Any] | None = None
      A function to call when an unknown value is encountered. If not provided, a PydanticSerializationError is raised.
  serialize_as_any: bool = False
      Whether to serialize fields with duck-typing serialization behavior.

Returns:

  str
      A JSON string representation of the model.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
def model_dump_json(
    self,
    *,
    indent: int | None = None,
    ensure_ascii: bool = False,
    include: IncEx | None = None,
    exclude: IncEx | None = None,
    context: Any | None = None,
    by_alias: bool | None = None,
    exclude_unset: bool = False,
    exclude_defaults: bool = False,
    exclude_none: bool = False,
    exclude_computed_fields: bool = False,
    round_trip: bool = False,
    warnings: bool | Literal['none', 'warn', 'error'] = True,
    fallback: Callable[[Any], Any] | None = None,
    serialize_as_any: bool = False,
) -> str:
    """!!! abstract "Usage Documentation"
        [`model_dump_json`](../concepts/serialization.md#json-mode)

    Generates a JSON representation of the model using Pydantic's `to_json` method.

    Args:
        indent: Indentation to use in the JSON output. If None is passed, the output will be compact.
        ensure_ascii: If `True`, the output is guaranteed to have all incoming non-ASCII characters escaped.
            If `False` (the default), these characters will be output as-is.
        include: Field(s) to include in the JSON output.
        exclude: Field(s) to exclude from the JSON output.
        context: Additional context to pass to the serializer.
        by_alias: Whether to serialize using field aliases.
        exclude_unset: Whether to exclude fields that have not been explicitly set.
        exclude_defaults: Whether to exclude fields that are set to their default value.
        exclude_none: Whether to exclude fields that have a value of `None`.
        exclude_computed_fields: Whether to exclude computed fields.
            While this can be useful for round-tripping, it is usually recommended to use the dedicated
            `round_trip` parameter instead.
        round_trip: If True, dumped values should be valid as input for non-idempotent types such as Json[T].
        warnings: How to handle serialization errors. False/"none" ignores them, True/"warn" logs errors,
            "error" raises a [`PydanticSerializationError`][pydantic_core.PydanticSerializationError].
        fallback: A function to call when an unknown value is encountered. If not provided,
            a [`PydanticSerializationError`][pydantic_core.PydanticSerializationError] error is raised.
        serialize_as_any: Whether to serialize fields with duck-typing serialization behavior.

    Returns:
        A JSON string representation of the model.
    """
    return self.__pydantic_serializer__.to_json(
        self,
        indent=indent,
        ensure_ascii=ensure_ascii,
        include=include,
        exclude=exclude,
        context=context,
        by_alias=by_alias,
        exclude_unset=exclude_unset,
        exclude_defaults=exclude_defaults,
        exclude_none=exclude_none,
        exclude_computed_fields=exclude_computed_fields,
        round_trip=round_trip,
        warnings=warnings,
        fallback=fallback,
        serialize_as_any=serialize_as_any,
    ).decode()

model_fields() classmethod

A mapping of field names to their respective [FieldInfo][pydantic.fields.FieldInfo] instances.

Warning

Accessing this attribute from a model instance is deprecated, and will not work in Pydantic V3. Instead, you should access this attribute from the model class.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@_utils.deprecated_instance_property
@classmethod
def model_fields(cls) -> dict[str, FieldInfo]:
    """A mapping of field names to their respective [`FieldInfo`][pydantic.fields.FieldInfo] instances.

    !!! warning
        Accessing this attribute from a model instance is deprecated, and will not work in Pydantic V3.
        Instead, you should access this attribute from the model class.
    """
    return getattr(cls, '__pydantic_fields__', {})
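
To avoid the deprecated instance access, read model_fields from the class, as in this small sketch (the Chunk model is a hypothetical stand-in):

```python
from pydantic import BaseModel, Field

class Chunk(BaseModel):
    # Hypothetical stand-in model for illustration only.
    order: int = Field(default=0, description="Chunk order.")

# Access model_fields on the class, not an instance.
info = Chunk.model_fields["order"]
assert info.description == "Chunk order."
assert info.default == 0
```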

model_json_schema(by_alias=True, ref_template=DEFAULT_REF_TEMPLATE, schema_generator=GenerateJsonSchema, mode='validation', *, union_format='any_of') classmethod

Generates a JSON schema for a model class.

Parameters:

  by_alias: bool = True
      Whether to use attribute aliases or not.
  ref_template: str = DEFAULT_REF_TEMPLATE
      The reference template.
  union_format: Literal['any_of', 'primitive_type_array'] = 'any_of'
      The format to use when combining schemas from unions. Can be one of:
        - 'any_of': Use the anyOf keyword to combine schemas (the default).
        - 'primitive_type_array': Use the type keyword as an array of strings, containing each type of the combination. If any of the schemas is not a primitive type (string, boolean, null, integer or number) or contains constraints/metadata, falls back to any_of.
  schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema
      To override the logic used to generate the JSON schema, provide a subclass of GenerateJsonSchema with your desired modifications.
  mode: JsonSchemaMode = 'validation'
      The mode in which to generate the schema.

Returns:

  dict[str, Any]
      The JSON schema for the given model class.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@classmethod
def model_json_schema(
    cls,
    by_alias: bool = True,
    ref_template: str = DEFAULT_REF_TEMPLATE,
    schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema,
    mode: JsonSchemaMode = 'validation',
    *,
    union_format: Literal['any_of', 'primitive_type_array'] = 'any_of',
) -> dict[str, Any]:
    """Generates a JSON schema for a model class.

    Args:
        by_alias: Whether to use attribute aliases or not.
        ref_template: The reference template.
        union_format: The format to use when combining schemas from unions together. Can be one of:

            - `'any_of'`: Use the [`anyOf`](https://json-schema.org/understanding-json-schema/reference/combining#anyOf)
            keyword to combine schemas (the default).
            - `'primitive_type_array'`: Use the [`type`](https://json-schema.org/understanding-json-schema/reference/type)
            keyword as an array of strings, containing each type of the combination. If any of the schemas is not a primitive
            type (`string`, `boolean`, `null`, `integer` or `number`) or contains constraints/metadata, falls back to
            `any_of`.
        schema_generator: To override the logic used to generate the JSON schema, as a subclass of
            `GenerateJsonSchema` with your desired modifications
        mode: The mode in which to generate the schema.

    Returns:
        The JSON schema for the given model class.
    """
    return model_json_schema(
        cls,
        by_alias=by_alias,
        ref_template=ref_template,
        union_format=union_format,
        schema_generator=schema_generator,
        mode=mode,
    )
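
A minimal sketch of the by_alias parameter; the Chunk model and its chunkText alias are hypothetical, for illustration only:

```python
from pydantic import BaseModel, Field

class Chunk(BaseModel):
    # Hypothetical stand-in model with an illustrative alias.
    text: str = Field(default="", alias="chunkText")

# by_alias=True (the default) keys schema properties by alias ...
schema_alias = Chunk.model_json_schema()
assert "chunkText" in schema_alias["properties"]

# ... while by_alias=False keys them by field name.
schema_names = Chunk.model_json_schema(by_alias=False)
assert "text" in schema_names["properties"]
```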

model_parametrized_name(params) classmethod

Compute the class name for parametrizations of generic classes.

This method can be overridden to achieve a custom naming scheme for generic BaseModels.

Parameters:

  params: tuple[type[Any], ...] (required)
      Tuple of types of the class. Given a generic class Model with 2 type variables and a concrete model Model[str, int], the value (str, int) would be passed to params.

Returns:

  str
      String representing the new class where params are passed to cls as type variables.

Raises:

  TypeError
      Raised when trying to generate concrete names for non-generic models.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@classmethod
def model_parametrized_name(cls, params: tuple[type[Any], ...]) -> str:
    """Compute the class name for parametrizations of generic classes.

    This method can be overridden to achieve a custom naming scheme for generic BaseModels.

    Args:
        params: Tuple of types of the class. Given a generic class
            `Model` with 2 type variables and a concrete model `Model[str, int]`,
            the value `(str, int)` would be passed to `params`.

    Returns:
        String representing the new class where `params` are passed to `cls` as type variables.

    Raises:
        TypeError: Raised when trying to generate concrete names for non-generic models.
    """
    if not issubclass(cls, Generic):
        raise TypeError('Concrete names should only be generated for generic models.')

    # Any strings received should represent forward references, so we handle them specially below.
    # If we eventually move toward wrapping them in a ForwardRef in __class_getitem__ in the future,
    # we may be able to remove this special case.
    param_names = [param if isinstance(param, str) else _repr.display_as_type(param) for param in params]
    params_component = ', '.join(param_names)
    return f'{cls.__name__}[{params_component}]'
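
A short sketch of the default naming scheme, using a hypothetical generic model:

```python
from typing import Generic, TypeVar
from pydantic import BaseModel

T = TypeVar("T")

class Wrapper(BaseModel, Generic[T]):
    # Hypothetical generic model for illustration only.
    value: T

# The default implementation joins the parameter types into the class name.
assert Wrapper.model_parametrized_name((int,)) == "Wrapper[int]"
assert Wrapper.model_parametrized_name((str,)) == "Wrapper[str]"
```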

model_post_init(context)

Override this method to perform additional initialization after __init__ and model_construct. This is useful if you want to do some validation that requires the entire model to be initialized.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
def model_post_init(self, context: Any, /) -> None:
    """Override this method to perform additional initialization after `__init__` and `model_construct`.
    This is useful if you want to do some validation that requires the entire model to be initialized.
    """

model_rebuild(*, force=False, raise_errors=True, _parent_namespace_depth=2, _types_namespace=None) classmethod

Try to rebuild the pydantic-core schema for the model.

This may be necessary when one of the annotations is a ForwardRef which could not be resolved during the initial attempt to build the schema, and automatic rebuilding fails.

Parameters:

  force: bool = False
      Whether to force the rebuilding of the model schema.
  raise_errors: bool = True
      Whether to raise errors.
  _parent_namespace_depth: int = 2
      The depth level of the parent namespace.
  _types_namespace: MappingNamespace | None = None
      The types namespace.

Returns:

  bool | None
      Returns None if the schema is already "complete" and rebuilding was not required. If rebuilding was required, returns True if rebuilding was successful, otherwise False.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@classmethod
def model_rebuild(
    cls,
    *,
    force: bool = False,
    raise_errors: bool = True,
    _parent_namespace_depth: int = 2,
    _types_namespace: MappingNamespace | None = None,
) -> bool | None:
    """Try to rebuild the pydantic-core schema for the model.

    This may be necessary when one of the annotations is a ForwardRef which could not be resolved during
    the initial attempt to build the schema, and automatic rebuilding fails.

    Args:
        force: Whether to force the rebuilding of the model schema, defaults to `False`.
        raise_errors: Whether to raise errors, defaults to `True`.
        _parent_namespace_depth: The depth level of the parent namespace, defaults to 2.
        _types_namespace: The types namespace, defaults to `None`.

    Returns:
        Returns `None` if the schema is already "complete" and rebuilding was not required.
        If rebuilding _was_ required, returns `True` if rebuilding was successful, otherwise `False`.
    """
    already_complete = cls.__pydantic_complete__
    if already_complete and not force:
        return None

    cls.__pydantic_complete__ = False

    for attr in ('__pydantic_core_schema__', '__pydantic_validator__', '__pydantic_serializer__'):
        if attr in cls.__dict__ and not isinstance(getattr(cls, attr), _mock_val_ser.MockValSer):
            # Deleting the validator/serializer is necessary as otherwise they can get reused in
            # pydantic-core. We do so only if they aren't mock instances, otherwise — as `model_rebuild()`
            # isn't thread-safe — concurrent model instantiations can lead to the parent validator being used.
            # Same applies for the core schema that can be reused in schema generation.
            delattr(cls, attr)

    if _types_namespace is not None:
        rebuild_ns = _types_namespace
    elif _parent_namespace_depth > 0:
        rebuild_ns = _typing_extra.parent_frame_namespace(parent_depth=_parent_namespace_depth, force=True) or {}
    else:
        rebuild_ns = {}

    parent_ns = _model_construction.unpack_lenient_weakvaluedict(cls.__pydantic_parent_namespace__) or {}

    ns_resolver = _namespace_utils.NsResolver(
        parent_namespace={**rebuild_ns, **parent_ns},
    )

    return _model_construction.complete_model_class(
        cls,
        _config.ConfigWrapper(cls.model_config, check=False),
        ns_resolver,
        raise_errors=raise_errors,
        # If the model was already complete, we don't need to call the hook again.
        call_on_complete_hook=not already_complete,
    )

model_validate(obj, *, strict=None, extra=None, from_attributes=None, context=None, by_alias=None, by_name=None) classmethod

Validate a pydantic model instance.

Parameters:

  obj: Any (required)
      The object to validate.
  strict: bool | None = None
      Whether to enforce types strictly.
  extra: ExtraValues | None = None
      Whether to ignore, allow, or forbid extra data during model validation. See the extra configuration value for details.
  from_attributes: bool | None = None
      Whether to extract data from object attributes.
  context: Any | None = None
      Additional context to pass to the validator.
  by_alias: bool | None = None
      Whether to use the field's alias when validating against the provided input data.
  by_name: bool | None = None
      Whether to use the field's name when validating against the provided input data.

Raises:

  ValidationError
      If the object could not be validated.

Returns:

  Self
      The validated model instance.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@classmethod
def model_validate(
    cls,
    obj: Any,
    *,
    strict: bool | None = None,
    extra: ExtraValues | None = None,
    from_attributes: bool | None = None,
    context: Any | None = None,
    by_alias: bool | None = None,
    by_name: bool | None = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to enforce types strictly.
        extra: Whether to ignore, allow, or forbid extra data during model validation.
            See the [`extra` configuration value][pydantic.ConfigDict.extra] for details.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.
        by_alias: Whether to use the field's alias when validating against the provided input data.
        by_name: Whether to use the field's name when validating against the provided input data.

    Raises:
        ValidationError: If the object could not be validated.

    Returns:
        The validated model instance.
    """
    # `__tracebackhide__` tells pytest and some other tools to omit this function from tracebacks
    __tracebackhide__ = True

    if by_alias is False and by_name is not True:
        raise PydanticUserError(
            'At least one of `by_alias` or `by_name` must be set to True.',
            code='validate-by-alias-and-name-false',
        )

    return cls.__pydantic_validator__.validate_python(
        obj,
        strict=strict,
        extra=extra,
        from_attributes=from_attributes,
        context=context,
        by_alias=by_alias,
        by_name=by_name,
    )
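
A minimal sketch of dict validation, from_attributes, and error handling; the Chunk model is a hypothetical stand-in:

```python
from types import SimpleNamespace
from pydantic import BaseModel, ValidationError

class Chunk(BaseModel):
    # Hypothetical stand-in model for illustration only.
    id: str
    order: int = 0

# Plain dicts validate directly, with lax coercion ("3" -> 3).
chunk = Chunk.model_validate({"id": "cont_abc", "order": "3"})
assert chunk.order == 3

# from_attributes=True reads attributes off arbitrary objects (e.g. ORM rows).
row = SimpleNamespace(id="cont_xyz", order=7)
from_row = Chunk.model_validate(row, from_attributes=True)
assert from_row.id == "cont_xyz"

# Invalid input raises ValidationError.
try:
    Chunk.model_validate({"order": 1})  # missing required 'id'
except ValidationError:
    pass
```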

model_validate_json(json_data, *, strict=None, extra=None, context=None, by_alias=None, by_name=None) classmethod

Usage Documentation: JSON Parsing

Validate the given JSON data against the Pydantic model.

Parameters:

  json_data: str | bytes | bytearray (required)
      The JSON data to validate.
  strict: bool | None = None
      Whether to enforce types strictly.
  extra: ExtraValues | None = None
      Whether to ignore, allow, or forbid extra data during model validation. See the extra configuration value for details.
  context: Any | None = None
      Extra variables to pass to the validator.
  by_alias: bool | None = None
      Whether to use the field's alias when validating against the provided input data.
  by_name: bool | None = None
      Whether to use the field's name when validating against the provided input data.

Returns:

  Self
      The validated Pydantic model.

Raises:

  ValidationError
      If json_data is not a JSON string or the object could not be validated.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@classmethod
def model_validate_json(
    cls,
    json_data: str | bytes | bytearray,
    *,
    strict: bool | None = None,
    extra: ExtraValues | None = None,
    context: Any | None = None,
    by_alias: bool | None = None,
    by_name: bool | None = None,
) -> Self:
    """!!! abstract "Usage Documentation"
        [JSON Parsing](../concepts/json.md#json-parsing)

    Validate the given JSON data against the Pydantic model.

    Args:
        json_data: The JSON data to validate.
        strict: Whether to enforce types strictly.
        extra: Whether to ignore, allow, or forbid extra data during model validation.
            See the [`extra` configuration value][pydantic.ConfigDict.extra] for details.
        context: Extra variables to pass to the validator.
        by_alias: Whether to use the field's alias when validating against the provided input data.
        by_name: Whether to use the field's name when validating against the provided input data.

    Returns:
        The validated Pydantic model.

    Raises:
        ValidationError: If `json_data` is not a JSON string or the object could not be validated.
    """
    # `__tracebackhide__` tells pytest and some other tools to omit this function from tracebacks
    __tracebackhide__ = True

    if by_alias is False and by_name is not True:
        raise PydanticUserError(
            'At least one of `by_alias` or `by_name` must be set to True.',
            code='validate-by-alias-and-name-false',
        )

    return cls.__pydantic_validator__.validate_json(
        json_data, strict=strict, extra=extra, context=context, by_alias=by_alias, by_name=by_name
    )
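
A sketch showing that JSON is parsed and validated in one step, without an intermediate json.loads(); the Chunk model is a hypothetical stand-in:

```python
from pydantic import BaseModel, ValidationError

class Chunk(BaseModel):
    # Hypothetical stand-in model for illustration only.
    id: str
    order: int = 0

# Accepts str, bytes, or bytearray directly.
chunk = Chunk.model_validate_json(b'{"id": "cont_abc", "order": 2}')
assert chunk.order == 2

# Malformed JSON and schema violations both raise ValidationError.
try:
    Chunk.model_validate_json("not json")
except ValidationError:
    pass
```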

model_validate_strings(obj, *, strict=None, extra=None, context=None, by_alias=None, by_name=None) classmethod

Validate the given object with string data against the Pydantic model.

Parameters:

  obj: Any (required)
      The object containing string data to validate.
  strict: bool | None = None
      Whether to enforce types strictly.
  extra: ExtraValues | None = None
      Whether to ignore, allow, or forbid extra data during model validation. See the extra configuration value for details.
  context: Any | None = None
      Extra variables to pass to the validator.
  by_alias: bool | None = None
      Whether to use the field's alias when validating against the provided input data.
  by_name: bool | None = None
      Whether to use the field's name when validating against the provided input data.

Returns:

  Self
      The validated Pydantic model.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@classmethod
def model_validate_strings(
    cls,
    obj: Any,
    *,
    strict: bool | None = None,
    extra: ExtraValues | None = None,
    context: Any | None = None,
    by_alias: bool | None = None,
    by_name: bool | None = None,
) -> Self:
    """Validate the given object with string data against the Pydantic model.

    Args:
        obj: The object containing string data to validate.
        strict: Whether to enforce types strictly.
        extra: Whether to ignore, allow, or forbid extra data during model validation.
            See the [`extra` configuration value][pydantic.ConfigDict.extra] for details.
        context: Extra variables to pass to the validator.
        by_alias: Whether to use the field's alias when validating against the provided input data.
        by_name: Whether to use the field's name when validating against the provided input data.

    Returns:
        The validated Pydantic model.
    """
    # `__tracebackhide__` tells pytest and some other tools to omit this function from tracebacks
    __tracebackhide__ = True

    if by_alias is False and by_name is not True:
        raise PydanticUserError(
            'At least one of `by_alias` or `by_name` must be set to True.',
            code='validate-by-alias-and-name-false',
        )

    return cls.__pydantic_validator__.validate_strings(
        obj, strict=strict, extra=extra, context=context, by_alias=by_alias, by_name=by_name
    )
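
This is useful when every value arrives as a string, such as environment variables or query parameters. A minimal sketch with a hypothetical Settings model:

```python
from pydantic import BaseModel

class Settings(BaseModel):
    # Hypothetical model for illustration only.
    port: int
    debug: bool

# All values are strings; they are parsed into the annotated types.
settings = Settings.model_validate_strings({"port": "8080", "debug": "true"})
assert settings.port == 8080
assert settings.debug is True
```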

unique_toolkit.content.schemas.ContentChunk

Bases: BaseModel

Source code in unique_toolkit/unique_toolkit/content/schemas.py
@register_config()
class ContentChunk(BaseModel):
    model_config = model_config
    id: str = Field(
        default="",
        description="The id of the content this chunk belongs to. The id starts with 'cont_' followed by an alphanumeric string of length 24.",
        examples=["cont_abcdefgehijklmnopqrstuvwx"],
    )
    text: str = Field(default="", description="The text content of the chunk.")
    order: int = Field(
        default=0,
        description="The order of the chunk in the original content. Concatenating the chunks in order will give the original content.",
    )
    key: str | None = Field(
        default=None,
        description="The key of the chunk. For document chunks this is the filename.",
    )
    chunk_id: str | None = Field(
        default=None,
        description="The id of the chunk. The id starts with 'chunk_' followed by an alphanumeric string of length 24.",
        examples=["chunk_abcdefgehijklmnopqrstuv"],
    )
    url: str | None = Field(
        default=None,
        description="For chunk retrieved from the web this is the url of the chunk.",
    )
    title: str | None = Field(
        default=None,
        description="The title of the chunk. For document chunks this is the title of the document.",
    )
    start_page: int | None = Field(
        default=None,
        description="The start page of the chunk. For document chunks this is the start page of the document.",
    )
    end_page: int | None = Field(
        default=None,
        description="The end page of the chunk. For document chunks this is the end page of the document.",
    )

    object: str | None = None
    metadata: ContentMetadata | None = None
    internally_stored_at: datetime | None = None
    created_at: datetime | None = None
    updated_at: datetime | None = None

__class_vars__ class-attribute

The names of the class variables defined on the model.

__private_attributes__ class-attribute

Metadata about the private attributes of the model.

__pydantic_complete__ = False class-attribute

Whether model building is completed, or if there are still undefined fields.

__pydantic_computed_fields__ class-attribute

A dictionary of computed field names and their corresponding [ComputedFieldInfo][pydantic.fields.ComputedFieldInfo] objects.

__pydantic_core_schema__ class-attribute

The core schema of the model.

__pydantic_custom_init__ class-attribute

Whether the model has a custom __init__ method.

__pydantic_decorators__ = _decorators.DecoratorInfos() class-attribute

Metadata containing the decorators defined on the model. This replaces Model.__validators__ and Model.__root_validators__ from Pydantic V1.

__pydantic_extra__ = _model_construction.NoInitField(init=False) class-attribute instance-attribute

A dictionary containing extra values, if [extra][pydantic.config.ConfigDict.extra] is set to 'allow'.

__pydantic_fields__ class-attribute

A dictionary of field names and their corresponding [FieldInfo][pydantic.fields.FieldInfo] objects. This replaces Model.__fields__ from Pydantic V1.

__pydantic_fields_set__ = _model_construction.NoInitField(init=False) class-attribute instance-attribute

The names of fields explicitly set during instantiation.

__pydantic_generic_metadata__ class-attribute

Metadata for generic models; contains data used for a similar purpose to args, origin, parameters in typing-module generics. May eventually be replaced by these.

__pydantic_parent_namespace__ = None class-attribute

Parent namespace of the model, used for automatic rebuilding of models.

__pydantic_post_init__ class-attribute

The name of the post-init method for the model, if defined.

__pydantic_private__ = _model_construction.NoInitField(init=False) class-attribute instance-attribute

Values of private attributes set on the model instance.

__pydantic_root_model__ = False class-attribute

Whether the model is a [RootModel][pydantic.root_model.RootModel].

__pydantic_serializer__ class-attribute

The pydantic-core SchemaSerializer used to dump instances of the model.

__pydantic_setattr_handlers__ class-attribute

__setattr__ handlers. Memoizing the handlers leads to a dramatic performance improvement in __setattr__.

__pydantic_validator__ class-attribute

The pydantic-core SchemaValidator used to validate instances of the model.

__signature__ class-attribute

The synthesized __init__ [Signature][inspect.Signature] of the model.

model_extra property

Get extra fields set during validation.

Returns:

Type Description
dict[str, Any] | None

A dictionary of extra fields, or None if config.extra is not set to "allow".

model_fields_set property

Returns the set of fields that have been explicitly set on this model instance.

Returns:

Type Description
set[str]

A set of strings representing the fields that have been set, i.e. that were not filled from defaults.
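As an illustration of the two properties above, a minimal sketch (the `User` model is hypothetical):

```python
from pydantic import BaseModel

class User(BaseModel):
    name: str
    age: int = 0

u = User(name="Ada")
# Only fields passed explicitly appear in model_fields_set;
# `age` was filled from its default.
print(u.model_fields_set)  # {'name'}
# model_extra is None because config.extra is not set to "allow".
print(u.model_extra)  # None
```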

__copy__()

Returns a shallow copy of the model.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
def __copy__(self) -> Self:
    """Returns a shallow copy of the model."""
    cls = type(self)
    m = cls.__new__(cls)
    _object_setattr(m, '__dict__', copy(self.__dict__))
    _object_setattr(m, '__pydantic_extra__', copy(self.__pydantic_extra__))
    _object_setattr(m, '__pydantic_fields_set__', copy(self.__pydantic_fields_set__))

    if not hasattr(self, '__pydantic_private__') or self.__pydantic_private__ is None:
        _object_setattr(m, '__pydantic_private__', None)
    else:
        _object_setattr(
            m,
            '__pydantic_private__',
            {k: v for k, v in self.__pydantic_private__.items() if v is not PydanticUndefined},
        )

    return m

__deepcopy__(memo=None)

Returns a deep copy of the model.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
def __deepcopy__(self, memo: dict[int, Any] | None = None) -> Self:
    """Returns a deep copy of the model."""
    cls = type(self)
    m = cls.__new__(cls)
    _object_setattr(m, '__dict__', deepcopy(self.__dict__, memo=memo))
    _object_setattr(m, '__pydantic_extra__', deepcopy(self.__pydantic_extra__, memo=memo))
    # This next line doesn't need a deepcopy because __pydantic_fields_set__ is a set[str],
    # and attempting a deepcopy would be marginally slower.
    _object_setattr(m, '__pydantic_fields_set__', copy(self.__pydantic_fields_set__))

    if not hasattr(self, '__pydantic_private__') or self.__pydantic_private__ is None:
        _object_setattr(m, '__pydantic_private__', None)
    else:
        _object_setattr(
            m,
            '__pydantic_private__',
            deepcopy({k: v for k, v in self.__pydantic_private__.items() if v is not PydanticUndefined}, memo=memo),
        )

    return m

__get_pydantic_json_schema__(core_schema, handler) classmethod

Hook into generating the model's JSON schema.

Parameters:

Name Type Description Default
core_schema CoreSchema

A pydantic-core CoreSchema. You can ignore this argument and call the handler with a new CoreSchema, wrap this CoreSchema ({'type': 'nullable', 'schema': current_schema}), or just call the handler with the original schema.

required
handler GetJsonSchemaHandler

Call into Pydantic's internal JSON schema generation. This will raise a pydantic.errors.PydanticInvalidForJsonSchema if JSON schema generation fails. Since this gets called by BaseModel.model_json_schema you can override the schema_generator argument to that function to change JSON schema generation globally for a type.

required

Returns:

Type Description
JsonSchemaValue

A JSON schema, as a Python object.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@classmethod
def __get_pydantic_json_schema__(
    cls,
    core_schema: CoreSchema,
    handler: GetJsonSchemaHandler,
    /,
) -> JsonSchemaValue:
    """Hook into generating the model's JSON schema.

    Args:
        core_schema: A `pydantic-core` CoreSchema.
            You can ignore this argument and call the handler with a new CoreSchema,
            wrap this CoreSchema (`{'type': 'nullable', 'schema': current_schema}`),
            or just call the handler with the original schema.
        handler: Call into Pydantic's internal JSON schema generation.
            This will raise a `pydantic.errors.PydanticInvalidForJsonSchema` if JSON schema
            generation fails.
            Since this gets called by `BaseModel.model_json_schema` you can override the
            `schema_generator` argument to that function to change JSON schema generation globally
            for a type.

    Returns:
        A JSON schema, as a Python object.
    """
    return handler(core_schema)
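A sketch of overriding this hook (the `Point` model is hypothetical): call the handler to produce the generated schema, then post-process the result before returning it.

```python
from pydantic import BaseModel

class Point(BaseModel):
    x: int
    y: int

    @classmethod
    def __get_pydantic_json_schema__(cls, core_schema, handler):
        # Delegate to Pydantic's internal generator, then tweak the result.
        json_schema = handler(core_schema)
        json_schema["description"] = "A 2D point"
        return json_schema

schema = Point.model_json_schema()
print(schema["description"])  # A 2D point
```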

__init__(**data)

Create a new model by parsing and validating input data from keyword arguments.

Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
def __init__(self, /, **data: Any) -> None:
    """Create a new model by parsing and validating input data from keyword arguments.

    Raises [`ValidationError`][pydantic_core.ValidationError] if the input data cannot be
    validated to form a valid model.

    `self` is explicitly positional-only to allow `self` as a field name.
    """
    # `__tracebackhide__` tells pytest and some other tools to omit this function from tracebacks
    __tracebackhide__ = True
    validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
    if self is not validated_self:
        warnings.warn(
            'A custom validator is returning a value other than `self`.\n'
            "Returning anything other than `self` from a top level model validator isn't supported when validating via `__init__`.\n"
            'See the `model_validator` docs (https://docs.pydantic.dev/latest/concepts/validators/#model-validators) for more details.',
            stacklevel=2,
        )
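To show the parse-and-validate behavior described above, a minimal sketch (the `Item` model is hypothetical):

```python
from pydantic import BaseModel, ValidationError

class Item(BaseModel):
    name: str
    qty: int

# Input data is parsed and validated; the string "3" is coerced to int.
item = Item(name="bolt", qty="3")
print(item.qty)  # 3

# Data that cannot be validated raises ValidationError.
err_locs = []
try:
    Item(name="bolt", qty="three")
except ValidationError as exc:
    err_locs = [e["loc"] for e in exc.errors()]
print(err_locs)  # [('qty',)]
```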

__init_subclass__(**kwargs)

This signature is included purely to help type-checkers check arguments to class declaration, which provides a way to conveniently set model_config key/value pairs.

from pydantic import BaseModel

class MyModel(BaseModel, extra='allow'): ...

However, this may be deceiving, since the actual calls to __init_subclass__ will not receive any of the config arguments, and will only receive any keyword arguments passed during class initialization that are not expected keys in ConfigDict. (This is due to the way ModelMetaclass.__new__ works.)

Parameters:

Name Type Description Default
**kwargs Unpack[ConfigDict]

Keyword arguments passed to the class definition, which set model_config

{}
Note

You may want to override __pydantic_init_subclass__ instead, which behaves similarly but is called after the class is fully initialized.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
def __init_subclass__(cls, **kwargs: Unpack[ConfigDict]):
    """This signature is included purely to help type-checkers check arguments to class declaration, which
    provides a way to conveniently set model_config key/value pairs.

    ```python
    from pydantic import BaseModel

    class MyModel(BaseModel, extra='allow'): ...
    ```

    However, this may be deceiving, since the _actual_ calls to `__init_subclass__` will not receive any
    of the config arguments, and will only receive any keyword arguments passed during class initialization
    that are _not_ expected keys in ConfigDict. (This is due to the way `ModelMetaclass.__new__` works.)

    Args:
        **kwargs: Keyword arguments passed to the class definition, which set model_config

    Note:
        You may want to override `__pydantic_init_subclass__` instead, which behaves similarly but is called
        *after* the class is fully initialized.
    """

__iter__()

So dict(model) works.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
def __iter__(self) -> TupleGenerator:
    """So `dict(model)` works."""
    yield from [(k, v) for (k, v) in self.__dict__.items() if not k.startswith('_')]
    extra = self.__pydantic_extra__
    if extra:
        yield from extra.items()
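Iteration yields field items followed by any extra items, which is what makes `dict(model)` work. A minimal sketch (the `Cfg` model is hypothetical):

```python
from pydantic import BaseModel, ConfigDict

class Cfg(BaseModel):
    model_config = ConfigDict(extra="allow")
    host: str
    port: int = 8080

c = Cfg(host="localhost", debug=True)
# Fields come from __dict__, the extra value from __pydantic_extra__.
print(dict(c))  # {'host': 'localhost', 'port': 8080, 'debug': True}
```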

__pydantic_init_subclass__(**kwargs) classmethod

This is intended to behave just like __init_subclass__, but is called by ModelMetaclass only after basic class initialization is complete. In particular, attributes like model_fields will be present when this is called, but forward annotations are not guaranteed to be resolved yet, meaning that creating an instance of the class may fail.

This is necessary because __init_subclass__ will always be called by type.__new__, and it would require a prohibitively large refactor to the ModelMetaclass to ensure that type.__new__ was called in such a manner that the class would already be sufficiently initialized.

This will receive the same kwargs that would be passed to the standard __init_subclass__, namely, any kwargs passed to the class definition that aren't used internally by Pydantic.

Parameters:

Name Type Description Default
**kwargs Any

Any keyword arguments passed to the class definition that aren't used internally by Pydantic.

{}
Note

You may want to override __pydantic_on_complete__() instead, which is called once the class and its fields are fully initialized and ready for validation.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@classmethod
def __pydantic_init_subclass__(cls, **kwargs: Any) -> None:
    """This is intended to behave just like `__init_subclass__`, but is called by `ModelMetaclass`
    only after basic class initialization is complete. In particular, attributes like `model_fields` will
    be present when this is called, but forward annotations are not guaranteed to be resolved yet,
    meaning that creating an instance of the class may fail.

    This is necessary because `__init_subclass__` will always be called by `type.__new__`,
    and it would require a prohibitively large refactor to the `ModelMetaclass` to ensure that
    `type.__new__` was called in such a manner that the class would already be sufficiently initialized.

    This will receive the same `kwargs` that would be passed to the standard `__init_subclass__`, namely,
    any kwargs passed to the class definition that aren't used internally by Pydantic.

    Args:
        **kwargs: Any keyword arguments passed to the class definition that aren't used internally
            by Pydantic.

    Note:
        You may want to override [`__pydantic_on_complete__()`][pydantic.main.BaseModel.__pydantic_on_complete__]
        instead, which is called once the class and its fields are fully initialized and ready for validation.
    """

__pydantic_on_complete__() classmethod

This is called once the class and its fields are fully initialized and ready to be used.

This typically happens when the class is created (just before __pydantic_init_subclass__() is called on the superclass), except when forward annotations are used that could not immediately be resolved. In that case, it will be called later, when the model is rebuilt automatically or explicitly using model_rebuild().

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@classmethod
def __pydantic_on_complete__(cls) -> None:
    """This is called once the class and its fields are fully initialized and ready to be used.

    This typically happens when the class is created (just before
    [`__pydantic_init_subclass__()`][pydantic.main.BaseModel.__pydantic_init_subclass__] is called on the superclass),
    except when forward annotations are used that could not immediately be resolved.
    In that case, it will be called later, when the model is rebuilt automatically or explicitly using
    [`model_rebuild()`][pydantic.main.BaseModel.model_rebuild].
    """

copy(*, include=None, exclude=None, update=None, deep=False)

Returns a copy of the model.

Deprecated

This method is now deprecated; use model_copy instead.

If you need include or exclude, use:

data = self.model_dump(include=include, exclude=exclude, round_trip=True)
data = {**data, **(update or {})}
copied = self.model_validate(data)

Parameters:

Name Type Description Default
include AbstractSetIntStr | MappingIntStrAny | None

Optional set or mapping specifying which fields to include in the copied model.

None
exclude AbstractSetIntStr | MappingIntStrAny | None

Optional set or mapping specifying which fields to exclude in the copied model.

None
update Dict[str, Any] | None

Optional dictionary of field-value pairs to override field values in the copied model.

None
deep bool

If True, the values of fields that are Pydantic models will be deep-copied.

False

Returns:

Type Description
Self

A copy of the model with included, excluded and updated fields as specified.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@typing_extensions.deprecated(
    'The `copy` method is deprecated; use `model_copy` instead. '
    'See the docstring of `BaseModel.copy` for details about how to handle `include` and `exclude`.',
    category=None,
)
def copy(
    self,
    *,
    include: AbstractSetIntStr | MappingIntStrAny | None = None,
    exclude: AbstractSetIntStr | MappingIntStrAny | None = None,
    update: Dict[str, Any] | None = None,  # noqa UP006
    deep: bool = False,
) -> Self:  # pragma: no cover
    """Returns a copy of the model.

    !!! warning "Deprecated"
        This method is now deprecated; use `model_copy` instead.

    If you need `include` or `exclude`, use:

    ```python {test="skip" lint="skip"}
    data = self.model_dump(include=include, exclude=exclude, round_trip=True)
    data = {**data, **(update or {})}
    copied = self.model_validate(data)
    ```

    Args:
        include: Optional set or mapping specifying which fields to include in the copied model.
        exclude: Optional set or mapping specifying which fields to exclude in the copied model.
        update: Optional dictionary of field-value pairs to override field values in the copied model.
        deep: If True, the values of fields that are Pydantic models will be deep-copied.

    Returns:
        A copy of the model with included, excluded and updated fields as specified.
    """
    warnings.warn(
        'The `copy` method is deprecated; use `model_copy` instead. '
        'See the docstring of `BaseModel.copy` for details about how to handle `include` and `exclude`.',
        category=PydanticDeprecatedSince20,
        stacklevel=2,
    )
    from .deprecated import copy_internals

    values = dict(
        copy_internals._iter(
            self, to_dict=False, by_alias=False, include=include, exclude=exclude, exclude_unset=False
        ),
        **(update or {}),
    )
    if self.__pydantic_private__ is None:
        private = None
    else:
        private = {k: v for k, v in self.__pydantic_private__.items() if v is not PydanticUndefined}

    if self.__pydantic_extra__ is None:
        extra: dict[str, Any] | None = None
    else:
        extra = self.__pydantic_extra__.copy()
        for k in list(self.__pydantic_extra__):
            if k not in values:  # k was in the exclude
                extra.pop(k)
        for k in list(values):
            if k in self.__pydantic_extra__:  # k must have come from extra
                extra[k] = values.pop(k)

    # new `__pydantic_fields_set__` can have unset optional fields with a set value in `update` kwarg
    if update:
        fields_set = self.__pydantic_fields_set__ | update.keys()
    else:
        fields_set = set(self.__pydantic_fields_set__)

    # removing excluded fields from `__pydantic_fields_set__`
    if exclude:
        fields_set -= set(exclude)

    return copy_internals._copy_and_set_values(self, values, fields_set, extra, private, deep=deep)

model_computed_fields() classmethod

A mapping of computed field names to their respective [ComputedFieldInfo][pydantic.fields.ComputedFieldInfo] instances.

Warning

Accessing this attribute from a model instance is deprecated, and will not work in Pydantic V3. Instead, you should access this attribute from the model class.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@_utils.deprecated_instance_property
@classmethod
def model_computed_fields(cls) -> dict[str, ComputedFieldInfo]:
    """A mapping of computed field names to their respective [`ComputedFieldInfo`][pydantic.fields.ComputedFieldInfo] instances.

    !!! warning
        Accessing this attribute from a model instance is deprecated, and will not work in Pydantic V3.
        Instead, you should access this attribute from the model class.
    """
    return getattr(cls, '__pydantic_computed_fields__', {})

model_construct(_fields_set=None, **values) classmethod

Creates a new instance of the Model class with validated data.

Creates a new model setting __dict__ and __pydantic_fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed.

Note

model_construct() generally respects the model_config.extra setting on the provided model. That is, if model_config.extra == 'allow', then all extra passed values are added to the model instance's __dict__ and __pydantic_extra__ fields. If model_config.extra == 'ignore' (the default), then all extra passed values are ignored. Because no validation is performed with a call to model_construct(), having model_config.extra == 'forbid' does not result in an error if extra values are passed, but they will be ignored.

Parameters:

Name Type Description Default
_fields_set set[str] | None

A set of field names that were originally explicitly set during instantiation. If provided, this is directly used for the [model_fields_set][pydantic.BaseModel.model_fields_set] attribute. Otherwise, the field names from the values argument will be used.

None
values Any

Trusted or pre-validated data dictionary.

{}

Returns:

Type Description
Self

A new instance of the Model class with validated data.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@classmethod
def model_construct(cls, _fields_set: set[str] | None = None, **values: Any) -> Self:  # noqa: C901
    """Creates a new instance of the `Model` class with validated data.

    Creates a new model setting `__dict__` and `__pydantic_fields_set__` from trusted or pre-validated data.
    Default values are respected, but no other validation is performed.

    !!! note
        `model_construct()` generally respects the `model_config.extra` setting on the provided model.
        That is, if `model_config.extra == 'allow'`, then all extra passed values are added to the model instance's `__dict__`
        and `__pydantic_extra__` fields. If `model_config.extra == 'ignore'` (the default), then all extra passed values are ignored.
        Because no validation is performed with a call to `model_construct()`, having `model_config.extra == 'forbid'` does not result in
        an error if extra values are passed, but they will be ignored.

    Args:
        _fields_set: A set of field names that were originally explicitly set during instantiation. If provided,
            this is directly used for the [`model_fields_set`][pydantic.BaseModel.model_fields_set] attribute.
            Otherwise, the field names from the `values` argument will be used.
        values: Trusted or pre-validated data dictionary.

    Returns:
        A new instance of the `Model` class with validated data.
    """
    m = cls.__new__(cls)
    fields_values: dict[str, Any] = {}
    fields_set = set()

    for name, field in cls.__pydantic_fields__.items():
        if field.alias is not None and field.alias in values:
            fields_values[name] = values.pop(field.alias)
            fields_set.add(name)

        if (name not in fields_set) and (field.validation_alias is not None):
            validation_aliases: list[str | AliasPath] = (
                field.validation_alias.choices
                if isinstance(field.validation_alias, AliasChoices)
                else [field.validation_alias]
            )

            for alias in validation_aliases:
                if isinstance(alias, str) and alias in values:
                    fields_values[name] = values.pop(alias)
                    fields_set.add(name)
                    break
                elif isinstance(alias, AliasPath):
                    value = alias.search_dict_for_path(values)
                    if value is not PydanticUndefined:
                        fields_values[name] = value
                        fields_set.add(name)
                        break

        if name not in fields_set:
            if name in values:
                fields_values[name] = values.pop(name)
                fields_set.add(name)
            elif not field.is_required():
                fields_values[name] = field.get_default(call_default_factory=True, validated_data=fields_values)
    if _fields_set is None:
        _fields_set = fields_set

    _extra: dict[str, Any] | None = values if cls.model_config.get('extra') == 'allow' else None
    _object_setattr(m, '__dict__', fields_values)
    _object_setattr(m, '__pydantic_fields_set__', _fields_set)
    if not cls.__pydantic_root_model__:
        _object_setattr(m, '__pydantic_extra__', _extra)

    if cls.__pydantic_post_init__:
        m.model_post_init(None)
        # update private attributes with values set
        if hasattr(m, '__pydantic_private__') and m.__pydantic_private__ is not None:
            for k, v in values.items():
                if k in m.__private_attributes__:
                    m.__pydantic_private__[k] = v

    elif not cls.__pydantic_root_model__:
        # Note: if there are any private attributes, cls.__pydantic_post_init__ would exist
        # Since it doesn't, that means that `__pydantic_private__` should be set to None
        _object_setattr(m, '__pydantic_private__', None)

    return m
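To make the "no validation" behavior concrete, a minimal sketch (the `Row` model is hypothetical):

```python
from pydantic import BaseModel

class Row(BaseModel):
    id: int
    label: str = "unknown"

# No validation is performed: the string is stored as-is,
# and the default for `label` is still applied.
r = Row.model_construct(id="not-an-int")
print(r.id, r.label)  # not-an-int unknown
print(r.model_fields_set)  # {'id'}
```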

model_copy(*, update=None, deep=False)

Usage Documentation

model_copy

Returns a copy of the model.

Note

The underlying instance's [__dict__][object.__dict__] attribute is copied. This might have unexpected side effects if you store anything in it, on top of the model fields (e.g. the value of [cached properties][functools.cached_property]).

Parameters:

Name Type Description Default
update Mapping[str, Any] | None

Values to change/add in the new model. Note: the data is not validated before creating the new model. You should trust this data.

None
deep bool

Set to True to make a deep copy of the model.

False

Returns:

Type Description
Self

New model instance.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
def model_copy(self, *, update: Mapping[str, Any] | None = None, deep: bool = False) -> Self:
    """!!! abstract "Usage Documentation"
        [`model_copy`](../concepts/models.md#model-copy)

    Returns a copy of the model.

    !!! note
        The underlying instance's [`__dict__`][object.__dict__] attribute is copied. This
        might have unexpected side effects if you store anything in it, on top of the model
        fields (e.g. the value of [cached properties][functools.cached_property]).

    Args:
        update: Values to change/add in the new model. Note: the data is not validated
            before creating the new model. You should trust this data.
        deep: Set to `True` to make a deep copy of the model.

    Returns:
        New model instance.
    """
    copied = self.__deepcopy__() if deep else self.__copy__()
    if update:
        if self.model_config.get('extra') == 'allow':
            for k, v in update.items():
                if k in self.__pydantic_fields__:
                    copied.__dict__[k] = v
                else:
                    if copied.__pydantic_extra__ is None:
                        copied.__pydantic_extra__ = {}
                    copied.__pydantic_extra__[k] = v
        else:
            copied.__dict__.update(update)
        copied.__pydantic_fields_set__.update(update.keys())
    return copied
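A minimal sketch of copying with `update` (the `Settings` model is hypothetical); note that, as stated above, the updated values are applied without validation:

```python
from pydantic import BaseModel

class Settings(BaseModel):
    host: str
    retries: int = 3

s1 = Settings(host="db.local")
# The original is untouched; the copy records `retries` as explicitly set.
s2 = s1.model_copy(update={"retries": 5})
print(s1.retries, s2.retries)  # 3 5
print(sorted(s2.model_fields_set))  # ['host', 'retries']
```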

model_dump(*, mode='python', include=None, exclude=None, context=None, by_alias=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, exclude_computed_fields=False, round_trip=False, warnings=True, fallback=None, serialize_as_any=False)

Usage Documentation

model_dump

Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

Parameters:

Name Type Description Default
mode Literal['json', 'python'] | str

The mode in which to_python should run. If mode is 'json', the output will only contain JSON serializable types. If mode is 'python', the output may contain non-JSON-serializable Python objects.

'python'
include IncEx | None

A set of fields to include in the output.

None
exclude IncEx | None

A set of fields to exclude from the output.

None
context Any | None

Additional context to pass to the serializer.

None
by_alias bool | None

Whether to use the field's alias in the dictionary key if defined.

None
exclude_unset bool

Whether to exclude fields that have not been explicitly set.

False
exclude_defaults bool

Whether to exclude fields that are set to their default value.

False
exclude_none bool

Whether to exclude fields that have a value of None.

False
exclude_computed_fields bool

Whether to exclude computed fields. While this can be useful for round-tripping, it is usually recommended to use the dedicated round_trip parameter instead.

False
round_trip bool

If True, dumped values should be valid as input for non-idempotent types such as Json[T].

False
warnings bool | Literal['none', 'warn', 'error']

How to handle serialization errors. False/"none" ignores them, True/"warn" logs errors, "error" raises a [PydanticSerializationError][pydantic_core.PydanticSerializationError].

True
fallback Callable[[Any], Any] | None

A function to call when an unknown value is encountered. If not provided, a [PydanticSerializationError][pydantic_core.PydanticSerializationError] error is raised.

None
serialize_as_any bool

Whether to serialize fields with duck-typing serialization behavior.

False

Returns:

Type Description
dict[str, Any]

A dictionary representation of the model.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
def model_dump(
    self,
    *,
    mode: Literal['json', 'python'] | str = 'python',
    include: IncEx | None = None,
    exclude: IncEx | None = None,
    context: Any | None = None,
    by_alias: bool | None = None,
    exclude_unset: bool = False,
    exclude_defaults: bool = False,
    exclude_none: bool = False,
    exclude_computed_fields: bool = False,
    round_trip: bool = False,
    warnings: bool | Literal['none', 'warn', 'error'] = True,
    fallback: Callable[[Any], Any] | None = None,
    serialize_as_any: bool = False,
) -> dict[str, Any]:
    """!!! abstract "Usage Documentation"
        [`model_dump`](../concepts/serialization.md#python-mode)

    Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

    Args:
        mode: The mode in which `to_python` should run.
            If mode is 'json', the output will only contain JSON serializable types.
            If mode is 'python', the output may contain non-JSON-serializable Python objects.
        include: A set of fields to include in the output.
        exclude: A set of fields to exclude from the output.
        context: Additional context to pass to the serializer.
        by_alias: Whether to use the field's alias in the dictionary key if defined.
        exclude_unset: Whether to exclude fields that have not been explicitly set.
        exclude_defaults: Whether to exclude fields that are set to their default value.
        exclude_none: Whether to exclude fields that have a value of `None`.
        exclude_computed_fields: Whether to exclude computed fields.
            While this can be useful for round-tripping, it is usually recommended to use the dedicated
            `round_trip` parameter instead.
        round_trip: If True, dumped values should be valid as input for non-idempotent types such as Json[T].
        warnings: How to handle serialization errors. False/"none" ignores them, True/"warn" logs errors,
            "error" raises a [`PydanticSerializationError`][pydantic_core.PydanticSerializationError].
        fallback: A function to call when an unknown value is encountered. If not provided,
            a [`PydanticSerializationError`][pydantic_core.PydanticSerializationError] error is raised.
        serialize_as_any: Whether to serialize fields with duck-typing serialization behavior.

    Returns:
        A dictionary representation of the model.
    """
    return self.__pydantic_serializer__.to_python(
        self,
        mode=mode,
        by_alias=by_alias,
        include=include,
        exclude=exclude,
        context=context,
        exclude_unset=exclude_unset,
        exclude_defaults=exclude_defaults,
        exclude_none=exclude_none,
        exclude_computed_fields=exclude_computed_fields,
        round_trip=round_trip,
        warnings=warnings,
        fallback=fallback,
        serialize_as_any=serialize_as_any,
    )
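The difference between the two `mode` values can be sketched as follows (the `Event` model is hypothetical):

```python
from datetime import date
from pydantic import BaseModel

class Event(BaseModel):
    name: str
    when: date

e = Event(name="launch", when=date(2024, 1, 2))
# 'python' mode may keep non-JSON-serializable objects such as date.
py = e.model_dump()
print(py["when"])  # 2024-01-02 (a datetime.date)
# 'json' mode restricts the output to JSON-serializable types.
js = e.model_dump(mode="json")
print(js["when"])  # 2024-01-02 (a str)
```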

model_dump_json(*, indent=None, ensure_ascii=False, include=None, exclude=None, context=None, by_alias=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, exclude_computed_fields=False, round_trip=False, warnings=True, fallback=None, serialize_as_any=False)

Usage Documentation

model_dump_json

Generates a JSON representation of the model using Pydantic's to_json method.

Parameters:

• indent (int | None, default None): Indentation to use in the JSON output. If None is passed, the output will be compact.
• ensure_ascii (bool, default False): If True, the output is guaranteed to have all incoming non-ASCII characters escaped. If False (the default), these characters will be output as-is.
• include (IncEx | None, default None): Field(s) to include in the JSON output.
• exclude (IncEx | None, default None): Field(s) to exclude from the JSON output.
• context (Any | None, default None): Additional context to pass to the serializer.
• by_alias (bool | None, default None): Whether to serialize using field aliases.
• exclude_unset (bool, default False): Whether to exclude fields that have not been explicitly set.
• exclude_defaults (bool, default False): Whether to exclude fields that are set to their default value.
• exclude_none (bool, default False): Whether to exclude fields that have a value of None.
• exclude_computed_fields (bool, default False): Whether to exclude computed fields. While this can be useful for round-tripping, it is usually recommended to use the dedicated round_trip parameter instead.
• round_trip (bool, default False): If True, dumped values should be valid as input for non-idempotent types such as Json[T].
• warnings (bool | Literal['none', 'warn', 'error'], default True): How to handle serialization errors. False/"none" ignores them, True/"warn" logs errors, "error" raises a [PydanticSerializationError][pydantic_core.PydanticSerializationError].
• fallback (Callable[[Any], Any] | None, default None): A function to call when an unknown value is encountered. If not provided, a [PydanticSerializationError][pydantic_core.PydanticSerializationError] error is raised.
• serialize_as_any (bool, default False): Whether to serialize fields with duck-typing serialization behavior.

Returns:

• str: A JSON string representation of the model.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
def model_dump_json(
    self,
    *,
    indent: int | None = None,
    ensure_ascii: bool = False,
    include: IncEx | None = None,
    exclude: IncEx | None = None,
    context: Any | None = None,
    by_alias: bool | None = None,
    exclude_unset: bool = False,
    exclude_defaults: bool = False,
    exclude_none: bool = False,
    exclude_computed_fields: bool = False,
    round_trip: bool = False,
    warnings: bool | Literal['none', 'warn', 'error'] = True,
    fallback: Callable[[Any], Any] | None = None,
    serialize_as_any: bool = False,
) -> str:
    """!!! abstract "Usage Documentation"
        [`model_dump_json`](../concepts/serialization.md#json-mode)

    Generates a JSON representation of the model using Pydantic's `to_json` method.

    Args:
        indent: Indentation to use in the JSON output. If None is passed, the output will be compact.
        ensure_ascii: If `True`, the output is guaranteed to have all incoming non-ASCII characters escaped.
            If `False` (the default), these characters will be output as-is.
        include: Field(s) to include in the JSON output.
        exclude: Field(s) to exclude from the JSON output.
        context: Additional context to pass to the serializer.
        by_alias: Whether to serialize using field aliases.
        exclude_unset: Whether to exclude fields that have not been explicitly set.
        exclude_defaults: Whether to exclude fields that are set to their default value.
        exclude_none: Whether to exclude fields that have a value of `None`.
        exclude_computed_fields: Whether to exclude computed fields.
            While this can be useful for round-tripping, it is usually recommended to use the dedicated
            `round_trip` parameter instead.
        round_trip: If True, dumped values should be valid as input for non-idempotent types such as Json[T].
        warnings: How to handle serialization errors. False/"none" ignores them, True/"warn" logs errors,
            "error" raises a [`PydanticSerializationError`][pydantic_core.PydanticSerializationError].
        fallback: A function to call when an unknown value is encountered. If not provided,
            a [`PydanticSerializationError`][pydantic_core.PydanticSerializationError] error is raised.
        serialize_as_any: Whether to serialize fields with duck-typing serialization behavior.

    Returns:
        A JSON string representation of the model.
    """
    return self.__pydantic_serializer__.to_json(
        self,
        indent=indent,
        ensure_ascii=ensure_ascii,
        include=include,
        exclude=exclude,
        context=context,
        by_alias=by_alias,
        exclude_unset=exclude_unset,
        exclude_defaults=exclude_defaults,
        exclude_none=exclude_none,
        exclude_computed_fields=exclude_computed_fields,
        round_trip=round_trip,
        warnings=warnings,
        fallback=fallback,
        serialize_as_any=serialize_as_any,
    ).decode()
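
A minimal sketch of the most common parameters (the model and field names here are hypothetical):

```python
import json

from pydantic import BaseModel

class Marker(BaseModel):  # hypothetical model, for illustration only
    label: str
    weight: float = 1.0

m = Marker(label="a")

compact = m.model_dump_json()         # no indent: single-line JSON
pretty = m.model_dump_json(indent=2)  # human-readable, 2-space indent

# Both are plain JSON strings, so json.loads round-trips either form.
parsed = json.loads(pretty)
```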

model_fields() classmethod

A mapping of field names to their respective [FieldInfo][pydantic.fields.FieldInfo] instances.

Warning

Accessing this attribute from a model instance is deprecated, and will not work in Pydantic V3. Instead, you should access this attribute from the model class.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@_utils.deprecated_instance_property
@classmethod
def model_fields(cls) -> dict[str, FieldInfo]:
    """A mapping of field names to their respective [`FieldInfo`][pydantic.fields.FieldInfo] instances.

    !!! warning
        Accessing this attribute from a model instance is deprecated, and will not work in Pydantic V3.
        Instead, you should access this attribute from the model class.
    """
    return getattr(cls, '__pydantic_fields__', {})
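
Per the warning above, access the mapping on the class rather than an instance. A small sketch (model and field names are hypothetical):

```python
from pydantic import BaseModel

class Item(BaseModel):  # hypothetical model, for illustration only
    name: str
    qty: int = 1

# Access model_fields on the class, not on an instance (instance access
# is deprecated and will not work in Pydantic V3).
fields = Item.model_fields
required = [n for n, f in fields.items() if f.is_required()]
```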

model_json_schema(by_alias=True, ref_template=DEFAULT_REF_TEMPLATE, schema_generator=GenerateJsonSchema, mode='validation', *, union_format='any_of') classmethod

Generates a JSON schema for a model class.

Parameters:

• by_alias (bool, default True): Whether to use attribute aliases or not.
• ref_template (str, default DEFAULT_REF_TEMPLATE): The reference template.
• union_format (Literal['any_of', 'primitive_type_array'], default 'any_of'): The format to use when combining schemas from unions together. Can be one of:
  • 'any_of': Use the anyOf keyword to combine schemas (the default).
  • 'primitive_type_array': Use the type keyword as an array of strings, containing each type of the combination. If any of the schemas is not a primitive type (string, boolean, null, integer or number) or contains constraints/metadata, falls back to any_of.
• schema_generator (type[GenerateJsonSchema], default GenerateJsonSchema): To override the logic used to generate the JSON schema, as a subclass of GenerateJsonSchema with your desired modifications.
• mode (JsonSchemaMode, default 'validation'): The mode in which to generate the schema.

Returns:

• dict[str, Any]: The JSON schema for the given model class.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@classmethod
def model_json_schema(
    cls,
    by_alias: bool = True,
    ref_template: str = DEFAULT_REF_TEMPLATE,
    schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema,
    mode: JsonSchemaMode = 'validation',
    *,
    union_format: Literal['any_of', 'primitive_type_array'] = 'any_of',
) -> dict[str, Any]:
    """Generates a JSON schema for a model class.

    Args:
        by_alias: Whether to use attribute aliases or not.
        ref_template: The reference template.
        union_format: The format to use when combining schemas from unions together. Can be one of:

            - `'any_of'`: Use the [`anyOf`](https://json-schema.org/understanding-json-schema/reference/combining#anyOf)
            keyword to combine schemas (the default).
            - `'primitive_type_array'`: Use the [`type`](https://json-schema.org/understanding-json-schema/reference/type)
            keyword as an array of strings, containing each type of the combination. If any of the schemas is not a primitive
            type (`string`, `boolean`, `null`, `integer` or `number`) or contains constraints/metadata, falls back to
            `any_of`.
        schema_generator: To override the logic used to generate the JSON schema, as a subclass of
            `GenerateJsonSchema` with your desired modifications
        mode: The mode in which to generate the schema.

    Returns:
        The JSON schema for the given model class.
    """
    return model_json_schema(
        cls,
        by_alias=by_alias,
        ref_template=ref_template,
        union_format=union_format,
        schema_generator=schema_generator,
        mode=mode,
    )
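
A minimal sketch with the defaults (the model is hypothetical; union_format is omitted since it is only available in recent Pydantic releases):

```python
from pydantic import BaseModel

class Doc(BaseModel):  # hypothetical model, for illustration only
    title: str
    pages: int = 0

schema = Doc.model_json_schema()

# The result follows JSON Schema conventions: an object schema with
# per-field property schemas and a "required" list for fields
# without defaults.
props = schema["properties"]
```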

model_parametrized_name(params) classmethod

Compute the class name for parametrizations of generic classes.

This method can be overridden to achieve a custom naming scheme for generic BaseModels.

Parameters:

• params (tuple[type[Any], ...], required): Tuple of types of the class. Given a generic class Model with 2 type variables and a concrete model Model[str, int], the value (str, int) would be passed to params.

Returns:

• str: String representing the new class where params are passed to cls as type variables.

Raises:

• TypeError: Raised when trying to generate concrete names for non-generic models.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@classmethod
def model_parametrized_name(cls, params: tuple[type[Any], ...]) -> str:
    """Compute the class name for parametrizations of generic classes.

    This method can be overridden to achieve a custom naming scheme for generic BaseModels.

    Args:
        params: Tuple of types of the class. Given a generic class
            `Model` with 2 type variables and a concrete model `Model[str, int]`,
            the value `(str, int)` would be passed to `params`.

    Returns:
        String representing the new class where `params` are passed to `cls` as type variables.

    Raises:
        TypeError: Raised when trying to generate concrete names for non-generic models.
    """
    if not issubclass(cls, Generic):
        raise TypeError('Concrete names should only be generated for generic models.')

    # Any strings received should represent forward references, so we handle them specially below.
    # If we eventually move toward wrapping them in a ForwardRef in __class_getitem__ in the future,
    # we may be able to remove this special case.
    param_names = [param if isinstance(param, str) else _repr.display_as_type(param) for param in params]
    params_component = ', '.join(param_names)
    return f'{cls.__name__}[{params_component}]'
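
To see the default naming scheme in action (the generic model here is hypothetical):

```python
from typing import Generic, TypeVar

from pydantic import BaseModel

T = TypeVar("T")

class Response(BaseModel, Generic[T]):  # hypothetical generic model
    data: T

# Parametrizing the generic model invokes model_parametrized_name,
# which by default yields "ClassName[param, ...]".
name = Response[int].__name__
```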

model_post_init(context)

Override this method to perform additional initialization after __init__ and model_construct. This is useful if you want to do some validation that requires the entire model to be initialized.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
def model_post_init(self, context: Any, /) -> None:
    """Override this method to perform additional initialization after `__init__` and `model_construct`.
    This is useful if you want to do some validation that requires the entire model to be initialized.
    """

model_rebuild(*, force=False, raise_errors=True, _parent_namespace_depth=2, _types_namespace=None) classmethod

Try to rebuild the pydantic-core schema for the model.

This may be necessary when one of the annotations is a ForwardRef which could not be resolved during the initial attempt to build the schema, and automatic rebuilding fails.

Parameters:

• force (bool, default False): Whether to force the rebuilding of the model schema.
• raise_errors (bool, default True): Whether to raise errors.
• _parent_namespace_depth (int, default 2): The depth level of the parent namespace.
• _types_namespace (MappingNamespace | None, default None): The types namespace.

Returns:

• bool | None: Returns None if the schema is already "complete" and rebuilding was not required. If rebuilding was required, returns True if rebuilding was successful, otherwise False.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@classmethod
def model_rebuild(
    cls,
    *,
    force: bool = False,
    raise_errors: bool = True,
    _parent_namespace_depth: int = 2,
    _types_namespace: MappingNamespace | None = None,
) -> bool | None:
    """Try to rebuild the pydantic-core schema for the model.

    This may be necessary when one of the annotations is a ForwardRef which could not be resolved during
    the initial attempt to build the schema, and automatic rebuilding fails.

    Args:
        force: Whether to force the rebuilding of the model schema, defaults to `False`.
        raise_errors: Whether to raise errors, defaults to `True`.
        _parent_namespace_depth: The depth level of the parent namespace, defaults to 2.
        _types_namespace: The types namespace, defaults to `None`.

    Returns:
        Returns `None` if the schema is already "complete" and rebuilding was not required.
        If rebuilding _was_ required, returns `True` if rebuilding was successful, otherwise `False`.
    """
    already_complete = cls.__pydantic_complete__
    if already_complete and not force:
        return None

    cls.__pydantic_complete__ = False

    for attr in ('__pydantic_core_schema__', '__pydantic_validator__', '__pydantic_serializer__'):
        if attr in cls.__dict__ and not isinstance(getattr(cls, attr), _mock_val_ser.MockValSer):
            # Deleting the validator/serializer is necessary as otherwise they can get reused in
            # pydantic-core. We do so only if they aren't mock instances, otherwise — as `model_rebuild()`
            # isn't thread-safe — concurrent model instantiations can lead to the parent validator being used.
            # Same applies for the core schema that can be reused in schema generation.
            delattr(cls, attr)

    if _types_namespace is not None:
        rebuild_ns = _types_namespace
    elif _parent_namespace_depth > 0:
        rebuild_ns = _typing_extra.parent_frame_namespace(parent_depth=_parent_namespace_depth, force=True) or {}
    else:
        rebuild_ns = {}

    parent_ns = _model_construction.unpack_lenient_weakvaluedict(cls.__pydantic_parent_namespace__) or {}

    ns_resolver = _namespace_utils.NsResolver(
        parent_namespace={**rebuild_ns, **parent_ns},
    )

    return _model_construction.complete_model_class(
        cls,
        _config.ConfigWrapper(cls.model_config, check=False),
        ns_resolver,
        raise_errors=raise_errors,
        # If the model was already complete, we don't need to call the hook again.
        call_on_complete_hook=not already_complete,
    )

model_validate(obj, *, strict=None, extra=None, from_attributes=None, context=None, by_alias=None, by_name=None) classmethod

Validate a pydantic model instance.

Parameters:

• obj (Any, required): The object to validate.
• strict (bool | None, default None): Whether to enforce types strictly.
• extra (ExtraValues | None, default None): Whether to ignore, allow, or forbid extra data during model validation. See the [extra configuration value][pydantic.ConfigDict.extra] for details.
• from_attributes (bool | None, default None): Whether to extract data from object attributes.
• context (Any | None, default None): Additional context to pass to the validator.
• by_alias (bool | None, default None): Whether to use the field's alias when validating against the provided input data.
• by_name (bool | None, default None): Whether to use the field's name when validating against the provided input data.

Raises:

• ValidationError: If the object could not be validated.

Returns:

• Self: The validated model instance.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@classmethod
def model_validate(
    cls,
    obj: Any,
    *,
    strict: bool | None = None,
    extra: ExtraValues | None = None,
    from_attributes: bool | None = None,
    context: Any | None = None,
    by_alias: bool | None = None,
    by_name: bool | None = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to enforce types strictly.
        extra: Whether to ignore, allow, or forbid extra data during model validation.
            See the [`extra` configuration value][pydantic.ConfigDict.extra] for details.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.
        by_alias: Whether to use the field's alias when validating against the provided input data.
        by_name: Whether to use the field's name when validating against the provided input data.

    Raises:
        ValidationError: If the object could not be validated.

    Returns:
        The validated model instance.
    """
    # `__tracebackhide__` tells pytest and some other tools to omit this function from tracebacks
    __tracebackhide__ = True

    if by_alias is False and by_name is not True:
        raise PydanticUserError(
            'At least one of `by_alias` or `by_name` must be set to True.',
            code='validate-by-alias-and-name-false',
        )

    return cls.__pydantic_validator__.validate_python(
        obj,
        strict=strict,
        extra=extra,
        from_attributes=from_attributes,
        context=context,
        by_alias=by_alias,
        by_name=by_name,
    )
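
Two common uses, sketched with a hypothetical model: validating a plain dict, and validating an arbitrary object's attributes via from_attributes:

```python
from pydantic import BaseModel

class User(BaseModel):  # hypothetical model, for illustration only
    id: int
    name: str

# From a plain dict:
u1 = User.model_validate({"id": 1, "name": "Ada"})

class Row:  # stand-in for e.g. an ORM result object
    id = 2
    name = "Grace"

# From an object's attributes:
u2 = User.model_validate(Row(), from_attributes=True)
```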

model_validate_json(json_data, *, strict=None, extra=None, context=None, by_alias=None, by_name=None) classmethod

Usage Documentation

JSON Parsing

Validate the given JSON data against the Pydantic model.

Parameters:

• json_data (str | bytes | bytearray, required): The JSON data to validate.
• strict (bool | None, default None): Whether to enforce types strictly.
• extra (ExtraValues | None, default None): Whether to ignore, allow, or forbid extra data during model validation. See the [extra configuration value][pydantic.ConfigDict.extra] for details.
• context (Any | None, default None): Extra variables to pass to the validator.
• by_alias (bool | None, default None): Whether to use the field's alias when validating against the provided input data.
• by_name (bool | None, default None): Whether to use the field's name when validating against the provided input data.

Returns:

• Self: The validated Pydantic model.

Raises:

• ValidationError: If json_data is not a JSON string or the object could not be validated.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@classmethod
def model_validate_json(
    cls,
    json_data: str | bytes | bytearray,
    *,
    strict: bool | None = None,
    extra: ExtraValues | None = None,
    context: Any | None = None,
    by_alias: bool | None = None,
    by_name: bool | None = None,
) -> Self:
    """!!! abstract "Usage Documentation"
        [JSON Parsing](../concepts/json.md#json-parsing)

    Validate the given JSON data against the Pydantic model.

    Args:
        json_data: The JSON data to validate.
        strict: Whether to enforce types strictly.
        extra: Whether to ignore, allow, or forbid extra data during model validation.
            See the [`extra` configuration value][pydantic.ConfigDict.extra] for details.
        context: Extra variables to pass to the validator.
        by_alias: Whether to use the field's alias when validating against the provided input data.
        by_name: Whether to use the field's name when validating against the provided input data.

    Returns:
        The validated Pydantic model.

    Raises:
        ValidationError: If `json_data` is not a JSON string or the object could not be validated.
    """
    # `__tracebackhide__` tells pytest and some other tools to omit this function from tracebacks
    __tracebackhide__ = True

    if by_alias is False and by_name is not True:
        raise PydanticUserError(
            'At least one of `by_alias` or `by_name` must be set to True.',
            code='validate-by-alias-and-name-false',
        )

    return cls.__pydantic_validator__.validate_json(
        json_data, strict=strict, extra=extra, context=context, by_alias=by_alias, by_name=by_name
    )
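
A minimal sketch, including the ValidationError path for malformed JSON (the model is hypothetical):

```python
from pydantic import BaseModel, ValidationError

class Point(BaseModel):  # hypothetical model, for illustration only
    x: int
    y: int

p = Point.model_validate_json('{"x": 1, "y": 2}')

# Invalid JSON (or valid JSON that fails field validation) raises
# ValidationError rather than a plain json.JSONDecodeError.
try:
    Point.model_validate_json("not json")
    parse_failed = False
except ValidationError:
    parse_failed = True
```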

model_validate_strings(obj, *, strict=None, extra=None, context=None, by_alias=None, by_name=None) classmethod

Validate the given object with string data against the Pydantic model.

Parameters:

• obj (Any, required): The object containing string data to validate.
• strict (bool | None, default None): Whether to enforce types strictly.
• extra (ExtraValues | None, default None): Whether to ignore, allow, or forbid extra data during model validation. See the [extra configuration value][pydantic.ConfigDict.extra] for details.
• context (Any | None, default None): Extra variables to pass to the validator.
• by_alias (bool | None, default None): Whether to use the field's alias when validating against the provided input data.
• by_name (bool | None, default None): Whether to use the field's name when validating against the provided input data.

Returns:

• Self: The validated Pydantic model.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@classmethod
def model_validate_strings(
    cls,
    obj: Any,
    *,
    strict: bool | None = None,
    extra: ExtraValues | None = None,
    context: Any | None = None,
    by_alias: bool | None = None,
    by_name: bool | None = None,
) -> Self:
    """Validate the given object with string data against the Pydantic model.

    Args:
        obj: The object containing string data to validate.
        strict: Whether to enforce types strictly.
        extra: Whether to ignore, allow, or forbid extra data during model validation.
            See the [`extra` configuration value][pydantic.ConfigDict.extra] for details.
        context: Extra variables to pass to the validator.
        by_alias: Whether to use the field's alias when validating against the provided input data.
        by_name: Whether to use the field's name when validating against the provided input data.

    Returns:
        The validated Pydantic model.
    """
    # `__tracebackhide__` tells pytest and some other tools to omit this function from tracebacks
    __tracebackhide__ = True

    if by_alias is False and by_name is not True:
        raise PydanticUserError(
            'At least one of `by_alias` or `by_name` must be set to True.',
            code='validate-by-alias-and-name-false',
        )

    return cls.__pydantic_validator__.validate_strings(
        obj, strict=strict, extra=extra, context=context, by_alias=by_alias, by_name=by_name
    )
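
A sketch of string-mode coercion, as you would get after parsing environment variables or query parameters (the model is hypothetical):

```python
from datetime import date

from pydantic import BaseModel

class Event(BaseModel):  # hypothetical model, for illustration only
    id: int
    when: date

# Every leaf value is a string; each is coerced per its field type.
e = Event.model_validate_strings({"id": "42", "when": "2024-01-01"})
```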

unique_toolkit.content.schemas.ContentSearchType

Bases: StrEnum

Source code in unique_toolkit/unique_toolkit/content/schemas.py
class ContentSearchType(StrEnum):
    COMBINED = "COMBINED"
    VECTOR = "VECTOR"

unique_toolkit.content.schemas.ContentRerankerConfig

Bases: BaseModel

Source code in unique_toolkit/unique_toolkit/content/schemas.py
class ContentRerankerConfig(BaseModel):
    model_config = model_config
    deployment_name: str = Field(serialization_alias="deploymentName")
    options: dict | None = None

__class_vars__ class-attribute

The names of the class variables defined on the model.

__private_attributes__ class-attribute

Metadata about the private attributes of the model.

__pydantic_complete__ = False class-attribute

Whether model building is completed, or if there are still undefined fields.

__pydantic_computed_fields__ class-attribute

A dictionary of computed field names and their corresponding [ComputedFieldInfo][pydantic.fields.ComputedFieldInfo] objects.

__pydantic_core_schema__ class-attribute

The core schema of the model.

__pydantic_custom_init__ class-attribute

Whether the model has a custom __init__ method.

__pydantic_decorators__ = _decorators.DecoratorInfos() class-attribute

Metadata containing the decorators defined on the model. This replaces Model.__validators__ and Model.__root_validators__ from Pydantic V1.

__pydantic_extra__ = _model_construction.NoInitField(init=False) class-attribute instance-attribute

A dictionary containing extra values, if [extra][pydantic.config.ConfigDict.extra] is set to 'allow'.

__pydantic_fields__ class-attribute

A dictionary of field names and their corresponding [FieldInfo][pydantic.fields.FieldInfo] objects. This replaces Model.__fields__ from Pydantic V1.

__pydantic_fields_set__ = _model_construction.NoInitField(init=False) class-attribute instance-attribute

The names of fields explicitly set during instantiation.

__pydantic_generic_metadata__ class-attribute

Metadata for generic models; contains data used for a similar purpose to args, origin, parameters in typing-module generics. May eventually be replaced by these.

__pydantic_parent_namespace__ = None class-attribute

Parent namespace of the model, used for automatic rebuilding of models.

__pydantic_post_init__ class-attribute

The name of the post-init method for the model, if defined.

__pydantic_private__ = _model_construction.NoInitField(init=False) class-attribute instance-attribute

Values of private attributes set on the model instance.

__pydantic_root_model__ = False class-attribute

Whether the model is a [RootModel][pydantic.root_model.RootModel].

__pydantic_serializer__ class-attribute

The pydantic-core SchemaSerializer used to dump instances of the model.

__pydantic_setattr_handlers__ class-attribute

__setattr__ handlers. Memoizing the handlers leads to a dramatic performance improvement in __setattr__.

__pydantic_validator__ class-attribute

The pydantic-core SchemaValidator used to validate instances of the model.

__signature__ class-attribute

The synthesized __init__ [Signature][inspect.Signature] of the model.

model_extra property

Get extra fields set during validation.

Returns:

• dict[str, Any] | None: A dictionary of extra fields, or None if config.extra is not set to "allow".

model_fields_set property

Returns the set of fields that have been explicitly set on this model instance.

Returns:

• set[str]: A set of strings representing the fields that have been set, i.e. that were not filled from defaults.

__copy__()

Returns a shallow copy of the model.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
def __copy__(self) -> Self:
    """Returns a shallow copy of the model."""
    cls = type(self)
    m = cls.__new__(cls)
    _object_setattr(m, '__dict__', copy(self.__dict__))
    _object_setattr(m, '__pydantic_extra__', copy(self.__pydantic_extra__))
    _object_setattr(m, '__pydantic_fields_set__', copy(self.__pydantic_fields_set__))

    if not hasattr(self, '__pydantic_private__') or self.__pydantic_private__ is None:
        _object_setattr(m, '__pydantic_private__', None)
    else:
        _object_setattr(
            m,
            '__pydantic_private__',
            {k: v for k, v in self.__pydantic_private__.items() if v is not PydanticUndefined},
        )

    return m

__deepcopy__(memo=None)

Returns a deep copy of the model.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
def __deepcopy__(self, memo: dict[int, Any] | None = None) -> Self:
    """Returns a deep copy of the model."""
    cls = type(self)
    m = cls.__new__(cls)
    _object_setattr(m, '__dict__', deepcopy(self.__dict__, memo=memo))
    _object_setattr(m, '__pydantic_extra__', deepcopy(self.__pydantic_extra__, memo=memo))
    # This next line doesn't need a deepcopy because __pydantic_fields_set__ is a set[str],
    # and attempting a deepcopy would be marginally slower.
    _object_setattr(m, '__pydantic_fields_set__', copy(self.__pydantic_fields_set__))

    if not hasattr(self, '__pydantic_private__') or self.__pydantic_private__ is None:
        _object_setattr(m, '__pydantic_private__', None)
    else:
        _object_setattr(
            m,
            '__pydantic_private__',
            deepcopy({k: v for k, v in self.__pydantic_private__.items() if v is not PydanticUndefined}, memo=memo),
        )

    return m
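
The difference between the two hooks is visible through the standard copy module (the model here is hypothetical):

```python
from copy import copy, deepcopy

from pydantic import BaseModel

class Box(BaseModel):  # hypothetical model, for illustration only
    items: list[int]

b = Box(items=[1, 2])

shallow = copy(b)    # invokes __copy__: shares the same nested list object
deep = deepcopy(b)   # invokes __deepcopy__: clones nested containers too

shallow.items.append(3)  # visible through b, but not through deep
```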

__get_pydantic_json_schema__(core_schema, handler) classmethod

Hook into generating the model's JSON schema.

Parameters:

• core_schema (CoreSchema, required): A pydantic-core CoreSchema. You can ignore this argument and call the handler with a new CoreSchema, wrap this CoreSchema ({'type': 'nullable', 'schema': current_schema}), or just call the handler with the original schema.
• handler (GetJsonSchemaHandler, required): Call into Pydantic's internal JSON schema generation. This will raise a pydantic.errors.PydanticInvalidForJsonSchema if JSON schema generation fails. Since this gets called by BaseModel.model_json_schema you can override the schema_generator argument to that function to change JSON schema generation globally for a type.

Returns:

• JsonSchemaValue: A JSON schema, as a Python object.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@classmethod
def __get_pydantic_json_schema__(
    cls,
    core_schema: CoreSchema,
    handler: GetJsonSchemaHandler,
    /,
) -> JsonSchemaValue:
    """Hook into generating the model's JSON schema.

    Args:
        core_schema: A `pydantic-core` CoreSchema.
            You can ignore this argument and call the handler with a new CoreSchema,
            wrap this CoreSchema (`{'type': 'nullable', 'schema': current_schema}`),
            or just call the handler with the original schema.
        handler: Call into Pydantic's internal JSON schema generation.
            This will raise a `pydantic.errors.PydanticInvalidForJsonSchema` if JSON schema
            generation fails.
            Since this gets called by `BaseModel.model_json_schema` you can override the
            `schema_generator` argument to that function to change JSON schema generation globally
            for a type.

    Returns:
        A JSON schema, as a Python object.
    """
    return handler(core_schema)
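
A sketch of the "call the handler, then post-process" pattern described above (the model and the added description are hypothetical; resolve_ref_schema unwraps a possible $ref before modifying):

```python
from pydantic import BaseModel

class Tagged(BaseModel):  # hypothetical model, for illustration only
    name: str

    @classmethod
    def __get_pydantic_json_schema__(cls, core_schema, handler):
        # Generate the default schema, resolve any $ref indirection,
        # then attach extra metadata to the concrete schema.
        json_schema = handler(core_schema)
        json_schema = handler.resolve_ref_schema(json_schema)
        json_schema["description"] = "A tagged item"
        return json_schema

schema = Tagged.model_json_schema()
```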

__init__(**data)

Create a new model by parsing and validating input data from keyword arguments.

Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
def __init__(self, /, **data: Any) -> None:
    """Create a new model by parsing and validating input data from keyword arguments.

    Raises [`ValidationError`][pydantic_core.ValidationError] if the input data cannot be
    validated to form a valid model.

    `self` is explicitly positional-only to allow `self` as a field name.
    """
    # `__tracebackhide__` tells pytest and some other tools to omit this function from tracebacks
    __tracebackhide__ = True
    validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
    if self is not validated_self:
        warnings.warn(
            'A custom validator is returning a value other than `self`.\n'
            "Returning anything other than `self` from a top level model validator isn't supported when validating via `__init__`.\n"
            'See the `model_validator` docs (https://docs.pydantic.dev/latest/concepts/validators/#model-validators) for more details.',
            stacklevel=2,
        )
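For example (with a hypothetical `User` model), valid keyword arguments are parsed and validated, and invalid input raises `ValidationError`:

```python
from pydantic import BaseModel, ValidationError


class User(BaseModel):
    name: str
    age: int


user = User(name='Ana', age=30)  # parsed and validated from keyword arguments

try:
    User(name='Ana', age='not a number')
    raised = False
except ValidationError as exc:
    raised = True
    # errors() reports the location and kind of each failure
    first_error = exc.errors()[0]
```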

__init_subclass__(**kwargs)

This signature is included purely to help type-checkers check arguments to class declaration, which provides a way to conveniently set model_config key/value pairs.

from pydantic import BaseModel

class MyModel(BaseModel, extra='allow'): ...

However, this may be deceiving, since the actual calls to __init_subclass__ will not receive any of the config arguments, and will only receive any keyword arguments passed during class initialization that are not expected keys in ConfigDict. (This is due to the way ModelMetaclass.__new__ works.)

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `**kwargs` | `Unpack[ConfigDict]` | Keyword arguments passed to the class definition, which set `model_config`. | `{}` |
Note

You may want to override __pydantic_init_subclass__ instead, which behaves similarly but is called after the class is fully initialized.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
def __init_subclass__(cls, **kwargs: Unpack[ConfigDict]):
    """This signature is included purely to help type-checkers check arguments to class declaration, which
    provides a way to conveniently set model_config key/value pairs.

    ```python
    from pydantic import BaseModel

    class MyModel(BaseModel, extra='allow'): ...
    ```

    However, this may be deceiving, since the _actual_ calls to `__init_subclass__` will not receive any
    of the config arguments, and will only receive any keyword arguments passed during class initialization
    that are _not_ expected keys in ConfigDict. (This is due to the way `ModelMetaclass.__new__` works.)

    Args:
        **kwargs: Keyword arguments passed to the class definition, which set model_config

    Note:
        You may want to override `__pydantic_init_subclass__` instead, which behaves similarly but is called
        *after* the class is fully initialized.
    """

__iter__()

So dict(model) works.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
def __iter__(self) -> TupleGenerator:
    """So `dict(model)` works."""
    yield from [(k, v) for (k, v) in self.__dict__.items() if not k.startswith('_')]
    extra = self.__pydantic_extra__
    if extra:
        yield from extra.items()
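Because of this, `dict(model)` yields field values first and then any extra values (sketched here with a hypothetical `Item` model using `extra='allow'`):

```python
from pydantic import BaseModel, ConfigDict


class Item(BaseModel):
    model_config = ConfigDict(extra='allow')
    name: str


item = Item(name='widget', color='red')  # 'color' goes to __pydantic_extra__
as_dict = dict(item)  # raw field values, not serialized output
```

Note that unlike `model_dump()`, this iteration yields raw attribute values without serialization.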

__pydantic_init_subclass__(**kwargs) classmethod

This is intended to behave just like __init_subclass__, but is called by ModelMetaclass only after basic class initialization is complete. In particular, attributes like model_fields will be present when this is called, but forward annotations are not guaranteed to be resolved yet, meaning that creating an instance of the class may fail.

This is necessary because __init_subclass__ will always be called by type.__new__, and it would require a prohibitively large refactor to the ModelMetaclass to ensure that type.__new__ was called in such a manner that the class would already be sufficiently initialized.

This will receive the same kwargs that would be passed to the standard __init_subclass__, namely, any kwargs passed to the class definition that aren't used internally by Pydantic.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `**kwargs` | `Any` | Any keyword arguments passed to the class definition that aren't used internally by Pydantic. | `{}` |
Note

You may want to override __pydantic_on_complete__() instead, which is called once the class and its fields are fully initialized and ready for validation.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@classmethod
def __pydantic_init_subclass__(cls, **kwargs: Any) -> None:
    """This is intended to behave just like `__init_subclass__`, but is called by `ModelMetaclass`
    only after basic class initialization is complete. In particular, attributes like `model_fields` will
    be present when this is called, but forward annotations are not guaranteed to be resolved yet,
    meaning that creating an instance of the class may fail.

    This is necessary because `__init_subclass__` will always be called by `type.__new__`,
    and it would require a prohibitively large refactor to the `ModelMetaclass` to ensure that
    `type.__new__` was called in such a manner that the class would already be sufficiently initialized.

    This will receive the same `kwargs` that would be passed to the standard `__init_subclass__`, namely,
    any kwargs passed to the class definition that aren't used internally by Pydantic.

    Args:
        **kwargs: Any keyword arguments passed to the class definition that aren't used internally
            by Pydantic.

    Note:
        You may want to override [`__pydantic_on_complete__()`][pydantic.main.BaseModel.__pydantic_on_complete__]
        instead, which is called once the class and its fields are fully initialized and ready for validation.
    """

__pydantic_on_complete__() classmethod

This is called once the class and its fields are fully initialized and ready to be used.

This typically happens when the class is created (just before __pydantic_init_subclass__() is called on the superclass), except when forward annotations are used that could not immediately be resolved. In that case, it will be called later, when the model is rebuilt automatically or explicitly using model_rebuild().

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@classmethod
def __pydantic_on_complete__(cls) -> None:
    """This is called once the class and its fields are fully initialized and ready to be used.

    This typically happens when the class is created (just before
    [`__pydantic_init_subclass__()`][pydantic.main.BaseModel.__pydantic_init_subclass__] is called on the superclass),
    except when forward annotations are used that could not immediately be resolved.
    In that case, it will be called later, when the model is rebuilt automatically or explicitly using
    [`model_rebuild()`][pydantic.main.BaseModel.model_rebuild].
    """

copy(*, include=None, exclude=None, update=None, deep=False)

Returns a copy of the model.

Deprecated

This method is now deprecated; use model_copy instead.

If you need include or exclude, use:

data = self.model_dump(include=include, exclude=exclude, round_trip=True)
data = {**data, **(update or {})}
copied = self.model_validate(data)

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `include` | `AbstractSetIntStr \| MappingIntStrAny \| None` | Optional set or mapping specifying which fields to include in the copied model. | `None` |
| `exclude` | `AbstractSetIntStr \| MappingIntStrAny \| None` | Optional set or mapping specifying which fields to exclude in the copied model. | `None` |
| `update` | `Dict[str, Any] \| None` | Optional dictionary of field-value pairs to override field values in the copied model. | `None` |
| `deep` | `bool` | If `True`, the values of fields that are Pydantic models will be deep-copied. | `False` |

Returns:

| Type | Description |
| --- | --- |
| `Self` | A copy of the model with included, excluded and updated fields as specified. |

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@typing_extensions.deprecated(
    'The `copy` method is deprecated; use `model_copy` instead. '
    'See the docstring of `BaseModel.copy` for details about how to handle `include` and `exclude`.',
    category=None,
)
def copy(
    self,
    *,
    include: AbstractSetIntStr | MappingIntStrAny | None = None,
    exclude: AbstractSetIntStr | MappingIntStrAny | None = None,
    update: Dict[str, Any] | None = None,  # noqa UP006
    deep: bool = False,
) -> Self:  # pragma: no cover
    """Returns a copy of the model.

    !!! warning "Deprecated"
        This method is now deprecated; use `model_copy` instead.

    If you need `include` or `exclude`, use:

    ```python {test="skip" lint="skip"}
    data = self.model_dump(include=include, exclude=exclude, round_trip=True)
    data = {**data, **(update or {})}
    copied = self.model_validate(data)
    ```

    Args:
        include: Optional set or mapping specifying which fields to include in the copied model.
        exclude: Optional set or mapping specifying which fields to exclude in the copied model.
        update: Optional dictionary of field-value pairs to override field values in the copied model.
        deep: If True, the values of fields that are Pydantic models will be deep-copied.

    Returns:
        A copy of the model with included, excluded and updated fields as specified.
    """
    warnings.warn(
        'The `copy` method is deprecated; use `model_copy` instead. '
        'See the docstring of `BaseModel.copy` for details about how to handle `include` and `exclude`.',
        category=PydanticDeprecatedSince20,
        stacklevel=2,
    )
    from .deprecated import copy_internals

    values = dict(
        copy_internals._iter(
            self, to_dict=False, by_alias=False, include=include, exclude=exclude, exclude_unset=False
        ),
        **(update or {}),
    )
    if self.__pydantic_private__ is None:
        private = None
    else:
        private = {k: v for k, v in self.__pydantic_private__.items() if v is not PydanticUndefined}

    if self.__pydantic_extra__ is None:
        extra: dict[str, Any] | None = None
    else:
        extra = self.__pydantic_extra__.copy()
        for k in list(self.__pydantic_extra__):
            if k not in values:  # k was in the exclude
                extra.pop(k)
        for k in list(values):
            if k in self.__pydantic_extra__:  # k must have come from extra
                extra[k] = values.pop(k)

    # new `__pydantic_fields_set__` can have unset optional fields with a set value in `update` kwarg
    if update:
        fields_set = self.__pydantic_fields_set__ | update.keys()
    else:
        fields_set = set(self.__pydantic_fields_set__)

    # removing excluded fields from `__pydantic_fields_set__`
    if exclude:
        fields_set -= set(exclude)

    return copy_internals._copy_and_set_values(self, values, fields_set, extra, private, deep=deep)
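A migration sketch off the deprecated method (using a hypothetical `Settings` model): plain copies become `model_copy`, and `include`/`exclude` filtering round-trips through `model_dump` and `model_validate` as the docstring suggests:

```python
from pydantic import BaseModel


class Settings(BaseModel):
    host: str
    port: int


s = Settings(host='localhost', port=8080)

# Instead of the deprecated s.copy(update={'port': 9090}):
clone = s.model_copy(update={'port': 9090})

# Instead of the deprecated s.copy(include=..., update=...):
data = s.model_dump(include={'host'}, round_trip=True)
rebuilt = Settings.model_validate({**data, 'port': 9090})
```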

model_computed_fields() classmethod

A mapping of computed field names to their respective [ComputedFieldInfo][pydantic.fields.ComputedFieldInfo] instances.

Warning

Accessing this attribute from a model instance is deprecated, and will not work in Pydantic V3. Instead, you should access this attribute from the model class.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@_utils.deprecated_instance_property
@classmethod
def model_computed_fields(cls) -> dict[str, ComputedFieldInfo]:
    """A mapping of computed field names to their respective [`ComputedFieldInfo`][pydantic.fields.ComputedFieldInfo] instances.

    !!! warning
        Accessing this attribute from a model instance is deprecated, and will not work in Pydantic V3.
        Instead, you should access this attribute from the model class.
    """
    return getattr(cls, '__pydantic_computed_fields__', {})
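For instance (with a hypothetical `Rect` model), a property declared with `@computed_field` appears in this mapping when accessed on the class:

```python
from pydantic import BaseModel, computed_field


class Rect(BaseModel):
    width: float
    height: float

    @computed_field
    @property
    def area(self) -> float:
        return self.width * self.height


# Access from the class, not an instance (instance access is deprecated).
computed = Rect.model_computed_fields
```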

model_construct(_fields_set=None, **values) classmethod

Creates a new instance of the Model class with validated data.

Creates a new model setting __dict__ and __pydantic_fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed.

Note

model_construct() generally respects the model_config.extra setting on the provided model. That is, if model_config.extra == 'allow', then all extra passed values are added to the model instance's __dict__ and __pydantic_extra__ fields. If model_config.extra == 'ignore' (the default), then all extra passed values are ignored. Because no validation is performed with a call to model_construct(), having model_config.extra == 'forbid' does not result in an error if extra values are passed, but they will be ignored.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `_fields_set` | `set[str] \| None` | A set of field names that were originally explicitly set during instantiation. If provided, this is directly used for the `model_fields_set` attribute. Otherwise, the field names from the `values` argument will be used. | `None` |
| `values` | `Any` | Trusted or pre-validated data dictionary. | `{}` |

Returns:

| Type | Description |
| --- | --- |
| `Self` | A new instance of the `Model` class with validated data. |

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@classmethod
def model_construct(cls, _fields_set: set[str] | None = None, **values: Any) -> Self:  # noqa: C901
    """Creates a new instance of the `Model` class with validated data.

    Creates a new model setting `__dict__` and `__pydantic_fields_set__` from trusted or pre-validated data.
    Default values are respected, but no other validation is performed.

    !!! note
        `model_construct()` generally respects the `model_config.extra` setting on the provided model.
        That is, if `model_config.extra == 'allow'`, then all extra passed values are added to the model instance's `__dict__`
        and `__pydantic_extra__` fields. If `model_config.extra == 'ignore'` (the default), then all extra passed values are ignored.
        Because no validation is performed with a call to `model_construct()`, having `model_config.extra == 'forbid'` does not result in
        an error if extra values are passed, but they will be ignored.

    Args:
        _fields_set: A set of field names that were originally explicitly set during instantiation. If provided,
            this is directly used for the [`model_fields_set`][pydantic.BaseModel.model_fields_set] attribute.
            Otherwise, the field names from the `values` argument will be used.
        values: Trusted or pre-validated data dictionary.

    Returns:
        A new instance of the `Model` class with validated data.
    """
    m = cls.__new__(cls)
    fields_values: dict[str, Any] = {}
    fields_set = set()

    for name, field in cls.__pydantic_fields__.items():
        if field.alias is not None and field.alias in values:
            fields_values[name] = values.pop(field.alias)
            fields_set.add(name)

        if (name not in fields_set) and (field.validation_alias is not None):
            validation_aliases: list[str | AliasPath] = (
                field.validation_alias.choices
                if isinstance(field.validation_alias, AliasChoices)
                else [field.validation_alias]
            )

            for alias in validation_aliases:
                if isinstance(alias, str) and alias in values:
                    fields_values[name] = values.pop(alias)
                    fields_set.add(name)
                    break
                elif isinstance(alias, AliasPath):
                    value = alias.search_dict_for_path(values)
                    if value is not PydanticUndefined:
                        fields_values[name] = value
                        fields_set.add(name)
                        break

        if name not in fields_set:
            if name in values:
                fields_values[name] = values.pop(name)
                fields_set.add(name)
            elif not field.is_required():
                fields_values[name] = field.get_default(call_default_factory=True, validated_data=fields_values)
    if _fields_set is None:
        _fields_set = fields_set

    _extra: dict[str, Any] | None = values if cls.model_config.get('extra') == 'allow' else None
    _object_setattr(m, '__dict__', fields_values)
    _object_setattr(m, '__pydantic_fields_set__', _fields_set)
    if not cls.__pydantic_root_model__:
        _object_setattr(m, '__pydantic_extra__', _extra)

    if cls.__pydantic_post_init__:
        m.model_post_init(None)
        # update private attributes with values set
        if hasattr(m, '__pydantic_private__') and m.__pydantic_private__ is not None:
            for k, v in values.items():
                if k in m.__private_attributes__:
                    m.__pydantic_private__[k] = v

    elif not cls.__pydantic_root_model__:
        # Note: if there are any private attributes, cls.__pydantic_post_init__ would exist
        # Since it doesn't, that means that `__pydantic_private__` should be set to None
        _object_setattr(m, '__pydantic_private__', None)

    return m
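To illustrate the "defaults respected, no validation performed" behavior (with a hypothetical `Event` model):

```python
from pydantic import BaseModel


class Event(BaseModel):
    name: str
    priority: int = 0


# Trusted data: defaults are applied, but validators never run.
ev = Event.model_construct(name='deploy')

# Because nothing is validated, wrong types pass through unchanged.
unchecked = Event.model_construct(name='deploy', priority='high')
```

This is why `model_construct()` should only ever receive trusted or pre-validated data.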

model_copy(*, update=None, deep=False)

Usage Documentation

model_copy

Returns a copy of the model.

Note

The underlying instance's [__dict__][object.__dict__] attribute is copied. This might have unexpected side effects if you store anything in it, on top of the model fields (e.g. the value of [cached properties][functools.cached_property]).

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `update` | `Mapping[str, Any] \| None` | Values to change/add in the new model. Note: the data is not validated before creating the new model. You should trust this data. | `None` |
| `deep` | `bool` | Set to `True` to make a deep copy of the model. | `False` |

Returns:

| Type | Description |
| --- | --- |
| `Self` | New model instance. |

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
def model_copy(self, *, update: Mapping[str, Any] | None = None, deep: bool = False) -> Self:
    """!!! abstract "Usage Documentation"
        [`model_copy`](../concepts/models.md#model-copy)

    Returns a copy of the model.

    !!! note
        The underlying instance's [`__dict__`][object.__dict__] attribute is copied. This
        might have unexpected side effects if you store anything in it, on top of the model
        fields (e.g. the value of [cached properties][functools.cached_property]).

    Args:
        update: Values to change/add in the new model. Note: the data is not validated
            before creating the new model. You should trust this data.
        deep: Set to `True` to make a deep copy of the model.

    Returns:
        New model instance.
    """
    copied = self.__deepcopy__() if deep else self.__copy__()
    if update:
        if self.model_config.get('extra') == 'allow':
            for k, v in update.items():
                if k in self.__pydantic_fields__:
                    copied.__dict__[k] = v
                else:
                    if copied.__pydantic_extra__ is None:
                        copied.__pydantic_extra__ = {}
                    copied.__pydantic_extra__[k] = v
        else:
            copied.__dict__.update(update)
        copied.__pydantic_fields_set__.update(update.keys())
    return copied
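A sketch of the shallow-vs-deep distinction (the `Config` model is illustrative); note that `update` values are applied without validation:

```python
from pydantic import BaseModel


class Config(BaseModel):
    retries: int
    tags: list[str]


base = Config(retries=3, tags=['a'])

# Shallow by default: nested mutable values are shared with the original.
patched = base.model_copy(update={'retries': 5})

# deep=True duplicates nested values, so mutations stay isolated.
isolated = base.model_copy(deep=True)
isolated.tags.append('b')
```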

model_dump(*, mode='python', include=None, exclude=None, context=None, by_alias=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, exclude_computed_fields=False, round_trip=False, warnings=True, fallback=None, serialize_as_any=False)

Usage Documentation

model_dump

Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `mode` | `Literal['json', 'python'] \| str` | The mode in which `to_python` should run. If mode is `'json'`, the output will only contain JSON serializable types. If mode is `'python'`, the output may contain non-JSON-serializable Python objects. | `'python'` |
| `include` | `IncEx \| None` | A set of fields to include in the output. | `None` |
| `exclude` | `IncEx \| None` | A set of fields to exclude from the output. | `None` |
| `context` | `Any \| None` | Additional context to pass to the serializer. | `None` |
| `by_alias` | `bool \| None` | Whether to use the field's alias in the dictionary key if defined. | `None` |
| `exclude_unset` | `bool` | Whether to exclude fields that have not been explicitly set. | `False` |
| `exclude_defaults` | `bool` | Whether to exclude fields that are set to their default value. | `False` |
| `exclude_none` | `bool` | Whether to exclude fields that have a value of `None`. | `False` |
| `exclude_computed_fields` | `bool` | Whether to exclude computed fields. While this can be useful for round-tripping, it is usually recommended to use the dedicated `round_trip` parameter instead. | `False` |
| `round_trip` | `bool` | If `True`, dumped values should be valid as input for non-idempotent types such as `Json[T]`. | `False` |
| `warnings` | `bool \| Literal['none', 'warn', 'error']` | How to handle serialization errors. `False`/`"none"` ignores them, `True`/`"warn"` logs errors, `"error"` raises a [`PydanticSerializationError`][pydantic_core.PydanticSerializationError]. | `True` |
| `fallback` | `Callable[[Any], Any] \| None` | A function to call when an unknown value is encountered. If not provided, a [`PydanticSerializationError`][pydantic_core.PydanticSerializationError] error is raised. | `None` |
| `serialize_as_any` | `bool` | Whether to serialize fields with duck-typing serialization behavior. | `False` |

Returns:

| Type | Description |
| --- | --- |
| `dict[str, Any]` | A dictionary representation of the model. |

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
def model_dump(
    self,
    *,
    mode: Literal['json', 'python'] | str = 'python',
    include: IncEx | None = None,
    exclude: IncEx | None = None,
    context: Any | None = None,
    by_alias: bool | None = None,
    exclude_unset: bool = False,
    exclude_defaults: bool = False,
    exclude_none: bool = False,
    exclude_computed_fields: bool = False,
    round_trip: bool = False,
    warnings: bool | Literal['none', 'warn', 'error'] = True,
    fallback: Callable[[Any], Any] | None = None,
    serialize_as_any: bool = False,
) -> dict[str, Any]:
    """!!! abstract "Usage Documentation"
        [`model_dump`](../concepts/serialization.md#python-mode)

    Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

    Args:
        mode: The mode in which `to_python` should run.
            If mode is 'json', the output will only contain JSON serializable types.
            If mode is 'python', the output may contain non-JSON-serializable Python objects.
        include: A set of fields to include in the output.
        exclude: A set of fields to exclude from the output.
        context: Additional context to pass to the serializer.
        by_alias: Whether to use the field's alias in the dictionary key if defined.
        exclude_unset: Whether to exclude fields that have not been explicitly set.
        exclude_defaults: Whether to exclude fields that are set to their default value.
        exclude_none: Whether to exclude fields that have a value of `None`.
        exclude_computed_fields: Whether to exclude computed fields.
            While this can be useful for round-tripping, it is usually recommended to use the dedicated
            `round_trip` parameter instead.
        round_trip: If True, dumped values should be valid as input for non-idempotent types such as Json[T].
        warnings: How to handle serialization errors. False/"none" ignores them, True/"warn" logs errors,
            "error" raises a [`PydanticSerializationError`][pydantic_core.PydanticSerializationError].
        fallback: A function to call when an unknown value is encountered. If not provided,
            a [`PydanticSerializationError`][pydantic_core.PydanticSerializationError] error is raised.
        serialize_as_any: Whether to serialize fields with duck-typing serialization behavior.

    Returns:
        A dictionary representation of the model.
    """
    return self.__pydantic_serializer__.to_python(
        self,
        mode=mode,
        by_alias=by_alias,
        include=include,
        exclude=exclude,
        context=context,
        exclude_unset=exclude_unset,
        exclude_defaults=exclude_defaults,
        exclude_none=exclude_none,
        exclude_computed_fields=exclude_computed_fields,
        round_trip=round_trip,
        warnings=warnings,
        fallback=fallback,
        serialize_as_any=serialize_as_any,
    )

model_dump_json(*, indent=None, ensure_ascii=False, include=None, exclude=None, context=None, by_alias=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, exclude_computed_fields=False, round_trip=False, warnings=True, fallback=None, serialize_as_any=False)

Usage Documentation

model_dump_json

Generates a JSON representation of the model using Pydantic's to_json method.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `indent` | `int \| None` | Indentation to use in the JSON output. If `None` is passed, the output will be compact. | `None` |
| `ensure_ascii` | `bool` | If `True`, the output is guaranteed to have all incoming non-ASCII characters escaped. If `False` (the default), these characters will be output as-is. | `False` |
| `include` | `IncEx \| None` | Field(s) to include in the JSON output. | `None` |
| `exclude` | `IncEx \| None` | Field(s) to exclude from the JSON output. | `None` |
| `context` | `Any \| None` | Additional context to pass to the serializer. | `None` |
| `by_alias` | `bool \| None` | Whether to serialize using field aliases. | `None` |
| `exclude_unset` | `bool` | Whether to exclude fields that have not been explicitly set. | `False` |
| `exclude_defaults` | `bool` | Whether to exclude fields that are set to their default value. | `False` |
| `exclude_none` | `bool` | Whether to exclude fields that have a value of `None`. | `False` |
| `exclude_computed_fields` | `bool` | Whether to exclude computed fields. While this can be useful for round-tripping, it is usually recommended to use the dedicated `round_trip` parameter instead. | `False` |
| `round_trip` | `bool` | If `True`, dumped values should be valid as input for non-idempotent types such as `Json[T]`. | `False` |
| `warnings` | `bool \| Literal['none', 'warn', 'error']` | How to handle serialization errors. `False`/`"none"` ignores them, `True`/`"warn"` logs errors, `"error"` raises a [`PydanticSerializationError`][pydantic_core.PydanticSerializationError]. | `True` |
| `fallback` | `Callable[[Any], Any] \| None` | A function to call when an unknown value is encountered. If not provided, a [`PydanticSerializationError`][pydantic_core.PydanticSerializationError] error is raised. | `None` |
| `serialize_as_any` | `bool` | Whether to serialize fields with duck-typing serialization behavior. | `False` |

Returns:

| Type | Description |
| --- | --- |
| `str` | A JSON string representation of the model. |

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
def model_dump_json(
    self,
    *,
    indent: int | None = None,
    ensure_ascii: bool = False,
    include: IncEx | None = None,
    exclude: IncEx | None = None,
    context: Any | None = None,
    by_alias: bool | None = None,
    exclude_unset: bool = False,
    exclude_defaults: bool = False,
    exclude_none: bool = False,
    exclude_computed_fields: bool = False,
    round_trip: bool = False,
    warnings: bool | Literal['none', 'warn', 'error'] = True,
    fallback: Callable[[Any], Any] | None = None,
    serialize_as_any: bool = False,
) -> str:
    """!!! abstract "Usage Documentation"
        [`model_dump_json`](../concepts/serialization.md#json-mode)

    Generates a JSON representation of the model using Pydantic's `to_json` method.

    Args:
        indent: Indentation to use in the JSON output. If None is passed, the output will be compact.
        ensure_ascii: If `True`, the output is guaranteed to have all incoming non-ASCII characters escaped.
            If `False` (the default), these characters will be output as-is.
        include: Field(s) to include in the JSON output.
        exclude: Field(s) to exclude from the JSON output.
        context: Additional context to pass to the serializer.
        by_alias: Whether to serialize using field aliases.
        exclude_unset: Whether to exclude fields that have not been explicitly set.
        exclude_defaults: Whether to exclude fields that are set to their default value.
        exclude_none: Whether to exclude fields that have a value of `None`.
        exclude_computed_fields: Whether to exclude computed fields.
            While this can be useful for round-tripping, it is usually recommended to use the dedicated
            `round_trip` parameter instead.
        round_trip: If True, dumped values should be valid as input for non-idempotent types such as Json[T].
        warnings: How to handle serialization errors. False/"none" ignores them, True/"warn" logs errors,
            "error" raises a [`PydanticSerializationError`][pydantic_core.PydanticSerializationError].
        fallback: A function to call when an unknown value is encountered. If not provided,
            a [`PydanticSerializationError`][pydantic_core.PydanticSerializationError] error is raised.
        serialize_as_any: Whether to serialize fields with duck-typing serialization behavior.

    Returns:
        A JSON string representation of the model.
    """
    return self.__pydantic_serializer__.to_json(
        self,
        indent=indent,
        ensure_ascii=ensure_ascii,
        include=include,
        exclude=exclude,
        context=context,
        by_alias=by_alias,
        exclude_unset=exclude_unset,
        exclude_defaults=exclude_defaults,
        exclude_none=exclude_none,
        exclude_computed_fields=exclude_computed_fields,
        round_trip=round_trip,
        warnings=warnings,
        fallback=fallback,
        serialize_as_any=serialize_as_any,
    ).decode()

model_fields() classmethod

A mapping of field names to their respective [FieldInfo][pydantic.fields.FieldInfo] instances.

Warning

Accessing this attribute from a model instance is deprecated, and will not work in Pydantic V3. Instead, you should access this attribute from the model class.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
262
263
264
265
266
267
268
269
270
271
@_utils.deprecated_instance_property
@classmethod
def model_fields(cls) -> dict[str, FieldInfo]:
    """A mapping of field names to their respective [`FieldInfo`][pydantic.fields.FieldInfo] instances.

    !!! warning
        Accessing this attribute from a model instance is deprecated, and will not work in Pydantic V3.
        Instead, you should access this attribute from the model class.
    """
    return getattr(cls, '__pydantic_fields__', {})
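As an illustration (the `Product` model is hypothetical), the mapping exposes each field's `FieldInfo`, including requiredness and metadata, when accessed on the class:

```python
from pydantic import BaseModel, Field


class Product(BaseModel):
    sku: str
    price: float = Field(gt=0, description='Unit price')


# Access from the class, not an instance (instance access is deprecated).
fields = Product.model_fields
```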

model_json_schema(by_alias=True, ref_template=DEFAULT_REF_TEMPLATE, schema_generator=GenerateJsonSchema, mode='validation', *, union_format='any_of') classmethod

Generates a JSON schema for a model class.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `by_alias` | `bool` | Whether to use attribute aliases or not. | `True` |
| `ref_template` | `str` | The reference template. | `DEFAULT_REF_TEMPLATE` |
| `union_format` | `Literal['any_of', 'primitive_type_array']` | The format to use when combining schemas from unions together. `'any_of'` uses the `anyOf` keyword to combine schemas (the default). `'primitive_type_array'` uses the `type` keyword as an array of strings containing each type of the combination; if any of the schemas is not a primitive type (`string`, `boolean`, `null`, `integer` or `number`) or contains constraints/metadata, it falls back to `any_of`. | `'any_of'` |
| `schema_generator` | `type[GenerateJsonSchema]` | To override the logic used to generate the JSON schema, as a subclass of `GenerateJsonSchema` with your desired modifications. | `GenerateJsonSchema` |
| `mode` | `JsonSchemaMode` | The mode in which to generate the schema. | `'validation'` |

Returns:

| Type | Description |
| --- | --- |
| `dict[str, Any]` | The JSON schema for the given model class. |

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@classmethod
def model_json_schema(
    cls,
    by_alias: bool = True,
    ref_template: str = DEFAULT_REF_TEMPLATE,
    schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema,
    mode: JsonSchemaMode = 'validation',
    *,
    union_format: Literal['any_of', 'primitive_type_array'] = 'any_of',
) -> dict[str, Any]:
    """Generates a JSON schema for a model class.

    Args:
        by_alias: Whether to use attribute aliases or not.
        ref_template: The reference template.
        union_format: The format to use when combining schemas from unions together. Can be one of:

            - `'any_of'`: Use the [`anyOf`](https://json-schema.org/understanding-json-schema/reference/combining#anyOf)
            keyword to combine schemas (the default).
            - `'primitive_type_array'`: Use the [`type`](https://json-schema.org/understanding-json-schema/reference/type)
            keyword as an array of strings, containing each type of the combination. If any of the schemas is not a primitive
            type (`string`, `boolean`, `null`, `integer` or `number`) or contains constraints/metadata, falls back to
            `any_of`.
        schema_generator: To override the logic used to generate the JSON schema, as a subclass of
            `GenerateJsonSchema` with your desired modifications
        mode: The mode in which to generate the schema.

    Returns:
        The JSON schema for the given model class.
    """
    return model_json_schema(
        cls,
        by_alias=by_alias,
        ref_template=ref_template,
        union_format=union_format,
        schema_generator=schema_generator,
        mode=mode,
    )
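A minimal sketch of schema generation with the defaults (the `SearchRequest` model is made up for illustration):

```python
from pydantic import BaseModel

class SearchRequest(BaseModel):
    query: str
    limit: int = 10

# With the default mode='validation' and by_alias=True:
schema = SearchRequest.model_json_schema()
```

Fields with defaults are omitted from `required`, and their defaults appear in `properties`.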

model_parametrized_name(params) classmethod

Compute the class name for parametrizations of generic classes.

This method can be overridden to achieve a custom naming scheme for generic BaseModels.

Parameters:

Name Type Description Default
params tuple[type[Any], ...]

Tuple of types of the class. Given a generic class Model with 2 type variables and a concrete model Model[str, int], the value (str, int) would be passed to params.

required

Returns:

Type Description
str

String representing the new class where params are passed to cls as type variables.

Raises:

Type Description
TypeError

Raised when trying to generate concrete names for non-generic models.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@classmethod
def model_parametrized_name(cls, params: tuple[type[Any], ...]) -> str:
    """Compute the class name for parametrizations of generic classes.

    This method can be overridden to achieve a custom naming scheme for generic BaseModels.

    Args:
        params: Tuple of types of the class. Given a generic class
            `Model` with 2 type variables and a concrete model `Model[str, int]`,
            the value `(str, int)` would be passed to `params`.

    Returns:
        String representing the new class where `params` are passed to `cls` as type variables.

    Raises:
        TypeError: Raised when trying to generate concrete names for non-generic models.
    """
    if not issubclass(cls, Generic):
        raise TypeError('Concrete names should only be generated for generic models.')

    # Any strings received should represent forward references, so we handle them specially below.
    # If we eventually move toward wrapping them in a ForwardRef in __class_getitem__ in the future,
    # we may be able to remove this special case.
    param_names = [param if isinstance(param, str) else _repr.display_as_type(param) for param in params]
    params_component = ', '.join(param_names)
    return f'{cls.__name__}[{params_component}]'
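The default naming scheme can be seen by parametrizing a generic model (the `Page` model is a made-up example):

```python
from typing import Generic, TypeVar
from pydantic import BaseModel

T = TypeVar("T")

class Page(BaseModel, Generic[T]):
    items: list[T]

# Parametrizing the generic model applies the default naming scheme:
name = Page[int].__name__
```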

model_post_init(context)

Override this method to perform additional initialization after __init__ and model_construct. This is useful if you want to do some validation that requires the entire model to be initialized.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
def model_post_init(self, context: Any, /) -> None:
    """Override this method to perform additional initialization after `__init__` and `model_construct`.
    This is useful if you want to do some validation that requires the entire model to be initialized.
    """

model_rebuild(*, force=False, raise_errors=True, _parent_namespace_depth=2, _types_namespace=None) classmethod

Try to rebuild the pydantic-core schema for the model.

This may be necessary when one of the annotations is a ForwardRef which could not be resolved during the initial attempt to build the schema, and automatic rebuilding fails.

Parameters:

Name Type Description Default
force bool

Whether to force the rebuilding of the model schema, defaults to False.

False
raise_errors bool

Whether to raise errors, defaults to True.

True
_parent_namespace_depth int

The depth level of the parent namespace, defaults to 2.

2
_types_namespace MappingNamespace | None

The types namespace, defaults to None.

None

Returns:

Type Description
bool | None

Returns None if the schema is already "complete" and rebuilding was not required.

bool | None

If rebuilding was required, returns True if rebuilding was successful, otherwise False.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@classmethod
def model_rebuild(
    cls,
    *,
    force: bool = False,
    raise_errors: bool = True,
    _parent_namespace_depth: int = 2,
    _types_namespace: MappingNamespace | None = None,
) -> bool | None:
    """Try to rebuild the pydantic-core schema for the model.

    This may be necessary when one of the annotations is a ForwardRef which could not be resolved during
    the initial attempt to build the schema, and automatic rebuilding fails.

    Args:
        force: Whether to force the rebuilding of the model schema, defaults to `False`.
        raise_errors: Whether to raise errors, defaults to `True`.
        _parent_namespace_depth: The depth level of the parent namespace, defaults to 2.
        _types_namespace: The types namespace, defaults to `None`.

    Returns:
        Returns `None` if the schema is already "complete" and rebuilding was not required.
        If rebuilding _was_ required, returns `True` if rebuilding was successful, otherwise `False`.
    """
    already_complete = cls.__pydantic_complete__
    if already_complete and not force:
        return None

    cls.__pydantic_complete__ = False

    for attr in ('__pydantic_core_schema__', '__pydantic_validator__', '__pydantic_serializer__'):
        if attr in cls.__dict__ and not isinstance(getattr(cls, attr), _mock_val_ser.MockValSer):
            # Deleting the validator/serializer is necessary as otherwise they can get reused in
            # pydantic-core. We do so only if they aren't mock instances, otherwise — as `model_rebuild()`
            # isn't thread-safe — concurrent model instantiations can lead to the parent validator being used.
            # Same applies for the core schema that can be reused in schema generation.
            delattr(cls, attr)

    if _types_namespace is not None:
        rebuild_ns = _types_namespace
    elif _parent_namespace_depth > 0:
        rebuild_ns = _typing_extra.parent_frame_namespace(parent_depth=_parent_namespace_depth, force=True) or {}
    else:
        rebuild_ns = {}

    parent_ns = _model_construction.unpack_lenient_weakvaluedict(cls.__pydantic_parent_namespace__) or {}

    ns_resolver = _namespace_utils.NsResolver(
        parent_namespace={**rebuild_ns, **parent_ns},
    )

    return _model_construction.complete_model_class(
        cls,
        _config.ConfigWrapper(cls.model_config, check=False),
        ns_resolver,
        raise_errors=raise_errors,
        # If the model was already complete, we don't need to call the hook again.
        call_on_complete_hook=not already_complete,
    )

model_validate(obj, *, strict=None, extra=None, from_attributes=None, context=None, by_alias=None, by_name=None) classmethod

Validate a pydantic model instance.

Parameters:

Name Type Description Default
obj Any

The object to validate.

required
strict bool | None

Whether to enforce types strictly.

None
extra ExtraValues | None

Whether to ignore, allow, or forbid extra data during model validation. See the [extra configuration value][pydantic.ConfigDict.extra] for details.

None
from_attributes bool | None

Whether to extract data from object attributes.

None
context Any | None

Additional context to pass to the validator.

None
by_alias bool | None

Whether to use the field's alias when validating against the provided input data.

None
by_name bool | None

Whether to use the field's name when validating against the provided input data.

None

Raises:

Type Description
ValidationError

If the object could not be validated.

Returns:

Type Description
Self

The validated model instance.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@classmethod
def model_validate(
    cls,
    obj: Any,
    *,
    strict: bool | None = None,
    extra: ExtraValues | None = None,
    from_attributes: bool | None = None,
    context: Any | None = None,
    by_alias: bool | None = None,
    by_name: bool | None = None,
) -> Self:
    """Validate a pydantic model instance.

    Args:
        obj: The object to validate.
        strict: Whether to enforce types strictly.
        extra: Whether to ignore, allow, or forbid extra data during model validation.
            See the [`extra` configuration value][pydantic.ConfigDict.extra] for details.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.
        by_alias: Whether to use the field's alias when validating against the provided input data.
        by_name: Whether to use the field's name when validating against the provided input data.

    Raises:
        ValidationError: If the object could not be validated.

    Returns:
        The validated model instance.
    """
    # `__tracebackhide__` tells pytest and some other tools to omit this function from tracebacks
    __tracebackhide__ = True

    if by_alias is False and by_name is not True:
        raise PydanticUserError(
            'At least one of `by_alias` or `by_name` must be set to True.',
            code='validate-by-alias-and-name-false',
        )

    return cls.__pydantic_validator__.validate_python(
        obj,
        strict=strict,
        extra=extra,
        from_attributes=from_attributes,
        context=context,
        by_alias=by_alias,
        by_name=by_name,
    )
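A sketch of validating a mapping and, via `from_attributes`, an arbitrary attribute-bearing object (both `User` and `Row` are made up for illustration):

```python
from dataclasses import dataclass
from pydantic import BaseModel

class User(BaseModel):
    user_id: str
    name: str

@dataclass
class Row:  # stands in for e.g. an ORM row
    user_id: str
    name: str

u1 = User.model_validate({"user_id": "u1", "name": "Ada"})
# from_attributes reads values from object attributes instead of mapping keys:
u2 = User.model_validate(Row(user_id="u2", name="Bob"), from_attributes=True)
```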

model_validate_json(json_data, *, strict=None, extra=None, context=None, by_alias=None, by_name=None) classmethod

Usage Documentation

JSON Parsing

Validate the given JSON data against the Pydantic model.

Parameters:

Name Type Description Default
json_data str | bytes | bytearray

The JSON data to validate.

required
strict bool | None

Whether to enforce types strictly.

None
extra ExtraValues | None

Whether to ignore, allow, or forbid extra data during model validation. See the [extra configuration value][pydantic.ConfigDict.extra] for details.

None
context Any | None

Extra variables to pass to the validator.

None
by_alias bool | None

Whether to use the field's alias when validating against the provided input data.

None
by_name bool | None

Whether to use the field's name when validating against the provided input data.

None

Returns:

Type Description
Self

The validated Pydantic model.

Raises:

Type Description
ValidationError

If json_data is not a JSON string or the object could not be validated.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@classmethod
def model_validate_json(
    cls,
    json_data: str | bytes | bytearray,
    *,
    strict: bool | None = None,
    extra: ExtraValues | None = None,
    context: Any | None = None,
    by_alias: bool | None = None,
    by_name: bool | None = None,
) -> Self:
    """!!! abstract "Usage Documentation"
        [JSON Parsing](../concepts/json.md#json-parsing)

    Validate the given JSON data against the Pydantic model.

    Args:
        json_data: The JSON data to validate.
        strict: Whether to enforce types strictly.
        extra: Whether to ignore, allow, or forbid extra data during model validation.
            See the [`extra` configuration value][pydantic.ConfigDict.extra] for details.
        context: Extra variables to pass to the validator.
        by_alias: Whether to use the field's alias when validating against the provided input data.
        by_name: Whether to use the field's name when validating against the provided input data.

    Returns:
        The validated Pydantic model.

    Raises:
        ValidationError: If `json_data` is not a JSON string or the object could not be validated.
    """
    # `__tracebackhide__` tells pytest and some other tools to omit this function from tracebacks
    __tracebackhide__ = True

    if by_alias is False and by_name is not True:
        raise PydanticUserError(
            'At least one of `by_alias` or `by_name` must be set to True.',
            code='validate-by-alias-and-name-false',
        )

    return cls.__pydantic_validator__.validate_json(
        json_data, strict=strict, extra=extra, context=context, by_alias=by_alias, by_name=by_name
    )
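A sketch of both the success and failure paths (the `Point` model is made up for illustration):

```python
from pydantic import BaseModel, ValidationError

class Point(BaseModel):
    x: int
    y: int

p = Point.model_validate_json('{"x": 1, "y": 2}')

try:
    Point.model_validate_json('{"x": "oops", "y": 2}')
    errors = []
except ValidationError as exc:
    errors = exc.errors()  # structured error details, including the field location
```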

model_validate_strings(obj, *, strict=None, extra=None, context=None, by_alias=None, by_name=None) classmethod

Validate the given object with string data against the Pydantic model.

Parameters:

Name Type Description Default
obj Any

The object containing string data to validate.

required
strict bool | None

Whether to enforce types strictly.

None
extra ExtraValues | None

Whether to ignore, allow, or forbid extra data during model validation. See the [extra configuration value][pydantic.ConfigDict.extra] for details.

None
context Any | None

Extra variables to pass to the validator.

None
by_alias bool | None

Whether to use the field's alias when validating against the provided input data.

None
by_name bool | None

Whether to use the field's name when validating against the provided input data.

None

Returns:

Type Description
Self

The validated Pydantic model.

Source code in .venv/lib/python3.12/site-packages/pydantic/main.py
@classmethod
def model_validate_strings(
    cls,
    obj: Any,
    *,
    strict: bool | None = None,
    extra: ExtraValues | None = None,
    context: Any | None = None,
    by_alias: bool | None = None,
    by_name: bool | None = None,
) -> Self:
    """Validate the given object with string data against the Pydantic model.

    Args:
        obj: The object containing string data to validate.
        strict: Whether to enforce types strictly.
        extra: Whether to ignore, allow, or forbid extra data during model validation.
            See the [`extra` configuration value][pydantic.ConfigDict.extra] for details.
        context: Extra variables to pass to the validator.
        by_alias: Whether to use the field's alias when validating against the provided input data.
        by_name: Whether to use the field's name when validating against the provided input data.

    Returns:
        The validated Pydantic model.
    """
    # `__tracebackhide__` tells pytest and some other tools to omit this function from tracebacks
    __tracebackhide__ = True

    if by_alias is False and by_name is not True:
        raise PydanticUserError(
            'At least one of `by_alias` or `by_name` must be set to True.',
            code='validate-by-alias-and-name-false',
        )

    return cls.__pydantic_validator__.validate_strings(
        obj, strict=strict, extra=extra, context=context, by_alias=by_alias, by_name=by_name
    )
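A sketch of coercing all-string leaf values, as they might arrive from environment variables or CSV rows (the `Event` model is made up for illustration):

```python
from pydantic import BaseModel

class Event(BaseModel):
    count: int
    active: bool

# Every leaf value is a string; validate_strings coerces them to the field types:
e = Event.model_validate_strings({"count": "3", "active": "true"})
```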

Functions

unique_toolkit.content.functions.search_content_chunks(user_id, company_id, chat_id, search_string, search_type, limit, search_language=DEFAULT_SEARCH_LANGUAGE, reranker_config=None, scope_ids=None, chat_only=None, metadata_filter=None, content_ids=None, score_threshold=None)

Performs a synchronous search for content chunks in the knowledge base.

Parameters:

Name Type Description Default
search_string str

The search string.

required
search_type ContentSearchType

The type of search to perform.

required
limit int

The maximum number of results to return.

required
search_language str

The language for the full-text search. Defaults to "english".

DEFAULT_SEARCH_LANGUAGE
reranker_config ContentRerankerConfig | None

The reranker configuration. Defaults to None.

None
scope_ids list[str] | None

The scope IDs. Defaults to None.

None
chat_only bool | None

Whether to search only in the current chat. Defaults to None.

None
metadata_filter dict | None

UniqueQL metadata filter. If unspecified/None, it tries to use the metadata filter from the event. Defaults to None.

None
content_ids list[str] | None

The content IDs to search. Defaults to None.

None
score_threshold float | None

The minimum score threshold for results. Defaults to None.

None
Source code in unique_toolkit/unique_toolkit/content/functions.py
def search_content_chunks(
    user_id: str,
    company_id: str,
    chat_id: str,
    search_string: str,
    search_type: ContentSearchType,
    limit: int,
    search_language: str = DEFAULT_SEARCH_LANGUAGE,
    reranker_config: ContentRerankerConfig | None = None,
    scope_ids: list[str] | None = None,
    chat_only: bool | None = None,
    metadata_filter: dict | None = None,
    content_ids: list[str] | None = None,
    score_threshold: float | None = None,
) -> list[ContentChunk]:
    """
    Performs a synchronous search for content chunks in the knowledge base.

    Args:
        search_string (str): The search string.
        search_type (ContentSearchType): The type of search to perform.
        limit (int): The maximum number of results to return.
        search_language (str): The language for the full-text search. Defaults to "english".
        reranker_config (ContentRerankerConfig | None): The reranker configuration. Defaults to None.
        scope_ids (list[str] | None): The scope IDs. Defaults to None.
        chat_only (bool | None): Whether to search only in the current chat. Defaults to None.
        metadata_filter (dict | None): UniqueQL metadata filter. If unspecified/None, it tries to use the metadata filter from the event. Defaults to None.
        content_ids (list[str] | None): The content IDs to search. Defaults to None.
        score_threshold (float | None): The minimum score threshold for results. Defaults to None.
    Returns:
        list[ContentChunk]: The search results.
    """
    if not scope_ids:
        logger.warning("No scope IDs provided for search.")

    if content_ids:
        logger.info(f"Searching for content chunks with content_ids: {content_ids}")

    try:
        searches = unique_sdk.Search.create(
            user_id=user_id,
            company_id=company_id,
            chatId=chat_id,
            searchString=search_string,
            searchType=search_type.name,
            scopeIds=scope_ids,
            limit=limit,
            reranker=(
                reranker_config.model_dump(by_alias=True) if reranker_config else None
            ),
            language=search_language,
            chatOnly=chat_only,
            metaDataFilter=metadata_filter,
            contentIds=content_ids,
            scoreThreshold=score_threshold,
        )
        return map_to_content_chunks(searches)
    except Exception as e:
        logger.error(f"Error while searching content chunks: {e}")
        raise e

unique_toolkit.content.functions.search_contents(user_id, company_id, chat_id, where, include_failed_content=False)

Performs a synchronous search for content files in the knowledge base by filter.

Parameters:

Name Type Description Default
user_id str

The user ID.

required
company_id str

The company ID.

required
chat_id str

The chat ID.

required
where dict

The search criteria.

required

Returns:

Type Description
list[Content]

list[Content]: The search results.

Source code in unique_toolkit/unique_toolkit/content/functions.py
def search_contents(
    user_id: str,
    company_id: str,
    chat_id: str,
    where: dict,
    include_failed_content: bool = False,
) -> list[Content]:
    """
    Performs a synchronous search for content files in the knowledge base by filter.

    Args:
        user_id (str): The user ID.
        company_id (str): The company ID.
        chat_id (str): The chat ID.
        where (dict): The search criteria.

    Returns:
        list[Content]: The search results.
    """
    if where.get("contentId"):
        logger.info(f"Searching for content with content_id: {where['contentId']}")

    try:
        contents = unique_sdk.Content.search(
            user_id=user_id,
            company_id=company_id,
            chatId=chat_id,
            # TODO add type parameter in SDK
            where=where,  # type: ignore
            includeFailedContent=include_failed_content,
        )
        return map_contents(contents)
    except Exception as e:
        logger.error(f"Error while searching contents: {e}")
        raise e

unique_toolkit.content.functions.upload_content(user_id, company_id, path_to_content, content_name, mime_type, scope_id=None, chat_id=None, skip_ingestion=False, skip_excel_ingestion=False, ingestion_config=None, metadata=None)

Uploads content to the knowledge base.

Parameters:

Name Type Description Default
user_id str

The user ID.

required
company_id str

The company ID.

required
path_to_content str

The path to the content to upload.

required
content_name str

The name of the content.

required
mime_type str

The MIME type of the content.

required
scope_id str | None

The scope ID. Defaults to None.

None
chat_id str | None

The chat ID. Defaults to None.

None
skip_ingestion bool

Whether to skip ingestion. Defaults to False.

False
skip_excel_ingestion bool

Whether to skip excel ingestion. Defaults to False.

False
ingestion_config IngestionConfig | None

The ingestion configuration. Defaults to None.

None
metadata dict[str, Any] | None

The metadata for the content. Defaults to None.

None

Returns:

Name Type Description
Content Content

The uploaded content.

Source code in unique_toolkit/unique_toolkit/content/functions.py
def upload_content(
    user_id: str,
    company_id: str,
    path_to_content: str,
    content_name: str,
    mime_type: str,
    scope_id: str | None = None,
    chat_id: str | None = None,
    skip_ingestion: bool = False,
    skip_excel_ingestion: bool = False,
    ingestion_config: unique_sdk.Content.IngestionConfig | None = None,
    metadata: dict[str, Any] | None = None,
) -> Content:
    """
    Uploads content to the knowledge base.

    Args:
        user_id (str): The user ID.
        company_id (str): The company ID.
        path_to_content (str): The path to the content to upload.
        content_name (str): The name of the content.
        mime_type (str): The MIME type of the content.
        scope_id (str | None): The scope ID. Defaults to None.
        chat_id (str | None): The chat ID. Defaults to None.
        skip_ingestion (bool): Whether to skip ingestion. Defaults to False.
        skip_excel_ingestion (bool): Whether to skip excel ingestion. Defaults to False.
        ingestion_config (unique_sdk.Content.IngestionConfig | None): The ingestion configuration. Defaults to None.
        metadata ( dict[str, Any] | None): The metadata for the content. Defaults to None.

    Returns:
        Content: The uploaded content.
    """

    try:
        return _trigger_upload_content(
            user_id=user_id,
            company_id=company_id,
            content=path_to_content,
            content_name=content_name,
            mime_type=mime_type,
            scope_id=scope_id,
            chat_id=chat_id,
            skip_ingestion=skip_ingestion,
            skip_excel_ingestion=skip_excel_ingestion,
            ingestion_config=ingestion_config,
            metadata=metadata,
        )
    except Exception as e:
        logger.error(f"Error while uploading content: {e}")
        raise e

unique_toolkit.content.functions.download_content(user_id, company_id, content_id, content_name, chat_id=None, dir_path='/tmp')

Downloads content to a temporary directory

Parameters:

Name Type Description Default
user_id str

The user ID.

required
company_id str

The company ID.

required
content_id str

The id of the uploaded content.

required
content_name str

The name of the uploaded content.

required
chat_id str | None

The chat_id, defaults to None.

None
dir_path str | Path

The directory to download the content into, defaults to "/tmp". A uniquely named subdirectory is created inside it for each download. Be aware that this directory won't be cleaned up automatically.

'/tmp'

Returns:

Name Type Description
content_path Path

The path to the downloaded content in the temporary directory.

Raises:

Type Description
Exception

If the download fails.

Source code in unique_toolkit/unique_toolkit/content/functions.py
def download_content(
    user_id: str,
    company_id: str,
    content_id: str,
    content_name: str,
    chat_id: str | None = None,
    dir_path: str | Path | None = "/tmp",
) -> Path:
    """
    Downloads content to a temporary directory

    Args:
        user_id (str): The user ID.
        company_id (str): The company ID.
        content_id (str): The id of the uploaded content.
        content_name (str): The name of the uploaded content.
        chat_id (str | None): The chat_id, defaults to None.
        dir_path (str | Path): The directory path to download the content to, defaults to "/tmp". If not provided, the content will be downloaded to a random directory inside /tmp. Be aware that this directory won't be cleaned up automatically.

    Returns:
        content_path: The path to the downloaded content in the temporary directory.

    Raises:
        Exception: If the download fails.
    """

    logger.info(f"Downloading content with content_id: {content_id}")
    response = request_content_by_id(user_id, company_id, content_id, chat_id)

    random_dir = tempfile.mkdtemp(dir=dir_path)
    content_path = Path(random_dir) / content_name

    if response.status_code == 200:
        with open(content_path, "wb") as file:
            file.write(response.content)
    else:
        error_msg = f"Error downloading file: Status code {response.status_code}"
        logger.error(error_msg)
        raise Exception(error_msg)

    return content_path
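The directory handling can be reproduced with the standard library alone; a sketch of the same `tempfile.mkdtemp` pattern, where `payload` stands in for `response.content`:

```python
import tempfile
from pathlib import Path

payload = b"example bytes"   # stands in for response.content
content_name = "report.pdf"

# A fresh, uniquely named directory is always created inside dir_path;
# it is not cleaned up automatically.
random_dir = tempfile.mkdtemp(dir=tempfile.gettempdir())
content_path = Path(random_dir) / content_name
content_path.write_bytes(payload)
```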

Utilities

unique_toolkit.content.utils.sort_content_chunks(content_chunks)

Sorts the content chunks by their 'order' in the original content, in ascending order. It also modifies each chunk's text: the closing <|/content|> tag is prefixed with 'text part {order}' and any <|info|> tags are removed (which is useful when referencing the chunk).

Parameters: - content_chunks (list): A list of ContentChunk objects.

Returns: - list: A list of ContentChunk objects sorted according to their order.

Source code in unique_toolkit/unique_toolkit/content/utils.py
def sort_content_chunks(content_chunks: list[ContentChunk]):
    """
    Sorts the content chunks based on their 'order' in the original content.
    This function sorts the search results based on their 'order' in ascending order.
    It also performs text modifications by replacing the string within the tags <|/content|>
    with 'text part {order}' and removing any <|info|> tags (Which is useful in referencing the chunk).
    Parameters:
    - content_chunks (list): A list of ContentChunk objects.
    Returns:
    - list: A list of ContentChunk objects sorted according to their order.
    """
    doc_id_to_chunks = _map_content_id_to_chunks(content_chunks)
    sorted_chunks: list[ContentChunk] = []
    for chunks in doc_id_to_chunks.values():
        chunks.sort(key=lambda x: x.order)
        for i, s in enumerate(chunks):
            s.text = re.sub(
                r"<\|/content\|>", f" text part {s.order}<|/content|>", s.text
            )
            s.text = re.sub(r"<\|info\|>(.*?)<\|\/info\|>", "", s.text)
            pages_postfix = _generate_pages_postfix([s])
            s.key = s.key + pages_postfix if s.key else s.key
            s.title = s.title + pages_postfix if s.title else s.title
        sorted_chunks.extend(chunks)
    return sorted_chunks
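The text modifications can be illustrated in isolation; a sketch of the two substitutions applied to a single chunk (the tag contents are made up):

```python
import re

order = 2
text = "<|info|>doc.pdf, page 3<|/info|>Quarterly revenue grew.<|/content|>"

# Prefix the closing content tag with the chunk's order...
text = re.sub(r"<\|/content\|>", f" text part {order}<|/content|>", text)
# ...and strip any <|info|> tags.
text = re.sub(r"<\|info\|>(.*?)<\|\/info\|>", "", text)
```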

unique_toolkit.content.utils.merge_content_chunks(content_chunks)

Merges multiple search results based on their 'id', removing redundant content and info markers.

This function groups search results by their 'id' and then concatenates their texts, cleaning up any content or info markers in subsequent chunks beyond the first one.

Parameters:

- content_chunks (list): A list of objects, each representing a search result with 'id' and 'text' keys.

Returns:

- list: A list of objects with merged texts for each unique 'id'.

Source code in unique_toolkit/unique_toolkit/content/utils.py
def merge_content_chunks(content_chunks: list[ContentChunk]):
    """
    Merges multiple search results based on their 'id', removing redundant content and info markers.

    This function groups search results by their 'id' and then concatenates their texts,
    cleaning up any content or info markers in subsequent chunks beyond the first one.

    Parameters:
    - content_chunks (list): A list of objects, each representing a search result with 'id' and 'text' keys.

    Returns:
    - list: A list of objects with merged texts for each unique 'id'.
    """

    doc_id_to_chunks = _map_content_id_to_chunks(content_chunks)
    merged_chunks: list[ContentChunk] = []
    for chunks in doc_id_to_chunks.values():
        chunks.sort(key=lambda x: x.order)
        for i, s in enumerate(chunks):
            ## skip first element
            if i > 0:
                ## replace the string within the tags <|content|>...<|/content|> and <|info|> and <|/info|>
                s.text = re.sub(r"<\|content\|>(.*?)<\|\/content\|>", "", s.text)
                s.text = re.sub(r"<\|info\|>(.*?)<\|\/info\|>", "", s.text)

        pages_postfix = _generate_pages_postfix(chunks)
        chunks[0].text = "\n".join(str(s.text) for s in chunks)
        chunks[0].key = (
            chunks[0].key + pages_postfix if chunks[0].key else chunks[0].key
        )
        chunks[0].title = (
            chunks[0].title + pages_postfix if chunks[0].title else chunks[0].title
        )
        chunks[0].end_page = chunks[-1].end_page
        merged_chunks.append(chunks[0])

    return merged_chunks
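The merging behavior can be sketched with a self-contained example. As above, `Chunk` is a hypothetical stand-in for `ContentChunk`; the example assumes the `<|content|>...<|/content|>` markers wrap per-chunk metadata, so stripping them from every chunk after the first removes the redundant headers before the texts are joined.

```python
import re
from dataclasses import dataclass


@dataclass
class Chunk:
    # Hypothetical stand-in for ContentChunk with only the fields used here.
    id: str
    order: int
    text: str


def merge_chunks(chunks: list[Chunk]) -> list[Chunk]:
    # Group by document id and sort each group by 'order'.
    by_doc: dict[str, list[Chunk]] = {}
    for c in chunks:
        by_doc.setdefault(c.id, []).append(c)
    merged: list[Chunk] = []
    for group in by_doc.values():
        group.sort(key=lambda c: c.order)
        for i, c in enumerate(group):
            if i > 0:
                # Strip redundant content/info markers from all but the first chunk.
                c.text = re.sub(r"<\|content\|>(.*?)<\|/content\|>", "", c.text)
                c.text = re.sub(r"<\|info\|>(.*?)<\|/info\|>", "", c.text)
        # Concatenate the cleaned texts onto the first chunk of the group.
        group[0].text = "\n".join(c.text for c in group)
        merged.append(group[0])
    return merged


merged = merge_chunks([
    Chunk("doc1", 1, "<|content|>doc.pdf<|/content|> part two"),
    Chunk("doc1", 0, "<|content|>doc.pdf<|/content|> part one"),
])
```

Each unique document id yields exactly one merged chunk, with a single surviving content marker and the texts joined in order.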

unique_toolkit.content.utils.count_tokens(text, encoding_model='cl100k_base')

Deprecated: use unique_toolkit._common.token.count_tokens(text, model_info) instead.

Counts the number of tokens in the provided text.

This function encodes the input text using a predefined encoding scheme and returns the number of tokens in the encoded text.

Parameters:

- text (str): The text to count tokens for.
- encoding_model (str): The tiktoken encoding to use (defaults to 'cl100k_base').

Returns:

- int: The number of tokens in the text.

Source code in unique_toolkit/unique_toolkit/content/utils.py
@deprecated("Use unique_toolkit._common.token.count_tokens(text, model_info) instead.")
def count_tokens(text: str, encoding_model="cl100k_base") -> int:
    """
    Counts the number of tokens in the provided text.

    This function encodes the input text using a predefined encoding scheme
    and returns the number of tokens in the encoded text.

    Parameters:
    - text (str): The text to count tokens for.

    Returns:
    - int: The number of tokens in the text.
    """
    encoding = tiktoken.get_encoding(encoding_model)
    return len(encoding.encode(text))
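When tiktoken is not available, a rough character-based heuristic is sometimes used as a stand-in: English text averages roughly four characters per cl100k_base token. The function below is purely illustrative and is not part of unique_toolkit; it only approximates what the real BPE-based `count_tokens` computes.

```python
def approx_token_count(text: str) -> int:
    # Illustrative heuristic only: ~4 characters per token for English text.
    # The real count_tokens uses tiktoken's cl100k_base BPE encoding.
    return max(1, len(text) // 4)


approx = approx_token_count("The quick brown fox jumps over the lazy dog.")
```

Such an estimate is adequate for coarse budgeting (e.g. deciding whether a chunk fits a context window with margin) but should never be used for exact limits.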