navigation_title: "Extract and transform"
mapped_pages:
Elastic connectors offer a number of tools for extracting, filtering, and transforming content from your third-party data sources. Each connector has its own default logic, specific to the data source, and every Elasticsearch deployment uses a default ingest pipeline to extract and transform data. Several additional tools are available for more advanced use cases.
The following diagram provides an overview of how content extraction, sync rules, and ingest pipelines can be orchestrated in your connector’s data pipeline.
:::{image} images/pipelines-extraction-sync-rules.png
:alt: Architecture diagram of data pipeline with content extraction
:class: screenshot
:::
By default, only the connector-specific logic (2) and the default `search-default-ingestion` pipeline (6) extract and transform your data, as configured in your deployment.
The following tools are available for more advanced use cases. Learn more in the documentation linked from each section below.
Connectors have a default content extraction service, plus a self-hosted extraction service for advanced use cases.
Refer to Content extraction for details.
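For self-managed connectors, the self-hosted extraction service is enabled in the connector service's configuration file. The snippet below is a minimal sketch of that setup; the `extraction_service.host` key and the port are assumptions based on a typical local deployment, so treat the Content extraction documentation as the authoritative reference for your version.

```yaml
# config.yml for a self-managed connector service (illustrative sketch).
# The extraction_service.host key and the port shown here are assumptions;
# verify them against the Content extraction documentation.
extraction_service:
  host: http://localhost:8090
```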
Use sync rules to help control which documents are synced between the third-party data source and Elasticsearch. Sync rules enable you to filter data early in the data pipeline, which is more efficient and secure.
Refer to Sync rules for details.
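For example, basic sync rules are an ordered list of include/exclude rules that are evaluated against each document during a sync. The JSON below is an illustrative sketch of that shape only; the field name `file_extension` is invented for this example, and the exact schema is described under Sync rules.

```json
[
  {
    "id": "exclude-css",
    "order": 0,
    "policy": "exclude",
    "rule": "equals",
    "field": "file_extension",
    "value": "css"
  },
  {
    "id": "DEFAULT",
    "order": 1,
    "policy": "include",
    "rule": "regex",
    "field": "_",
    "value": ".*"
  }
]
```

Rules are applied in order, with a catch-all include rule typically last, so anything not explicitly excluded is still synced.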
An ingest pipeline is a user-defined sequence of processors that modify documents before they are indexed into Elasticsearch. Use ingest pipelines for data enrichment, normalization, and more.
Elastic connectors use a default ingest pipeline, which you can copy and customize to meet your needs.
Refer to ingest pipelines in Search in the {{es}} documentation.
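As a minimal sketch, the request below creates a custom pipeline with two standard processors; the pipeline name `search-default-ingestion-custom` and the `title` field are placeholders for this example, not values defined by the default pipeline.

```console
PUT _ingest/pipeline/search-default-ingestion-custom
{
  "description": "Custom copy of the default connector pipeline with extra enrichment",
  "processors": [
    {
      "set": {
        "field": "ingested_at",
        "value": "{{{_ingest.timestamp}}}"
      }
    },
    {
      "lowercase": {
        "field": "title",
        "ignore_missing": true
      }
    }
  ]
}
```

You can then attach the pipeline to your connector's index, for example by setting `index.default_pipeline`, so that synced documents pass through it at ingest time.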