2024-05-20T17:30:49.148Z
{ "contentType": null, "productName": "MongoDB Atlas", "tags": [ "atlas", "docs" ], "version": null }
created
snooty-cloud-docs
# View Database Access History

- This feature is not available for `M0` free clusters, `M2`, and `M5` clusters. To learn more, see Atlas M0 (Free Cluster), M2, and M5 Limits.
- This feature is not supported on Serverless instances at this time. To learn more, see Serverless Instance Limitations.

## Overview

Atlas parses the MongoDB database logs to collect a list of authentication requests made against your clusters through the following methods:

- `mongosh`
- Compass
- Drivers

Authentication requests made with API Keys through the Atlas Administration API are not logged.

Atlas logs the following information for each authentication request within the last 7 days:

<table>
  <tr>
    <th id="Field"> Field </th>
    <th id="Description"> Description </th>
  </tr>
  <tr>
    <td headers="Field"> Timestamp </td>
    <td headers="Description"> The date and time of the authentication request. </td>
  </tr>
  <tr>
    <td headers="Field"> Username </td>
    <td headers="Description"> The username associated with the database user who made the authentication request. For LDAP usernames, the UI displays the resolved LDAP name. Hover over the name to see the full LDAP username. </td>
  </tr>
  <tr>
    <td headers="Field"> IP Address </td>
    <td headers="Description"> The IP address of the machine that sent the authentication request. </td>
  </tr>
  <tr>
    <td headers="Field"> Host </td>
    <td headers="Description"> The target server that processed the authentication request. </td>
  </tr>
  <tr>
    <td headers="Field"> Authentication Source </td>
    <td headers="Description"> The database that the authentication request was made against. `admin` is the authentication source for SCRAM-SHA users and `$external` for LDAP users. </td>
  </tr>
  <tr>
    <td headers="Field"> Authentication Result </td>
    <td headers="Description"> The success or failure of the authentication request. A reason code is displayed for failed authentication requests. </td>
  </tr>
</table>

Authentication requests are pre-sorted by descending timestamp with 25 entries per page.

### Logging Limitations

If a cluster experiences an activity spike and generates an extremely large quantity of log messages, Atlas may stop collecting and storing new logs for a period of time. Log analysis rate limits apply only to the Performance Advisor UI, the Query Insights UI, the Access Tracking UI, and the Atlas Search Query Analytics UI. Downloadable log files are always complete. If authentication requests occur during a period when logs are not collected, they will not appear in the database access history.

## Required Access

To view database access history, you must have `Project Owner` or `Organization Owner` access to Atlas.

## Procedure

<Tabs>
<Tab name="Atlas CLI">

To return the access logs for a cluster using the Atlas CLI, run the following command:

```sh
atlas accessLogs list [options]
```

To learn more about the command syntax and parameters, see the Atlas CLI documentation for atlas accessLogs list.

- Install the Atlas CLI
- Connect to the Atlas CLI

</Tab>
<Tab name="Atlas Administration API">

To view the database access history using the API, see Access Tracking.

</Tab>
<Tab name="Atlas UI">

Use the following procedure to view your database access history using the Atlas UI:

### Navigate to the Clusters page for your project.

- If it is not already displayed, select the organization that contains your desired project from the Organizations menu in the navigation bar.
- If it is not already displayed, select your desired project from the Projects menu in the navigation bar.
- If the Clusters page is not already displayed, click Database in the sidebar.

### View the cluster's database access history.

- On the cluster card, click .
- Select View Database Access History.

or

- Click the cluster name.
- Click .
- Select View Database Access History.

</Tab>
</Tabs>
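For scripted access outside the Atlas CLI, the same history is exposed through the Access Tracking resource of the Atlas Administration API. The sketch below only builds the request URL and query string; the endpoint path, the parameter names (`authResult`, `start`, `end`), and the `cloud.mongodb.com` base are assumptions to verify against the Access Tracking API reference before use.

```python
from urllib.parse import urlencode

# Assumed base URL and resource path for the Access Tracking endpoint.
ATLAS_API = "https://cloud.mongodb.com/api/atlas/v2"

def access_history_url(project_id, cluster_name, auth_result=None,
                       start_ms=None, end_ms=None):
    """Build the URL for a cluster's database access history.

    auth_result: True for successes only, False for failures only, None for both.
    start_ms/end_ms: optional epoch-millisecond bounds within the 7-day window.
    """
    url = f"{ATLAS_API}/groups/{project_id}/dbAccessHistory/clusters/{cluster_name}"
    params = {}
    if auth_result is not None:
        params["authResult"] = "true" if auth_result else "false"
    if start_ms is not None:
        params["start"] = start_ms
    if end_ms is not None:
        params["end"] = end_ms
    return url + ("?" + urlencode(params) if params else "")

# Example: request only failed authentication attempts.
failed_url = access_history_url("5f1a2b3c", "Cluster0", auth_result=False)
```

You would send the resulting URL with an HTTP client authenticated via digest auth using your programmatic API keys, which must satisfy the API's IP access list if one is required.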
https://mongodb.com/docs/atlas/access-tracking/
md
View Database Access History
# Manage Organization Teams

You can create teams at the organization level and add teams to projects to grant project access roles to multiple users. Add any number of organization users to a team. Grant a team roles for specific projects. All members of a team share the same project access. Organization users can belong to multiple teams.

To add teams to a project or edit team roles, see Manage Access to a Project.

## Required Access

To perform any of the following actions, you must have `Organization Owner` access to Atlas.

## Create a Team

Atlas limits the number of teams to a maximum of 100 teams per project and a maximum of 250 teams per organization.

<Tabs>
<Tab name="Atlas CLI">

To create one team in your organization using the Atlas CLI, run the following command:

```sh
atlas teams create <name> [options]
```

To learn more about the command syntax and parameters, see the Atlas CLI documentation for atlas teams create.

- Install the Atlas CLI
- Connect to the Atlas CLI

To add users to your team, see Add Team Members.

</Tab>
<Tab name="Atlas UI">

To create a team using the Atlas UI:

### Navigate to the Access Manager page for your organization.

- If it is not already displayed, select your desired organization from the Organizations menu in the navigation bar.
- Click Access Manager in the sidebar, or click Access Manager in the navigation bar, then click your organization.

### Click Create Team.

### Enter a name for the team in the Name Your Team box.

The name must be unique within an organization.

### Add team members.

To add existing organization users to the team, click in the Add Members box and either start typing their Atlas username or click the name of a user that appears in the combo box.

### Click Create Team.
</Tab>
</Tabs>

## View Teams

<Tabs>
<Tab name="Atlas CLI">

To list all teams in your organization using the Atlas CLI, run the following command:

```sh
atlas teams list [options]
```

To return the details for the team you specify using the Atlas CLI, run the following command:

```sh
atlas teams describe [options]
```

To learn more about the syntax and parameters for the previous commands, see the Atlas CLI documentation for atlas teams list and atlas teams describe.

- Install the Atlas CLI
- Connect to the Atlas CLI

</Tab>
<Tab name="Atlas UI">

To view your teams using the Atlas UI:

### Navigate to the Access Manager page for your organization.

- If it is not already displayed, select your desired organization from the Organizations menu in the navigation bar.
- Click Access Manager in the sidebar, or click Access Manager in the navigation bar, then click your organization.

### Click the Teams tab.

Your teams display.

</Tab>
</Tabs>

## Add Team Members

Atlas limits Atlas user membership to a maximum of 250 Atlas users per team.

<Tabs>
<Tab name="Atlas CLI">

To add one user to the team you specify using the Atlas CLI, run the following command:

```sh
atlas teams users add <userId>... [options]
```

To learn more about the command syntax and parameters, see the Atlas CLI documentation for atlas teams users add.

- Install the Atlas CLI
- Connect to the Atlas CLI

</Tab>
<Tab name="Atlas UI">

To add team members using the Atlas UI:

### Navigate to the Access Manager page for your organization.

- If it is not already displayed, select your desired organization from the Organizations menu in the navigation bar.
- Click Access Manager in the sidebar, or click Access Manager in the navigation bar, then click your organization.

### Click the Teams tab.

### Click the name of the team you want to modify.

### Add members to the team.

- Click Add Members.
- Type the name or email of the user from the combo box.
You can add users that are part of the organization or users that have been sent an invitation to join the organization.

- Click Add Members.

</Tab>
</Tabs>

## Remove Team Members

<Tabs>
<Tab name="Atlas CLI">

To delete one user from the team you specify using the Atlas CLI, run the following command:

```sh
atlas teams users delete <userId> [options]
```

To learn more about the command syntax and parameters, see the Atlas CLI documentation for atlas teams users delete.

- Install the Atlas CLI
- Connect to the Atlas CLI

</Tab>
<Tab name="Atlas UI">

To remove team members using the Atlas UI:

### Navigate to the Access Manager page for your organization.

- If it is not already displayed, select your desired organization from the Organizations menu in the navigation bar.
- Click Access Manager in the sidebar, or click Access Manager in the navigation bar, then click your organization.

### Click the Teams tab.

### Click the name of the team you want to modify.

### Remove members from the team.

Click to the right of the user you want to remove from a team.

Removing a member from the team removes the user's project assignments granted by the team membership. If a user is assigned to a project through both a team and individual assignment, removing the user from a team does not remove the user's assignment to that project.

</Tab>
</Tabs>

## Rename a Team

You can't rename a team using the Atlas CLI.

### Navigate to the Access Manager page for your organization.

- If it is not already displayed, select your desired organization from the Organizations menu in the navigation bar.
- Click Access Manager in the sidebar, or click Access Manager in the navigation bar, then click your organization.

### Click the Teams tab.

### Rename the team.

For the team you want to rename:

- Click the ellipsis (`...`) button under the Actions column.
- Click Rename Team.
- Enter a new name for the team. The team name must be unique within the organization.
- Click Rename Team.
## Delete a Team

<Tabs>
<Tab name="Atlas CLI">

To delete one team from your organization using the Atlas CLI, run the following command:

```sh
atlas teams delete <teamId> [options]
```

To learn more about the command syntax and parameters, see the Atlas CLI documentation for atlas teams delete.

- Install the Atlas CLI
- Connect to the Atlas CLI

</Tab>
<Tab name="Atlas UI">

To delete a team using the Atlas UI:

### Navigate to the Access Manager page for your organization.

- If it is not already displayed, select your desired organization from the Organizations menu in the navigation bar.
- Click Access Manager in the sidebar, or click Access Manager in the navigation bar, then click your organization.

### Click the Teams tab.

### Delete the team.

For the team you want to delete:

- Click the ellipsis (`...`) button under the Actions column.
- Click Delete Team.
- Confirm that you wish to proceed with team deletion.

For users belonging to the team, deleting a team removes the users' project assignments granted by that team membership.

</Tab>
</Tabs>

## Next Steps

For the organization users in a team to have access to a project, you must add the team to the project. To add teams to a project or edit team roles, see Manage Access to a Project.
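The limits named on this page (100 teams per project, 250 teams per organization, 250 Atlas users per team, names unique within an organization) are easy to pre-check in automation before calling `atlas teams create`. The following sketch is purely illustrative; the helper and its inputs are hypothetical, not part of any Atlas tooling:

```python
# Limits documented for Atlas teams.
MAX_TEAMS_PER_ORG = 250
MAX_USERS_PER_TEAM = 250

def validate_new_team(name, member_count, existing_org_team_names):
    """Return a list of reasons the team cannot be created; empty means OK."""
    problems = []
    if name in existing_org_team_names:
        problems.append("team name must be unique within the organization")
    if len(existing_org_team_names) >= MAX_TEAMS_PER_ORG:
        problems.append("organization already has the maximum of 250 teams")
    if member_count > MAX_USERS_PER_TEAM:
        problems.append("a team can have at most 250 Atlas users")
    return problems

print(validate_new_team("analytics", 3, {"platform", "analytics"}))
# A duplicate team name is reported; a fresh name with a few members passes.
```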
https://mongodb.com/docs/atlas/access/manage-teams-in-orgs/
Manage Organization Teams
# Manage Organizations

In the organizations and projects hierarchy, an organization can contain multiple projects (previously referred to as groups). Under this structure:

- Billing happens at the organization level while preserving visibility into usage in each project.
- You can view all projects within an organization.
- You can use teams to bulk assign organization users to projects within the organization.

If you need to scale beyond the existing project limits, you can create multiple organizations.

## Create an Organization

When you create an organization, you are added as an `Organization Owner` for the organization.

### View all of your organizations.

- Expand the Organizations menu in the navigation bar.
- Click View All Organizations.

### Click New Organization.

### Enter the name for your organization.

Don't include sensitive information in your organization name.

### Select Atlas and click Next.

You have the option of adding a new Cloud Manager organization or a new Atlas organization. For more information on Cloud Manager, see the documentation.

### Add members.

- For existing Atlas users, enter their username. Usually, this is the email the person used to register.
- For new Atlas users, enter their email address to send an invitation.

### Specify the access for the members.

### (Optional) Disable the IP access list requirement for the Atlas Administration API.

When you create a new organization with the Atlas UI, Atlas requires IP access lists for the Atlas Administration API by default. If you require an IP access list, your Atlas Administration API keys can make API requests only from the location-based IP or CIDR (Classless Inter-Domain Routing) addresses that you specify in the IP access list.

To disable the IP access list requirement and allow your Atlas Administration API keys to make requests from any address on the internet, toggle Require IP Access List for the Atlas Administration API to OFF.
To learn more, see Optional: Require an IP Access List for the Atlas Administration API.

### Click Create Organization.

## View Organizations

<Tabs>
<Tab name="Atlas CLI">

To list all organizations using the Atlas CLI, run the following command:

```sh
atlas organizations list [options]
```

To return the details for the organization you specify using the Atlas CLI, run the following command:

```sh
atlas organizations describe <ID> [options]
```

To learn more about the syntax and parameters for the previous commands, see the Atlas CLI documentation for atlas organizations list and atlas organizations describe.

- Install the Atlas CLI
- Connect to the Atlas CLI

</Tab>
<Tab name="Atlas UI">

### Expand the Organizations menu in the navigation bar.

### Click View All Organizations.

</Tab>
</Tabs>

## Leave an Organization

To leave an organization, at least one other user must exist as an `Organization Owner` for the organization.

### View all of your organizations.

- Expand the Organizations menu in the navigation bar.
- Click View All Organizations.

### Leave the organization.

For the organization you wish to leave, click its Leave button to bring up the Leave Organization dialog.

### Click Leave Organization in the Leave Organization dialog.

## Rename an Organization

You must have the `Organization Owner` role for an organization to rename it.

### Navigate to the Settings page for your organization.

- If it is not already displayed, select your desired organization from the Organizations menu in the navigation bar.
- Click the Organization Settings icon next to the Organizations menu.

### Click next to the organization name.

### Enter the new name for the organization.

### Click Save.

## Delete an Organization

To delete an organization, you must have the `Organization Owner` role for the organization.

You can't delete an organization that has active projects. You must delete the organization's projects before you can delete the organization.
You can't delete an organization with outstanding payments. To learn more, see Troubleshoot Invoices and Payments.

If you have a Backup Compliance Policy enabled, you can't delete a project if any snapshots exist. If you can't remove all projects, you can't delete the organization.

<Tabs>
<Tab name="Atlas CLI">

To delete an organization using the Atlas CLI, run the following command:

```sh
atlas organizations delete <ID> [options]
```

To learn more about the command syntax and parameters, see the Atlas CLI documentation for atlas organizations delete.

- Install the Atlas CLI
- Connect to the Atlas CLI

</Tab>
<Tab name="Atlas UI">

### Navigate to the Settings page for your organization.

- If it is not already displayed, select your desired organization from the Organizations menu in the navigation bar.
- Click the Organization Settings icon next to the Organizations menu.

### In the General Settings tab, click Delete.

This displays the Delete Organization dialog.

### Click Delete Organization to confirm.

</Tab>
</Tabs>
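The deletion rules above reduce to a short precondition check. Here is a hypothetical sketch of that logic, the kind of guard a cleanup script might run before attempting `atlas organizations delete`; the function and its inputs are illustrative, not Atlas code:

```python
def can_delete_organization(is_org_owner, active_project_count, has_outstanding_payments):
    """Apply the documented preconditions for deleting an Atlas organization."""
    if not is_org_owner:
        return False, "requires the Organization Owner role"
    if active_project_count > 0:
        return False, "delete the organization's projects first"
    if has_outstanding_payments:
        return False, "settle outstanding payments first"
    return True, "ok"

# An org with two active projects is not deletable yet.
print(can_delete_organization(True, 2, False))
```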
https://mongodb.com/docs/atlas/access/orgs-create-view-edit-delete/
Manage Organizations
# Alert Basics

Atlas provides built-in tools, alerts, charts, integrations, and logs to help you monitor your clusters. Atlas provides alerts to help you monitor your clusters and improve performance in the following ways:

1. A variety of conditions can trigger an alert.
2. You can configure alert settings based on specific conditions for your databases, users, accounts, and more.
3. When you resolve alerts, you can fix the immediate problem, implement a long-term solution, and monitor your progress.

Atlas issues alerts for the database and server conditions configured in your alert settings. When a condition triggers an alert, Atlas displays a warning symbol on the cluster and sends alert notifications. Your alert settings determine the notification methods. Atlas continues sending notifications at regular intervals until the condition resolves or you delete or disable the alert.

## Useful Metrics and Alert Conditions

When you configure alerts, you specify alert conditions and thresholds. Review the possible alert conditions for which you can trigger alerts related to your clusters. `M0` free clusters and `M2/M5` shared clusters only trigger alerts related to the metrics supported by those clusters. See Atlas M0 (Free Cluster), M2, and M5 Limits for complete documentation on `M0/M2/M5` alert and metric limitations.

Consistently monitor metrics to help ensure efficient clusters.

### Tickets Available

These alert conditions help you monitor the number of concurrent read or write operations that can occur. When all tickets are claimed, operations must wait and enter the queue. You can view these metrics on the Tickets Available chart, accessed through cluster monitoring. To learn more, see the Tickets Available alert conditions.

### Queues

These alert conditions measure operations waiting on locks. You can view these metrics on the Queues chart, accessed through cluster monitoring. To learn more, see the Queues alert conditions.
### CPU Steal

AWS EC2 clusters that support Burstable Performance might experience CPU steal when using shared CPU cores. This alert condition measures the percentage by which the CPU usage exceeds the guaranteed baseline CPU credit accumulation rate.

CPU credits are units of CPU utilization that you accumulate. The credits accumulate at a constant rate to provide a guaranteed level of performance. These credits can be used for additional CPU performance. When the credit balance is exhausted, only the guaranteed baseline of CPU performance is provided, and the amount of excess is shown as steal percent.

You can view CPU usage on the Normalized System CPU chart, accessed through cluster monitoring. To learn more, see the `System: CPU (Steal) % is` alert condition.

### Query Targeting

Properly configured indexes can significantly improve query performance. These alert conditions help identify inefficient queries. Too many indexes can impact write performance. You can view these metrics on the Query Targeting chart, accessed through cluster monitoring. To learn more, see the Query Targeting alert conditions.

### Connection Limits

Each Atlas instance has a connection limit. These alert conditions help you proactively address scaling needs or potential issues related to connection availability. You can view these metrics on the Connections chart, accessed through cluster monitoring. To learn more, see the Connection alert conditions.

## Configure Alerts

To set which conditions trigger alerts and how users are notified, Configure Alert Settings. You can configure alerts at the organization or project level. Atlas provides default alerts at the project level. You can clone existing alerts and configure maintenance window alerts.

Experiment with alert condition values based on your specific requirements. Periodically reassess these values for optimal performance.
### Tickets Available

Configure the alert settings to send an alert if these metrics drop below 30 for at least a few minutes. You want to avoid false positives triggered by relatively harmless short-term drops, but catch issues when these metrics stay low for a while. To configure these alert conditions, see Configure Alert Settings.

### Queues

Configure the alert settings to send an alert if these metrics rise above 100 for a minute. You want to avoid false positives triggered by relatively harmless short-term spikes, but catch issues when these metrics stay elevated for a while. To configure these alert conditions, see Configure Alert Settings.

### CPU Steal

Configure the alert settings to send an alert if this metric rises above 10%. To configure this alert condition, see Configure Alert Settings.

### Query Targeting

Configure the alert settings to send an alert if this metric rises above 50 or 100. To configure these alert conditions, see Configure Alert Settings.

### Connection Limits

Configure the alert settings to send an alert if the Connection % of the configured limit rises above 80% or 90%. To configure these alert conditions, see Configure Alert Settings.

## Resolve Alerts

When a condition triggers an alert, Atlas displays a warning symbol on the cluster and sends alert notifications. Resolve these alerts and work to prevent alert conditions from occurring in the future. To learn how to fix the immediate problem, implement a long-term solution, and monitor your progress, see Resolve Alerts.

### Tickets Available

Tickets Available alerts can help you detect queries that took a little longer than expected due to load. Increasing your instance size, or sometimes disk speed, can help these metrics.

### Queues

Queues alerts can help you detect queries that took a little longer than expected due to load. Increasing your instance size, or sometimes disk speed, can help these metrics.
### CPU Steal

The `System: CPU (Steal) % is` alert occurs when the CPU usage exceeds the guaranteed baseline CPU credit accumulation rate by the specified threshold. To learn more, see Fix CPU Usage Issues.

### Query Targeting

Query Targeting alerts often indicate inefficient queries. To learn more, see Fix Query Issues.

### Connection Limits

Connection alerts typically occur when the maximum number of allowable connections to a MongoDB process has been exceeded. Once the limit is exceeded, no new connections can be opened until the number of open connections drops below the limit. To learn more, see Fix Connection Issues.

## Alerts Workflow

When an alert condition is met, the alert lifecycle begins. To learn more, see the Alerts Workflow.
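The suggested starting thresholds on this page can be collected into a single table and checked against a metrics snapshot. This is only an illustrative model of the recommendations (real Atlas alerting also applies sustained-duration windows, which are omitted here), and the metric names are hypothetical:

```python
# Suggested starting thresholds from this page.
THRESHOLDS = {
    "tickets_available": ("below", 30),
    "queues": ("above", 100),
    "cpu_steal_pct": ("above", 10),
    "query_targeting": ("above", 50),
    "connections_pct": ("above", 80),
}

def triggered_alerts(metrics):
    """Return the metric names whose values cross their suggested threshold."""
    fired = []
    for name, (direction, limit) in THRESHOLDS.items():
        value = metrics.get(name)
        if value is None:
            continue
        if direction == "below" and value < limit:
            fired.append(name)
        elif direction == "above" and value > limit:
            fired.append(name)
    return fired

snapshot = {"tickets_available": 12, "queues": 5, "cpu_steal_pct": 2,
            "query_targeting": 800, "connections_pct": 85}
print(triggered_alerts(snapshot))
# tickets_available, query_targeting, and connections_pct cross their thresholds here.
```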
https://mongodb.com/docs/atlas/alert-basics/
Alert Basics
# Resolve Alerts

Atlas issues alerts for the database and server conditions configured in your alert settings. When a condition triggers an alert, Atlas displays a warning symbol on the cluster and sends alert notifications. Your alert settings determine the notification methods. Atlas continues sending notifications at regular intervals until the condition resolves or you delete or disable the alert.

You should fix the immediate problem, implement a long-term solution, and view metrics to monitor your progress.

If you integrate with VictorOps, OpsGenie, or DataDog, you can receive informational alerts from these third-party monitoring services in Atlas. However, you must resolve these alerts within each external service.

<Tabs>
<Tab name="Organization Alerts">
</Tab>
<Tab name="Project Alerts">
</Tab>
</Tabs>

## View Alerts

<Tabs>
<Tab name="Organization Alerts">

You can view all alerts, alert settings, and deleted alerts on the Organization Alerts page. To learn more, see Alerts Workflow.

To view all open alerts:

### Navigate to the Alerts page for your organization.

- If it is not already displayed, select your desired organization from the Organizations menu in the navigation bar.
- Click Alerts in the sidebar.

### If it is not already displayed, click the All Alerts tab.

</Tab>
<Tab name="Project Alerts">

<Tabs>
<Tab name="Atlas CLI">

To list all alerts for the specified Atlas project using the Atlas CLI, run the following command:

```sh
atlas alerts list [options]
```

To return the details for one alert in the project you specify using the Atlas CLI, run the following command:

```sh
atlas alerts describe <alertId> [options]
```

To learn more about the syntax and parameters for the previous commands, see the Atlas CLI documentation for atlas alerts list and atlas alerts describe.

- Install the Atlas CLI
- Connect to the Atlas CLI

</Tab>
<Tab name="Atlas UI">

You can view open alerts, closed alerts, and alert settings on the Project Alerts page.
Atlas sends notifications for all alerts that appear on the Open tab. To learn more, see Alerts Workflow.

To view all open alerts using the Atlas UI:

### Navigate to the Alerts page for your project.

- If it is not already displayed, select the organization that contains your desired project from the Organizations menu in the navigation bar.
- If it is not already displayed, select your desired project from the Projects menu in the navigation bar.
- Click the Project Alerts icon in the navigation bar, or click Alerts in the sidebar.

### If it is not already displayed, click the Open Alerts tab.

</Tab>
</Tabs>

</Tab>
</Tabs>

## Acknowledge Alerts

<Tabs>
<Tab name="Organization Alerts">

To acknowledge alerts:

### Navigate to the Alerts page for your organization.

- If it is not already displayed, select your desired organization from the Organizations menu in the navigation bar.
- Click Alerts in the sidebar.

### Select the alert you want to acknowledge, then click Mark Acknowledge.

If an alert uses PagerDuty for alert notifications, you can acknowledge the alert only on your PagerDuty dashboard.

</Tab>
<Tab name="Project Alerts">

<Tabs>
<Tab name="Atlas CLI">

To acknowledge one alert for the specified project using the Atlas CLI, run the following command:

```sh
atlas alerts acknowledge <alertId> [options]
```

To learn more about the command syntax and parameters, see the Atlas CLI documentation for atlas alerts acknowledge.

- Install the Atlas CLI
- Connect to the Atlas CLI

</Tab>
<Tab name="Atlas UI">

To acknowledge alerts using the Atlas UI:

### Navigate to the Alerts page for your project.

- If it is not already displayed, select the organization that contains your desired project from the Organizations menu in the navigation bar.
- If it is not already displayed, select your desired project from the Projects menu in the navigation bar.
- Click the Project Alerts icon in the navigation bar, or click Alerts in the sidebar.
### Locate the alert you want to acknowledge, then click Acknowledge.

If an alert uses PagerDuty for alert notifications, you can acknowledge the alert only on your PagerDuty dashboard.

</Tab>
</Tabs>

</Tab>
</Tabs>

When you acknowledge an alert, Atlas sends no further notifications until either the acknowledgement period ends, you resolve the alert condition, or you unacknowledge the alert. If an alert condition ends during an acknowledgment period, Atlas sends a notification.

## Unacknowledge Alerts

You can unacknowledge an alert that you previously acknowledged. After you unacknowledge an active alert, Atlas resumes sending notifications at regular intervals until the condition resolves or you delete, disable, or re-acknowledge the alert.

<Tabs>
<Tab name="Organization Alerts">

To unacknowledge alerts:

### Navigate to the Alerts page for your organization.

- If it is not already displayed, select your desired organization from the Organizations menu in the navigation bar.
- Click Alerts in the sidebar.

### Select the alert you want to unacknowledge, then click Unacknowledge on the right side of the alert.

If an alert uses PagerDuty for alert notifications, you can acknowledge the alert only on your PagerDuty dashboard.

</Tab>
<Tab name="Project Alerts">

<Tabs>
<Tab name="Atlas CLI">

To unacknowledge one alert for the specified project using the Atlas CLI, run the following command:

```sh
atlas alerts unacknowledge <alertId> [options]
```

To learn more about the command syntax and parameters, see the Atlas CLI documentation for atlas alerts unacknowledge.

- Install the Atlas CLI
- Connect to the Atlas CLI

</Tab>
<Tab name="Atlas UI">

To unacknowledge alerts using the Atlas UI:

### Navigate to the Alerts page for your project.

- If it is not already displayed, select the organization that contains your desired project from the Organizations menu in the navigation bar.
- If it is not already displayed, select your desired project from the Projects menu in the navigation bar.
- Click the Project Alerts icon in the navigation bar, or click Alerts in the sidebar.

### Locate the alert you want to unacknowledge, then click Unacknowledge on the right side of the alert.

If an alert uses PagerDuty for alert notifications, you can acknowledge the alert only on your PagerDuty dashboard.

</Tab>
</Tabs>

</Tab>
</Tabs>

## Increase Cluster Capacity

To resolve an alert by increasing your cluster's capacity, see Modify a Cluster.

## View All Activity

To view and filter the activity feed for an organization or project, see View the Activity Feed.

## Retrieve the Activity Feed

<Tabs>
<Tab name="Organization Alerts">

You can retrieve events for an organization using the get all API (Application Programming Interface) resource.

</Tab>
<Tab name="Project Alerts">

You can retrieve events for a project using the get all API (Application Programming Interface) resource.

</Tab>
</Tabs>

## Resolutions for Specific Alerts

The following sections describe Atlas alert conditions and suggest steps for resolving them.

<table>
  <tr>
    <th id="Alert%20Type"> Alert Type </th>
    <th id="Description"> Description </th>
  </tr>
  <tr>
    <td headers="Alert%20Type"> Atlas Search Alerts </td>
    <td headers="Description"> Amount of CPU and memory used by Atlas Search processes reach a specified threshold. </td>
  </tr>
  <tr>
    <td headers="Alert%20Type"> Connection Alerts </td>
    <td headers="Description"> Number of connections to a MongoDB process exceeds the allowable maximum. </td>
  </tr>
  <tr>
    <td headers="Alert%20Type"> Disk Space % Used Alerts </td>
    <td headers="Description"> Percentage of used disk space on a partition reaches a specified threshold. </td>
  </tr>
  <tr>
    <td headers="Alert%20Type"> Query Targeting Alerts </td>
    <td headers="Description"> Indicates inefficient queries.
The change streams cursors that the Atlas Search process (`mongot`) uses to keep Atlas Search indexes updated can contribute to the query targeting ratio and trigger query targeting alerts if the ratio is high. </td>
  </tr>
  <tr>
    <td headers="Alert%20Type"> Replica Set Has No Primary </td>
    <td headers="Description"> No primary is detected in replica set. </td>
  </tr>
  <tr>
    <td headers="Alert%20Type"> Replication Oplog Alerts </td>
    <td headers="Description"> Amount of oplog data generated on a primary cluster member is larger than the cluster's configured oplog size. </td>
  </tr>
  <tr>
    <td headers="Alert%20Type"> System CPU Usage Alerts </td>
    <td headers="Description"> CPU usage of the MongoDB process reaches a specified threshold. </td>
  </tr>
</table>
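The acknowledge/unacknowledge rules described above amount to a small state machine: notifications are suppressed while an alert is acknowledged and the acknowledgement has not expired, and resume otherwise. The following is a toy model of that behavior for reasoning about it, not Atlas code:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Alert:
    open: bool = True
    acknowledged_until: Optional[float] = None  # epoch seconds; None = not acknowledged

    def acknowledge(self, until):
        self.acknowledged_until = until

    def unacknowledge(self):
        self.acknowledged_until = None

    def should_notify(self, now):
        """Atlas keeps notifying at intervals unless the alert is resolved or acknowledged."""
        if not self.open:
            return False  # condition resolved: no more notifications
        if self.acknowledged_until is not None and now < self.acknowledged_until:
            return False  # suppressed during the acknowledgement period
        return True

a = Alert()
a.acknowledge(until=1_000)
assert not a.should_notify(now=500)  # suppressed while acknowledged
a.unacknowledge()
assert a.should_notify(now=500)      # notifications resume
```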
https://mongodb.com/docs/atlas/alert-resolutions/
md
Resolve Alerts
2024-05-20T17:32:10.812Z
{ "contentType": null, "productName": "PyMongo", "tags": [ "docs", "driver", "python", "pymongo" ], "version": "v4.7 (current)" }
created
snooty-pymongo
# Aggregation Tutorials ## Overview Aggregation tutorials provide detailed explanations of common aggregation tasks in a step-by-step format. The tutorials are adapted from examples in the Practical MongoDB Aggregations book by Paul Done. Each tutorial includes the following sections: - **Introduction**, which describes the purpose and common use cases of the aggregation type. This section also describes the example and desired outcome that the tutorial demonstrates. - **Before You Get Started**, which describes the necessary databases, collections, and sample data that you must have before building the aggregation pipeline and performing the aggregation. - **Tutorial**, which describes how to build and run the aggregation pipeline. This section describes each stage of the completed aggregation tutorial, and then explains how to run and interpret the output of the aggregation. At the end of each aggregation tutorial, you can find a link to a fully runnable Python code file that you can run in your environment. ## Aggregation Template App Before you begin following an aggregation tutorial, you must set up a new Python app. You can use this app to connect to a MongoDB deployment, insert sample data into MongoDB, and run the aggregation pipeline in each tutorial. To learn how to install the driver and connect to MongoDB, see Get Started with PyMongo. Once you install the driver, create a file called `agg_tutorial.py`. Paste the following code in this file to create an app template for the aggregation tutorials:

```python
from pymongo import MongoClient

# Replace the placeholder with your connection string.
uri = "<connection string>"
client = MongoClient(uri)

try:
    agg_db = client["agg_tutorials_db"]

    # Get a reference to relevant collections.
    # ... some_coll =
    # ... another_coll =

    # Delete any existing documents in collections.
    # ... some_coll.delete_many({})

    # Insert sample data into the collection or collections.
    # ... some_data = [...]
    # ... some_coll.insert_many(some_data)

    # Create an empty pipeline array.
    pipeline = []

    # Add code to create pipeline stages.
    # ... pipeline.append({...})

    # Run the aggregation.
    # ... aggregation_result = ...

    # Print the aggregation results.
    for document in aggregation_result:
        print(document)
finally:
    client.close()
```

In the preceding code, read the code comments to find the sections of the code that you must modify for the tutorial you are following. If you attempt to run the code without making any changes, you will encounter a connection error. For every tutorial, you must replace the connection string placeholder with your deployment's connection string. To learn how to locate your deployment's connection string, see Create a Connection String. For example, if your connection string is `"mongodb+srv://mongodb-example:27017"`, your connection string assignment resembles the following:

```python
uri = "mongodb+srv://mongodb-example:27017"
```

To run the completed file after you modify the template for a tutorial, run the following command in your shell:

```bash
python3 agg_tutorial.py
```

## Available Tutorials - Filtered Subset - Group and Total - Unpack Arrays and Group - One-to-One Join - Multi-Field Join
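As a concrete illustration, the placeholder sections of the template might be filled in as follows for a hypothetical group-and-total pipeline. The collection and field names (`orders`, `customer_id`, `value`, `status`) are invented for this sketch; each tutorial supplies its own.

```python
# Hypothetical filling-in of the template's pipeline section.
# Field and collection names are illustrative, not from any tutorial.
pipeline = []

# Match stage: keep only completed orders.
pipeline.append({"$match": {"status": "complete"}})

# Group stage: total order value and order count per customer.
pipeline.append({
    "$group": {
        "_id": "$customer_id",
        "total_value": {"$sum": "$value"},
        "order_count": {"$sum": 1},
    }
})

# With a live connection, you would then run:
# aggregation_result = some_coll.aggregate(pipeline)
```

The stages are plain Python dictionaries, so you can build and inspect the pipeline before ever connecting to a deployment.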
https://mongodb.com/docs/languages/python/pymongo-driver/current/aggregation/aggregation-tutorials/
md
Aggregation Tutorials
2024-05-20T17:32:10.812Z
{ "contentType": null, "productName": "PyMongo", "tags": [ "docs", "driver", "python", "pymongo" ], "version": "v4.7 (current)" }
created
snooty-pymongo
# Specialized Data Formats ## Overview You can use several types of specialized data formats in your PyMongo application. To learn how to work with these data formats, see the following sections: - Learn how to encode and decode custom types in the Custom Types guide. - Learn how to work with Python `datetime` objects in PyMongo in the Dates and Times guide. - Learn about UUIDs and how to maintain cross-language compatibility while working with them in the Universally Unique IDs (UUIDs) guide.
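As a quick taste of the Dates and Times guide: BSON dates store whole milliseconds, so a Python `datetime` round-trips through MongoDB with its microsecond component truncated to millisecond precision. A rough sketch of that truncation (plain Python, no driver required; the helper name is invented for illustration):

```python
from datetime import datetime, timezone

def to_bson_precision(dt: datetime) -> datetime:
    # Sketch of the precision loss: BSON dates hold whole milliseconds,
    # so sub-millisecond detail does not survive a round trip to MongoDB.
    return dt.replace(microsecond=(dt.microsecond // 1000) * 1000)

original = datetime(2024, 5, 20, 17, 32, 10, 812999, tzinfo=timezone.utc)
stored = to_bson_precision(original)
```

If sub-millisecond precision matters to your application, store the timestamp in another form (for example, as an integer of microseconds) rather than as a BSON date.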
https://mongodb.com/docs/languages/python/pymongo-driver/current/data-formats/
md
Specialized Data Formats
2024-05-20T17:32:10.812Z
{ "contentType": null, "productName": "PyMongo", "tags": [ "docs", "driver", "python", "pymongo" ], "version": "v4.7 (current)" }
created
snooty-pymongo
# Create a MongoDB Deployment You can create a free tier MongoDB deployment on MongoDB Atlas to store and manage your data. MongoDB Atlas hosts and manages your MongoDB database in the cloud. ## Create a Free MongoDB deployment on Atlas Complete the Get Started with Atlas guide to set up a new Atlas account and load sample data into a new free tier MongoDB deployment. ## Save your Credentials After you create your database user, save that user's username and password to a safe location for use in an upcoming step. After you complete these steps, you have a new free tier MongoDB deployment on Atlas, database user credentials, and sample data loaded in your database. If you run into issues on this step, ask for help in the MongoDB Community Forums or submit feedback by using the Rate this page tab on the right or bottom right side of this page.
https://mongodb.com/docs/languages/python/pymongo-driver/current/get-started/create-a-deployment/
md
Create a MongoDB Deployment
2024-05-20T17:32:10.812Z
{ "contentType": null, "productName": "PyMongo", "tags": [ "docs", "driver", "python", "pymongo" ], "version": "v4.7 (current)" }
created
snooty-pymongo
# Compound Indexes ## Overview Compound indexes hold references to multiple fields within a collection's documents, improving query and sort performance. ### Sample Data The examples in this guide use the `sample_mflix.movies` collection from the Atlas sample datasets. To learn how to create a free MongoDB Atlas cluster and load the sample datasets, see the Get Started with PyMongo guide. ## Create a Compound Index The following example creates a compound index on the `type` and `genre` fields:

```python
import pymongo

movies.create_index([("type", pymongo.ASCENDING), ("genre", pymongo.ASCENDING)])
```

The following is an example of a query that uses the index created in the preceding code example:

```python
query = {"type": "movie", "genre": "Drama"}
sort = [("type", pymongo.ASCENDING), ("genre", pymongo.ASCENDING)]
cursor = movies.find(query).sort(sort)
```

For more information, see Compound Indexes in the MongoDB Server manual.
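If you do not pass a `name` argument when creating an index, MongoDB derives a default name by joining each field with its sort direction. The helper below is only an illustrative reconstruction of that naming convention, not part of PyMongo's public API:

```python
def default_index_name(keys):
    # Reproduce MongoDB's default index-name convention:
    # [("type", 1), ("genre", 1)] becomes "type_1_genre_1".
    return "_".join(f"{field}_{direction}" for field, direction in keys)

name = default_index_name([("type", 1), ("genre", 1)])
```

Knowing the derived name is useful when you later drop the index with `drop_index("type_1_genre_1")` or look it up in `list_indexes()` output.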
https://mongodb.com/docs/languages/python/pymongo-driver/current/indexes/compound-index/
md
Compound Indexes
2024-05-20T17:32:10.812Z
{ "contentType": null, "productName": "PyMongo", "tags": [ "docs", "driver", "python", "pymongo" ], "version": "v4.7 (current)" }
created
snooty-pymongo
# Previous Versions The following links direct you to documentation for previous versions of PyMongo. - Version 4.6 - Version 4.5 - Version 4.4 - Version 4.3 - Version 4.2 - Version 4.1 - Version 4.0
https://mongodb.com/docs/languages/python/pymongo-driver/current/previous-versions/
md
Previous Versions
2024-05-20T17:31:07.735Z
{ "contentType": null, "productName": "MongoDB Server", "tags": [ "docs", "manual" ], "version": "v7.0 (current)" }
created
snooty-docs
# About MongoDB Documentation The MongoDB Manual contains comprehensive documentation on MongoDB. This page describes the manual's licensing, editions, and versions, and describes how to make a change request and how to contribute to the manual. ## License This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 United States License © MongoDB, Inc. 2008-2022 ## Man Pages In addition to the MongoDB Manual, you can access the MongoDB Man Pages, which are also distributed with the official MongoDB Packages. ## Version and Revisions This version of the manual reflects version 7.0 of MongoDB. See the MongoDB Documentation Project Page for an overview of all editions and output formats of the MongoDB Manual. You can see the full revision history and track ongoing improvements and additions for all versions of the manual from its GitHub repository. The most up-to-date, current, and stable version of the manual is always available at "https://www.mongodb.com/docs/manual/". ## Report an Issue or Make a Change Request To report an issue with this manual or to make a change request, file a ticket at the MongoDB DOCS Project on Jira. ## Contribute to the Documentation The entire documentation source for this manual is available in the mongodb/docs repository, which is one of the MongoDB project repositories on GitHub. To contribute to the documentation, you can open a GitHub account, fork the mongodb/docs repository, make a change, and issue a pull request. In order for the documentation team to accept your change, you must complete the MongoDB Contributor Agreement. You can clone the repository by issuing the following command at your system shell:

```bash
git clone git://github.com/mongodb/docs.git
```

### About the Documentation Process The MongoDB Manual uses Sphinx, a sophisticated documentation engine built upon Python Docutils. 
The original reStructuredText files, as well as all necessary Sphinx extensions and build tools, are available in the same repository as the documentation. For more information on the MongoDB documentation process, see the Meta Documentation. If you have any questions, please feel free to open a Jira Case.
https://mongodb.com/docs/manual/about/
md
About MongoDB Documentation
2024-05-20T17:31:07.735Z
{ "contentType": null, "productName": "MongoDB Server", "tags": [ "docs", "manual" ], "version": "v7.0 (current)" }
created
snooty-docs
# Administration The administration documentation addresses the ongoing operation and maintenance of MongoDB instances and deployments. This documentation includes both high level overviews of these concerns as well as tutorials that cover specific procedures and processes for operating MongoDB.
https://mongodb.com/docs/manual/administration/
md
Administration
2024-05-20T17:31:07.735Z
{ "contentType": null, "productName": "MongoDB Server", "tags": [ "docs", "manual" ], "version": "v7.0 (current)" }
created
snooty-docs
# MongoDB Performance As you develop and operate applications with MongoDB, you may need to analyze the performance of the application and its database. When you encounter degraded performance, it is often a function of database access strategies, hardware availability, and the number of open database connections. Some users may experience performance limitations as a result of inadequate or inappropriate indexing strategies, or as a consequence of poor schema design patterns. Locking Performance discusses how these can impact MongoDB's internal locking. Performance issues may indicate that the database is operating at capacity and that it is time to add additional capacity to the database. In particular, the application's working set should fit in the available physical memory. In some cases performance issues may be temporary and related to abnormal traffic load. As discussed in Number of Connections, scaling can help relieve excessive traffic. Database profiling can help you to understand what operations are causing degradation. ## Locking Performance MongoDB uses a locking system to ensure data set consistency. If certain operations are long-running or a queue forms, performance will degrade as requests and operations wait for the lock. Lock-related slowdowns can be intermittent. To see if the lock has been affecting your performance, refer to the locks section and the globalLock section of the `serverStatus` output. Dividing `locks.<type>.timeAcquiringMicros` by `locks.<type>.acquireWaitCount` can give an approximate average wait time for a particular lock mode. `locks.<type>.deadlockCount` provides the number of times the lock acquisitions encountered deadlocks. If `globalLock.currentQueue.total` is consistently high, then there is a chance that a large number of requests are waiting for a lock. This indicates a possible concurrency issue that may be affecting performance. 
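The division described above can be sketched as a small helper that walks a `serverStatus` result. The helper and the sample numbers below are invented for illustration; only the `locks.<type>` counter names come from the server:

```python
def avg_lock_wait_micros(server_status, lock_type, mode):
    # Approximate average wait (microseconds) for one lock mode:
    # timeAcquiringMicros / acquireWaitCount, as described above.
    lock = server_status["locks"][lock_type]
    waits = lock.get("acquireWaitCount", {}).get(mode, 0)
    if waits == 0:
        return 0.0  # no recorded waits for this mode
    return lock["timeAcquiringMicros"][mode] / waits

# Invented sample: 120 waits totalling 3,600,000 microseconds in "W" mode.
status = {
    "locks": {
        "Global": {
            "acquireWaitCount": {"W": 120},
            "timeAcquiringMicros": {"W": 3_600_000},
        }
    }
}
avg = avg_lock_wait_micros(status, "Global", "W")
```

In practice you would obtain `status` from `db.command("serverStatus")` and compare the average across lock modes (`r`, `w`, `R`, `W`) over time.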
If `globalLock.totalTime` is high relative to `uptime`, the database has existed in a lock state for a significant amount of time. Long queries can result from ineffective use of indexes; non-optimal schema design; poor query structure; system architecture issues; or insufficient RAM resulting in disk reads. ## Number of Connections In some cases, the number of connections between the applications and the database can overwhelm the ability of the server to handle requests. The following fields in the `serverStatus` document can provide insight: - `connections` is a container for the following two fields: - `connections.current`: the total number of current clients connected to the database instance. - `connections.available`: the total number of unused connections available for new clients. If there are numerous concurrent application requests, the database may have trouble keeping up with demand. If this is the case, increase the capacity of your deployment. For write-heavy applications, deploy sharding and add one or more shards to a sharded cluster to distribute load among `mongod` instances. Spikes in the number of connections can also be the result of application or driver errors. All of the officially supported MongoDB drivers implement connection pooling, which allows clients to use and reuse connections more efficiently. An extremely high number of connections, particularly without corresponding workload, is often indicative of a driver or other configuration error. Unless constrained by system-wide limits, the maximum number of incoming connections supported by MongoDB is configured with the `maxIncomingConnections` setting. On Unix-based systems, system-wide limits can be modified using the `ulimit` command, or by editing your system's `/etc/sysctl` file. See UNIX `ulimit` Settings for more information. 
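A quick way to reason about connection headroom is the ratio of current connections to total configured capacity, using the two counters just described. The helper and sample numbers are invented for illustration:

```python
def connection_utilization(server_status):
    # Fraction of connection capacity in use, computed from the
    # connections.current and connections.available counters.
    conns = server_status["connections"]
    total = conns["current"] + conns["available"]
    return conns["current"] / total if total else 0.0

# Invented sample: 750 clients connected, 250 connection slots free.
status = {"connections": {"current": 750, "available": 250}}
util = connection_utilization(status)
```

A utilization that stays near 1.0 suggests the deployment is running out of connection slots and that pooling behavior or capacity should be reviewed.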
## Full Time Diagnostic Data Capture To help MongoDB engineers analyze server behavior, `mongod` and `mongos` processes include a Full Time Diagnostic Data Capture (FTDC) mechanism. FTDC is enabled by default. Due to its importance in debugging deployments, FTDC thread failures are fatal and stop the parent `mongod` or `mongos` process. FTDC data files are compressed and not human-readable. They inherit the same file access permissions as the MongoDB data files. Only users with access to FTDC data files can transmit the FTDC data. MongoDB engineers cannot access FTDC data without explicit permission and assistance from system owners or operators. FTDC data **never** contains any of the following information: - Samples of queries, query predicates, or query results - Data sampled from any end-user collection or index - System or MongoDB user credentials or security certificates FTDC data contains certain host machine information such as hostnames, operating system information, and the options or settings used to start the `mongod` or `mongos`. This information may be considered protected or confidential by some organizations or regulatory bodies, but is not typically considered to be Personally Identifiable Information (PII). For clusters where these fields are configured with protected, confidential, or PII data, please notify MongoDB engineers before sending FTDC data to coordinate appropriate security measures. On Windows, to collect system data such as disk, cpu, and memory, FTDC requires Microsoft access permissions from the following groups: - Performance Monitor Users - Performance Log Users If the user running `mongod` and `mongos` is not an administrator, add them to these groups to log FTDC data. For more information, see the Microsoft documentation here. 
FTDC periodically collects statistics produced by the following commands: - `serverStatus` - `replSetGetStatus` (`mongod` only) - `collStats` for the `local.oplog.rs` collection (`mongod` only) - `connPoolStats` (`mongos` only) Depending on the host operating system, the diagnostic data may include one or more of the following utilization statistics: - CPU utilization - Memory utilization - Disk utilization related to performance. FTDC does not include data related to storage capacity. - Network performance statistics. FTDC only captures metadata and does not capture or inspect any network packets. If the `mongod` process runs in a container, FTDC reports utilization statistics from the perspective of the container instead of the host operating system. For example, if the `mongod` runs in a container that is configured with RAM restrictions, FTDC calculates memory utilization against the container's RAM limit, as opposed to the host operating system's RAM limit. FTDC collects statistics produced by the following commands on file rotation or startup: - `getCmdLineOpts` - `buildInfo` - `hostInfo` `mongod` processes store FTDC data files in a `diagnostic.data` directory under the instance's `storage.dbPath`. All diagnostic data files are stored under this directory. For example, given a `dbPath` of `/data/db`, the diagnostic data directory would be `/data/db/diagnostic.data`. `mongos` processes store FTDC data files in a diagnostic directory relative to the `systemLog.path` log path setting. MongoDB truncates the logpath's file extension and concatenates `diagnostic.data` to the remaining name. For example, given a `path` setting of `/var/log/mongodb/mongos.log`, the diagnostic data directory would be `/var/log/mongodb/mongos.diagnostic.data`. You can view the FTDC source code on the MongoDB GitHub repository. The `ftdc_system_stats_*.cpp` files specifically define any system-specific diagnostic data captured. 
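The two directory rules above can be sketched as a helper. This is illustrative only; the server computes these paths itself, and the helper name is invented:

```python
import os

def ftdc_directory(process, db_path=None, log_path=None):
    # Derive the diagnostic.data location described above:
    # mongod uses <dbPath>/diagnostic.data; mongos strips the log
    # file's extension and appends .diagnostic.data.
    if process == "mongod":
        return db_path.rstrip("/") + "/diagnostic.data"
    root, _ext = os.path.splitext(log_path)
    return root + ".diagnostic.data"

mongod_dir = ftdc_directory("mongod", db_path="/data/db")
mongos_dir = ftdc_directory("mongos", log_path="/var/log/mongodb/mongos.log")
```

Both results match the worked examples in the text, which is a convenient way to know where to look for diagnostic files when asked to provide them to MongoDB support.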
FTDC runs with the following defaults: - Data capture every 1 second - 200MB maximum `diagnostic.data` folder size. These defaults are designed to provide useful data to MongoDB engineers with minimal impact on performance or storage size. These values only require modifications if requested by MongoDB engineers for specific diagnostic purposes. To disable FTDC, start up the `mongod` or `mongos` with the `diagnosticDataCollectionEnabled: false` option in the `setParameter` settings of your configuration file:

```yaml
setParameter:
  diagnosticDataCollectionEnabled: false
```

Disabling FTDC may increase the time or resources required when analyzing or debugging issues with support from MongoDB engineers. For information on MongoDB Support, visit Get Started With MongoDB Support.
https://mongodb.com/docs/manual/administration/analyzing-mongodb-performance/
md
MongoDB Performance
2024-05-20T17:31:07.735Z
{ "contentType": null, "productName": "MongoDB Server", "tags": [ "docs", "manual" ], "version": "v7.0 (current)" }
created
snooty-docs
# Backup and Restore Sharded Clusters The following tutorials describe backup and restoration for sharded clusters: To use `mongodump` and `mongorestore` as a backup strategy for sharded clusters, you must stop the sharded cluster balancer and use the `fsync` command or the `db.fsyncLock()` method on `mongos` to block writes on the cluster during backups. Sharded clusters can also use one of the following coordinated backup and restore processes, which maintain the atomicity guarantees of transactions across shards: - MongoDB Atlas - MongoDB Cloud Manager - MongoDB Ops Manager Use file system snapshots to back up each component in the sharded cluster individually. The procedure involves stopping the cluster balancer. If your system configuration allows file system backups, this might be more efficient than using MongoDB tools. Create backups using `mongodump` to back up each component in the cluster individually. Limit the operation of the cluster balancer to provide a window for regular backup operations. An outline of the procedure and considerations for restoring an *entire* sharded cluster from backup.
https://mongodb.com/docs/manual/administration/backup-sharded-clusters/
md
Backup and Restore Sharded Clusters
2024-05-20T17:31:07.735Z
{ "contentType": null, "productName": "MongoDB Server", "tags": [ "docs", "manual" ], "version": "v7.0 (current)" }
created
snooty-docs
# Configuration and Maintenance This section describes routine management operations, including updating your MongoDB deployment's configuration. Outlines common MongoDB configurations and examples of best-practice configurations for common use cases. Upgrade a MongoDB deployment to a different patch release within the same major release series. Start, configure, and manage running `mongod` processes. Stop in-progress MongoDB client operations using `db.killOp()` and `maxTimeMS()`. Archive the current log files and start new ones.
https://mongodb.com/docs/manual/administration/configuration-and-maintenance/
md
Configuration and Maintenance
2024-05-20T17:32:23.500Z
{ "contentType": "Video", "productName": null, "tags": [ "Atlas" ], "version": null }
created
devcenter
# The Atlas Search 'cene: Season 1 Welcome to the first season of a video series dedicated to Atlas Search! This series of videos is designed to guide you through the journey from getting started and understanding the concepts, to advanced techniques. ## What is Atlas Search? [Atlas Search][1] is an embedded full-text search in MongoDB Atlas that gives you a seamless, scalable experience for building relevance-based app features. Built on Apache Lucene, Atlas Search eliminates the need to run a separate search system alongside your database. By integrating the database, search engine, and sync mechanism into a single, unified, and fully managed platform, Atlas Search is the fastest and easiest way to build relevance-based search capabilities directly into applications. > Hip to the *'cene* > > The name of this video series comes from a contraction of "Lucene", > the search engine library leveraged by Atlas. Or it's a short form of "scene". ## Episode Guide ### **[Episode 1: What is Atlas Search & Quick Start][2]** In this first episode of the Atlas Search 'cene, learn what Atlas Search is, and get a quick start introduction to setting up Atlas Search on your data. Within a few clicks, you can set up a powerful, full-text search index on your Atlas collection data, and leverage the fast, relevant results to your users' queries. ### **[Episode 2: Configuration / Development Environment][3]** In order to best leverage Atlas Search, configuring it for your querying needs leads to success. In this episode, learn how Atlas Search maps your documents to its index, and discover the configuration control you have. ### **[Episode 3: Indexing][4]** While Atlas Search automatically indexes your collection's content, it does demand attention to the indexing configuration details in order to match users' queries appropriately. This episode covers how Atlas Search builds an inverted index, and the options one must consider. 
### **[Episode 4: Searching][5]** Atlas Search provides a rich set of query operators and relevancy controls. This episode covers the common query operators, their relevancy controls, and ends with coverage of the must-have Query Analytics feature. ### **[Episode 5: Faceting][6]** Facets produce additional context for search results, providing a list of subsets and counts within. This episode details the faceting options available in Atlas Search. ### **[Episode 6: Advanced Search Topics][7]** In this episode, we go through some more advanced search topics including embedded documents, fuzzy search, autocomplete, highlighting, and geospatial. ### **[Episode 7: Query Analytics][8]** Are your users finding what they are looking for? Are your top queries returning the best results? This episode covers the important topic of query analytics. If you're using search, you need this! ### **[Episode 8: Tips & Tricks][9]** In this final episode of The Atlas Search 'cene Season 1, learn useful techniques to introspect query details and see the relevancy scoring computation details. Also shown is how to get facets and search results back in one API call. 
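To make the episode guide concrete: the heart of every Atlas Search query covered in the series is a `$search` aggregation stage that must come first in the pipeline. A minimal sketch in Python dictionary form (the query text and `title` path are assumptions for illustration; `default` is the index name Atlas assigns unless you choose another):

```python
# Minimal $search stage of the kind the series builds on.
# The "title" path and query text are invented for this example.
search_stage = {
    "$search": {
        "index": "default",
        "text": {
            "query": "summer rain",
            "path": "title",
        },
    }
}

# $search must be the first stage of the aggregation pipeline.
# On an Atlas collection you would run:
#     results = collection.aggregate(pipeline)
pipeline = [search_stage, {"$limit": 10}]
```

Episodes 2 through 6 essentially explore how to configure the index this stage hits and which operators can replace the simple `text` operator shown here.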
[1]: https://www.mongodb.com/atlas/search [2]: https://www.mongodb.com/developer/videos/what-is-atlas-search-quick-start/ [3]: https://www.mongodb.com/developer/videos/atlas-search-configuration-development-environment/ [4]: https://www.mongodb.com/developer/videos/mastering-indexing-for-perfect-query-matches/ [5]: https://www.mongodb.com/developer/videos/query-operators-relevancy-controls-for-precision-searches/ [6]: https://www.mongodb.com/developer/videos/faceting-mastery-unlock-the-full-potential-of-atlas-search-s-contextual-insights/ [7]: https://www.mongodb.com/developer/videos/atlas-search-mastery-elevate-your-search-with-fuzzy-geospatial-highlighting-hacks/ [8]: https://www.mongodb.com/developer/videos/atlas-search-query-analytics/ [9]: https://www.mongodb.com/developer/videos/tips-and-tricks-the-atlas-search-cene-season-1-episode-8/
https://www.mongodb.com/developer/products/atlas/atlas-search-cene-1
md
The Atlas Search 'cene: Season 1
2024-05-20T17:32:23.500Z
{ "contentType": "Tutorial", "productName": null, "tags": [ "MongoDB", "JavaScript", "AI", "Node.js" ], "version": null }
created
devcenter
# Using MongoDB Atlas Triggers to Summarize Airbnb Reviews with OpenAI In the realm of property rentals, reviews play a pivotal role. MongoDB Atlas triggers, combined with the power of OpenAI's models, can help summarize and analyze these reviews in real-time. In this article, we'll explore how to utilize MongoDB Atlas triggers to process Airbnb reviews, yielding concise summaries and relevant tags. This article is an additional feature added to the hotels and apartment sentiment search application developed in Leveraging OpenAI and MongoDB Atlas for Improved Search Functionality. ## Introduction MongoDB Atlas triggers allow users to define functions that execute in real-time in response to database operations. These triggers can be harnessed to enhance data processing and analysis capabilities. In this example, we aim to generate summarized reviews and tags for a sample Airbnb dataset. Our original data model has each review embedded in the listing document as an array:

```javascript
"reviews": [
  {
    "_id": "2663437",
    "date": { "$date": "2012-10-20T04:00:00.000Z" },
    "listing_id": "664017",
    "reviewer_id": "633940",
    "reviewer_name": "Patricia",
    "comments": "I booked the room at Marinete's apartment for my husband. He was staying in Rio for a week because he was studying Portuguese. He loved the place. Marinete was very helpfull, the room was nice and clean. \r\nThe location is perfect. He loved the time there. \r\n\r\n"
  },
  {
    "_id": "2741592",
    "date": { "$date": "2012-10-28T04:00:00.000Z" },
    "listing_id": "664017",
    "reviewer_id": "3932440",
    "reviewer_name": "Carolina",
    "comments": "Es una muy buena anfitriona, preocupada de que te encuentres cómoda y te sugiere que actividades puedes realizar. Disfruté mucho la estancia durante esos días, el sector es central y seguro."
  },
  ...
]
```

## Prerequisites - App Services application (e.g., application-0). Ensure linkage to the cluster with the Airbnb data. - OpenAI account with API access. 
![Open AI Key ### Secrets and Values 1. Navigate to your App Services application. 2. Under "Values," create a secret named `openAIKey` with your OPEN AI API key. 3. Create a linked value named OpenAIKey and link to the secret. ## The trigger code The provided trigger listens for changes in the sample_airbnb.listingsAndReviews collection. Upon detecting a new review, it samples up to 50 reviews, sends them to OpenAI's API for summarization, and updates the original document with the summarized content and tags. Please notice that the trigger reacts to updates that were marked with `"process" : false` flag. This field indicates that there were no summary created for this batch of reviews yet. Example of a review update operation that will fire this trigger: ```javascript listingsAndReviews.updateOne({"_id" : "1129303"}, { $push : { "reviews" : new_review } , $set : { "process" : false" }}); ``` ### Sample reviews function To prevent overloading the API with a large number of reviews, a function sampleReviews is defined to randomly sample up to 50 reviews: ```javscript function sampleReviews(reviews) { if (reviews.length <= 50) { return reviews; } const sampledReviews = ]; const seenIndices = new Set(); while (sampledReviews.length < 50) { const randomIndex = Math.floor(Math.random() * reviews.length); if (!seenIndices.has(randomIndex)) { seenIndices.add(randomIndex); sampledReviews.push(reviews[randomIndex]); } } return sampledReviews; } ``` ### Main trigger logic The main trigger logic is invoked when an update change event is detected with a `"process" : false` field. ```javascript exports = async function(changeEvent) { // A Database Trigger will always call a function with a changeEvent. // Documentation on ChangeEvents: https://www.mongodb.com/docs/manual/reference/change-events // This sample function will listen for events and replicate them to a collection in a different Database function sampleReviews(reviews) { // Logic above... 
if (reviews.length <= 50) { return reviews; } const sampledReviews = []; const seenIndices = new Set(); while (sampledReviews.length < 50) { const randomIndex = Math.floor(Math.random() * reviews.length); if (!seenIndices.has(randomIndex)) { seenIndices.add(randomIndex); sampledReviews.push(reviews[randomIndex]); } } return sampledReviews; } // Access the _id of the changed document: const docId = changeEvent.documentKey._id; const doc= changeEvent.fullDocument; // Get the MongoDB service you want to use (see "Linked Data Sources" tab) const serviceName = "mongodb-atlas"; const databaseName = "sample_airbnb"; const collection = context.services.get(serviceName).db(databaseName).collection(changeEvent.ns.coll); // This function is the endpoint's request handler. // URL to make the request to the OpenAI API. const url = 'https://api.openai.com/v1/chat/completions'; // Fetch the OpenAI key stored in the context values. const openai_key = context.values.get("openAIKey"); const reviews = doc.reviews.map((review) => {return {"comments" : review.comments}}); const sampledReviews= sampleReviews(reviews); // Prepare the request string for the OpenAI API. const reqString = `Summerize the reviews provided here: ${JSON.stringify(sampledReviews)} | instructions example:\n\n [{"comment" : "Very Good bed"} ,{"comment" : "Very bad smell"} ] \nOutput: {"overall_review": "Overall good beds and bad smell" , "neg_tags" : ["bad smell"], pos_tags : ["good bed"]}. No explanation. No 'Output:' string in response. Valid JSON. `; console.log(`reqString: ${reqString}`); // Call OpenAI API to get the response. 
let resp = await context.http.post({ url: url, headers: { 'Authorization': [`Bearer ${openai_key}`], 'Content-Type': ['application/json'] }, body: JSON.stringify({ model: "gpt-4", temperature: 0, messages: [ { "role": "system", "content": "Output json generator follow only provided example on the current reviews" }, { "role": "user", "content": reqString } ] }) }); // Parse the JSON response let responseData = JSON.parse(resp.body.text()); // Check the response status. if(resp.statusCode === 200) { console.log("Successfully received code."); console.log(JSON.stringify(responseData)); const code = responseData.choices[0].message.content; // Get the required data to be added into the document const updateDoc = JSON.parse(code) // Set a flag that this document does not need further re-processing updateDoc.process = true await collection.updateOne({_id : docId}, {$set : updateDoc}); } else { console.error("Failed to generate filter JSON."); console.log(JSON.stringify(responseData)); return {}; } }; ``` Key steps include: - API request preparation: Reviews from the changed document are sampled and prepared into a request string for the OpenAI API. The format and instructions are tailored to ensure the API returns a valid JSON with summarized content and tags. - API interaction: Using the context.http.post method, the trigger sends the prepared data to the OpenAI API. - Updating the original document: Upon a successful response from the API, the trigger updates the original document with the summarized content, negative tags (neg_tags), positive tags (pos_tags), and a process flag set to true. Here is a sample result that is added to the processed listing document: ``` "process": true, "overall_review": "Overall, guests had a positive experience at Marinete's apartment. They praised the location, cleanliness, and hospitality. 
However, some guests mentioned issues with the dog and language barrier.",
"neg_tags": [ "language barrier", "dog issues" ],
"pos_tags": [ "great location", "cleanliness", "hospitality" ]
```

Once the data is added to our documents, providing this information in our Vue application is as simple as adding this HTML template:

```html
<p>Overall Review (AI based): {{ listing.overall_review }}</p>
<span v-for="tag in listing.pos_tags" :key="tag">{{ tag }}</span>
<span v-for="tag in listing.neg_tags" :key="tag">{{ tag }}</span>
```

## Conclusion

By integrating MongoDB Atlas triggers with OpenAI's powerful models, we can efficiently process and analyze large volumes of reviews in real time. This setup not only provides concise summaries of reviews but also categorizes them into positive and negative tags, offering valuable insights to property hosts and potential renters.

Questions? Comments? Let's continue the conversation over in our community forums.
https://www.mongodb.com/developer/products/mongodb/atlas-open-ai-review-summary
md
Using MongoDB Atlas Triggers to Summarize Airbnb Reviews with OpenAI
2024-05-20T17:32:23.500Z
{ "contentType": "Tutorial", "productName": null, "tags": [ "MongoDB", "JavaScript", "Java", "Python", "AWS", "AI" ], "version": null }
created
devcenter
# Getting Started with MongoDB and AWS CodeWhisperer

**Introduction**
----------------

Amazon CodeWhisperer is trained on billions of lines of code and can generate code suggestions — ranging from snippets to full functions — in real time, based on your comments and existing code. AI code assistants have revolutionized developers' coding experience, but what sets Amazon CodeWhisperer apart is that MongoDB has collaborated with the AWS Data Science team to enhance its capabilities!

At MongoDB, we are always looking to enhance the developer experience, and we've fine-tuned the CodeWhisperer Foundational Models to deliver top-notch code suggestions — trained on, and tailored for, MongoDB. This gives developers of all levels the best possible experience when using CodeWhisperer for MongoDB functions.

This tutorial will help you get CodeWhisperer up and running in VS Code, but CodeWhisperer also works with a number of other IDEs, including IntelliJ IDEA, AWS Cloud9, AWS Lambda console, JupyterLab, and Amazon SageMaker Studio. On the [Amazon CodeWhisperer site][1], you can find tutorials that demonstrate how to set up CodeWhisperer on different IDEs, as well as other documentation.

*Note:* CodeWhisperer allows users to start without an AWS account, which matters because creating an AWS account usually requires a credit card. Currently, CodeWhisperer is free for individual users, so it's super easy to get up and running.

**Installing CodeWhisperer for VS Code**

CodeWhisperer doesn't have its own VS Code extension. It is part of a larger extension for AWS services called AWS Toolkit, which is available in the VS Code extensions store.

1. Open VS Code and navigate to the extensions store (bottom icon on the left panel).
2. Search for CodeWhisperer and it will show up as part of the AWS Toolkit.

![Searching for the AWS ToolKit Extension][2]

3. Once found, hit Install. Next, you'll see the full AWS Toolkit listing.

![The AWS Toolkit full listing][3]

4.
Once installed, you'll need to authorize CodeWhisperer via a Builder ID to connect to your AWS developer account (or set up a new account if you don't already have one).

![Authorise CodeWhisperer][4]

**Using CodeWhisperer**
-----------------------

**Navigating code suggestions**

![CodeWhisperer Running][5]

With CodeWhisperer installed and running, as you enter your prompt or code, CodeWhisperer will offer inline code suggestions. If you want to keep a suggestion, use **TAB** to accept it. CodeWhisperer may provide multiple suggestions to choose from depending on your use case. To navigate between suggestions, use the left and right arrow keys to view them, and **TAB** to accept.

If you don't like the suggestions you see, keep typing (or hit **ESC**). The suggestions will disappear, and CodeWhisperer will generate new ones at a later point based on the additional context.

**Requesting suggestions manually**

You can request suggestions at any time. Use **Option-C** on Mac or **ALT-C** on Windows. After you receive suggestions, use **TAB** to accept and the arrow keys to navigate.

**Getting the best recommendations**

For best results, follow these practices.

- Give CodeWhisperer something to work with. The more code your file contains, the more context CodeWhisperer has for generating recommendations.
- Write descriptive comments in natural language — for example:

```
// Take a JSON document as a String and store it in MongoDB returning the _id
```

Or:

```
// Insert a document in a collection with a given _id and a discountLevel
```

- Specify the libraries you prefer at the start of your file by using import statements.

```
// This Java class works with MongoDB sync driver.
// This class implements Connection to MongoDB and CRUD methods.
```

- Use descriptive names for variables and functions.
- Break down complex tasks into simpler tasks.

**Provide feedback**
----------------

As with all generative AI tools, CodeWhisperer is forever learning and forever expanding its foundational knowledge base, and MongoDB is looking for feedback. If you are using Amazon CodeWhisperer in your MongoDB development, we'd love to hear from you.

We've created a special "codewhisperer" tag on our [Developer Forums][6], and if you tag any post with this, it will be visible to our CodeWhisperer project team and we will get right on it to help and provide feedback. If you want to see what others are doing with CodeWhisperer on our forums, the [tag search link][7] will jump you straight into all the action.

We can't wait to see your thoughts and impressions of MongoDB and Amazon CodeWhisperer together.

[1]: https://aws.amazon.com/codewhisperer/resources/#Getting_started
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1bfd28a846063ae9/65481ef6e965d6040a3dcc37/CW_1.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltde40d5ae1b9dd8dd/65481ef615630d040a4b2588/CW_2.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt636bb8d307bebcee/65481ef6a6e009040a740b86/CW_3.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf1e0ebeea2089e6a/65481ef6077aca040a5349da/CW_4.png
[6]: https://www.mongodb.com/community/forums/
[7]: https://www.mongodb.com/community/forums/tag/codewhisperer
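To make the comment-driven workflow above concrete, here is a rough sketch of the kind of completion you might accept for an analogous Python prompt (the function name and shape are illustrative — they are not CodeWhisperer's literal output — and a PyMongo collection object is assumed):

```python
import json

# Prompt comment: take a JSON document as a string and store it in
# MongoDB, returning the _id of the inserted document.
def insert_json_document(collection, json_str):
    """Parse a JSON string and insert it into the given collection."""
    doc = json.loads(json_str)
    result = collection.insert_one(doc)
    return result.inserted_id
```

Accepting, rejecting, and refining suggestions like this one is the core loop described in the sections above.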
https://www.mongodb.com/developer/products/mongodb/getting-started-with-mongodb-and-codewhisperer
md
Getting Started with MongoDB and AWS Codewhisperer
2024-05-20T17:32:23.500Z
{ "contentType": "Code Example", "productName": null, "tags": [ "Java", "Spring" ], "version": null }
created
devcenter
# REST APIs with Java, Spring Boot, and MongoDB

## GitHub repository

If you want to write REST APIs in Java at the speed of light, I have what you need. I wrote this template to get you started. I have tried to solve as many problems as possible in it. So if you want to start writing REST APIs in Java, clone this project, and you will be up to speed in no time.

```shell
git clone https://github.com/mongodb-developer/java-spring-boot-mongodb-starter
```

That's all folks! All you need is in this repository. Below I will explain a few of the features and details about this template, but feel free to skip what is not necessary for your understanding.

## README

All the extra information and commands you need to get this project going are in the `README.md` file, which you can read on GitHub.

## Spring and MongoDB configuration

The configuration can be found in the `MongoDBConfiguration.java` class.

```java
package com.mongodb.starter;

import ...;

import static org.bson.codecs.configuration.CodecRegistries.fromProviders;
import static org.bson.codecs.configuration.CodecRegistries.fromRegistries;

@Configuration
public class MongoDBConfiguration {

    @Value("${spring.data.mongodb.uri}")
    private String connectionString;

    @Bean
    public MongoClient mongoClient() {
        CodecRegistry pojoCodecRegistry = fromProviders(PojoCodecProvider.builder().automatic(true).build());
        CodecRegistry codecRegistry = fromRegistries(MongoClientSettings.getDefaultCodecRegistry(), pojoCodecRegistry);
        return MongoClients.create(MongoClientSettings.builder()
                .applyConnectionString(new ConnectionString(connectionString))
                .codecRegistry(codecRegistry)
                .build());
    }
}
```

The important section here is the MongoDB configuration, of course. Firstly, you will notice the connection string is automatically retrieved from the `application.properties` file, and secondly, you will notice the configuration of the `MongoClient` bean.
A `Codec` is the interface that abstracts the processes of decoding a BSON value into a Java object and encoding a Java object into a BSON value. A `CodecRegistry` contains a set of `Codec` instances that are accessed according to the Java classes that they encode from and decode to.

The MongoDB driver is capable of encoding and decoding BSON for us, so we do not have to take care of this anymore. All the configuration we need for this project to run is here and nowhere else. You can read the driver documentation if you want to know more about this topic.

## Multi-document ACID transactions

Just for the sake of it, I also used multi-document ACID transactions in a few methods where it could potentially make sense to use ACID transactions. You can check all the code in the `MongoDBPersonRepository` class. Here is an example:

```java
private static final TransactionOptions txnOptions = TransactionOptions.builder()
        .readPreference(ReadPreference.primary())
        .readConcern(ReadConcern.MAJORITY)
        .writeConcern(WriteConcern.MAJORITY)
        .build();

@Override
public List<PersonEntity> saveAll(List<PersonEntity> personEntities) {
    try (ClientSession clientSession = client.startSession()) {
        return clientSession.withTransaction(() -> {
            personEntities.forEach(p -> p.setId(new ObjectId()));
            personCollection.insertMany(clientSession, personEntities);
            return personEntities;
        }, txnOptions);
    }
}
```

As you can see, I'm using an auto-closeable try-with-resources which will automatically close the client session at the end. This helps me to keep the code clean and simple.

Some of you may argue that it is actually too simple because transactions (and write operations, in general) can throw exceptions, and I'm not handling any of them here… You are absolutely right, and this is an excellent transition to the next part of this article.
## Exception management

Transactions in MongoDB can raise exceptions for various reasons, and I don't want to go into the details too much here, but since MongoDB 3.6, any write operation that fails can be automatically retried once. And transactions are no different. See the documentation for retryWrites.

If retryable writes are disabled or if a write operation fails twice, then MongoDB will send a `MongoException` (which extends `RuntimeException`) that should be handled properly.

Luckily, Spring provides the annotation `@ExceptionHandler` to help us do that. See the code in my controller `PersonController`. Of course, you will need to adapt and enhance this in your real project, but you have the main idea here.

```java
@ExceptionHandler(RuntimeException.class)
public final ResponseEntity<Exception> handleAllExceptions(RuntimeException e) {
    logger.error("Internal server error.", e);
    return new ResponseEntity<>(e, HttpStatus.INTERNAL_SERVER_ERROR);
}
```

## Aggregation pipeline

MongoDB's aggregation pipeline is a very powerful and efficient way to run your complex queries as close as possible to your data for maximum efficiency. Using it can ease the computational load on your application. Just to give you a small example, I implemented the `/api/persons/averageAge` route to show you how I can retrieve the average age of the persons in my collection.
```java
@Override
public double getAverageAge() {
    List<Bson> pipeline = List.of(group(new BsonNull(), avg("averageAge", "$age")), project(excludeId()));
    return personCollection.aggregate(pipeline, AverageAgeDTO.class).first().averageAge();
}
```

Also, you can note here that I'm using the `personCollection` which was initially instantiated like this:

```java
private MongoCollection<PersonEntity> personCollection;

@PostConstruct
void init() {
    personCollection = client.getDatabase("test").getCollection("persons", PersonEntity.class);
}
```

Normally, my `personCollection` should encode and decode `PersonEntity` objects only, but you can overwrite the type of object your collection is manipulating to return something different — in my case, `AverageAgeDTO.class`, as I'm not expecting a `PersonEntity` here but a POJO that contains only the average age of my "persons".

## Swagger

Swagger is the tool you need to document your REST APIs. You have nothing to do — the configuration is completely automated. Just run the server and navigate to http://localhost:8080/swagger-ui.html. The interface will be waiting for you.

![The Swagger UI][1]

## Nyan Cat

Yes, there is a Nyan Cat section in this post. Nyan Cat is love, and you need some Nyan Cat in your projects. :-)

Did you know that you can replace the Spring Boot logo in the logs with pretty much anything you want? I use Nyan Cat and the "Epic" font for each project name. It's easier to identify which log file I am currently reading.

![Nyan Cat][2]

## Conclusion

I hope you like my template, and I hope it will help you be more productive with MongoDB and the Java stack.

If you see something which can be improved, please feel free to open a GitHub issue or directly submit a pull request. They are very welcome. :-)

If you are new to MongoDB Atlas, give our Quick Start post a try to get up to speed with MongoDB Atlas in no time.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt876f3404c57aa244/65388189377588ba166497b0/swaggerui.png [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf2f06ba5af19464d/65388188d31953242b0dbc6f/nyancat.png
https://www.mongodb.com/developer/code-examples/java/rest-apis-java-spring-boot
md
REST APIs with Java, Spring Boot, and MongoDB
2024-05-20T17:32:23.500Z
{ "contentType": "News & Announcements", "productName": null, "tags": [ "Swift", "MongoDB" ], "version": null }
created
devcenter
# Halting Development on MongoDB Swift Driver

MongoDB is halting development on our server-side Swift driver. We remain excited about Swift and will continue our development of our mobile Swift SDK.

We released our server-side Swift driver in 2020 as an open source project and are incredibly proud of the work that our engineering team has contributed to the Swift community over the last four years. Unfortunately, today we are announcing our decision to stop development of the MongoDB server-side Swift driver. We understand that this news may come as a disappointment to the community of current users.

There are still ways to use MongoDB with Swift:

- Use the MongoDB driver with server-side Swift applications as is
- Use the MongoDB C driver directly in your server-side Swift projects
- Use another community Swift driver, such as MongoKitten

Community members and developers are welcome to fork our existing driver and add features as you see fit; the Swift driver is under the Apache 2.0 license, and the source code is available on GitHub. For those developing client/mobile applications, MongoDB offers the Realm Swift SDK with real-time sync to MongoDB Atlas.

We would like to take this opportunity to express our heartfelt appreciation for the enthusiastic support that the Swift community has shown for MongoDB. Your loyalty and feedback have been invaluable to us throughout our journey, and we hope to resume development on the server-side Swift driver in the future.
https://www.mongodb.com/developer/languages/swift/halting-development-on-swift-driver
md
Halting Development on MongoDB Swift Driver

Overview

This dataset consists of a small subset of MongoDB's technical documentation.

Dataset Structure

The dataset consists of the following fields:

  • sourceName: The source of the document.
  • url: Link to the article.
  • action: Action taken on the article.
  • body: Content of the article in Markdown format.
  • format: Format of the content.
  • metadata: Metadata such as tags, content type etc. associated with the document.
  • title: Title of the document.
  • updated: The last updated date of the document.
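Concretely, a single record carries these fields together. Here is the shape of one record, with values drawn from one of the articles in this dataset (the body is abridged here for space):

```python
# Illustrative record shape; the "body" value is truncated.
example_doc = {
    "sourceName": "devcenter",
    "url": "https://www.mongodb.com/developer/code-examples/java/rest-apis-java-spring-boot",
    "action": "created",
    "body": "# REST APIs with Java, Spring Boot, and MongoDB\n...",
    "format": "md",
    "metadata": {
        "contentType": "Code Example",
        "productName": None,
        "tags": ["Java", "Spring"],
        "version": None,
    },
    "title": "REST APIs with Java, Spring Boot, and MongoDB",
    "updated": "2024-05-20T17:32:23.500Z",
}
```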

Usage

This dataset can be useful for prototyping RAG applications. This is a real sample of data we have used to build the MongoDB Documentation Chatbot.

Ingest Data

To experiment with this dataset using MongoDB Atlas, first create a MongoDB Atlas account.

You can then use the following script to load this dataset into your MongoDB Atlas cluster:

import os
from pymongo import MongoClient
from datasets import load_dataset
from bson import json_util


uri = os.environ.get('MONGODB_ATLAS_URI')
client = MongoClient(uri)
db_name = 'your_database_name'  # Change this to your actual database name
collection_name = 'mongodb_docs'

collection = client[db_name][collection_name]

dataset = load_dataset("MongoDB/mongodb-docs")

insert_data = []

for item in dataset['train']:
    doc = json_util.loads(json_util.dumps(item))
    insert_data.append(doc)

    if len(insert_data) == 1000:
        collection.insert_many(insert_data)
        print("1000 records ingested")
        insert_data = []

if len(insert_data) > 0:
    collection.insert_many(insert_data)
    insert_data = []

print("Data ingested successfully!")
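After the script finishes, a quick sanity check confirms the documents landed. A minimal sketch of such a check, written as a helper that works with any PyMongo-style collection object (the database and collection names below are the placeholders from the script above):

```python
def ingest_report(collection):
    """Summarize an ingested collection: document count plus one sample title."""
    count = collection.count_documents({})
    sample = collection.find_one({}, {"title": 1}) or {}
    return {"count": count, "sample_title": sample.get("title")}
```

With the client from the ingest script, you would call it as `ingest_report(client['your_database_name']['mongodb_docs'])`.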