---
license: other
language:
- code
- en
task_categories:
- question-answering
- text-generation
- text2text-generation
tags:
- code
viewer: true
pretty_name: StackOverflow Posts Markdown
size_categories:
- 10M<n<100M
---

# StackOverflow Posts Markdown

![StackOverflow Logo](https://stackoverflow.design/assets/img/logos/so/logo-stackoverflow.png)

## Dataset Summary

This dataset contains all posts submitted to StackOverflow before the 14th of June 2023, formatted as **Markdown text**.
The dataset contains over 60 million posts, totaling ~40 GB in size and ~65 billion characters of text.
The data is sourced from the [Internet Archive StackExchange Data Dump](https://archive.org/download/stackexchange).

## Data Fields

```typescript
{
    Id: long,
    PostTypeId: long,
    AcceptedAnswerId: long | null,
    ParentId: long | null,
    Score: long,
    ViewCount: long | null,
    Body: string | null,
    Title: string | null,
    ContentLicense: string | null,
    FavoriteCount: long | null,
    CreationDate: string | null,
    LastActivityDate: string | null,
    LastEditDate: string | null,
    LastEditorUserId: long | null,
    OwnerUserId: long | null,
    Tags: array | null
}
```

## How to use?

```python
from datasets import load_dataset

# pre-download the full dataset
ds = load_dataset('mikex86/stackoverflow-posts', split='train')

# dataset streaming (will only download the data as needed)
ds = load_dataset('mikex86/stackoverflow-posts', split='train', streaming=True)

for sample in iter(ds):
    print(sample["Body"])
```

## How is the text stored?

The original Data Dump formats the `Body` field as HTML, using tags such as `<p>`, `<pre>`, and `<code>`; this dataset converts that HTML to Markdown.
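To illustrate the kind of HTML-to-Markdown conversion described above, here is a minimal sketch using only the Python standard library's `html.parser`. It is not the tool used to build this dataset (the card does not name one) and handles only a few tags (`<p>`, `<code>`, `<pre>`, `<li>`):

```python
from html.parser import HTMLParser


class SimpleMarkdownConverter(HTMLParser):
    """Naive HTML -> Markdown sketch: handles <p>, <code>, <pre>, <li> only."""

    def __init__(self):
        super().__init__()
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag == "code":
            self.out.append("`")       # inline code span
        elif tag == "pre":
            self.out.append("\n```\n")  # fenced code block
        elif tag == "li":
            self.out.append("- ")       # bullet list item

    def handle_endtag(self, tag):
        if tag == "code":
            self.out.append("`")
        elif tag == "pre":
            self.out.append("\n```\n")
        elif tag in ("p", "li"):
            self.out.append("\n")

    def handle_data(self, data):
        self.out.append(data)


def html_to_markdown(html: str) -> str:
    conv = SimpleMarkdownConverter()
    conv.feed(html)
    return "".join(conv.out)


print(html_to_markdown("<p>Use <code>print()</code> to debug.</p>"))
```

A real converter also has to deal with `<code>` nested inside `<pre>`, entity escaping, tables, and links, which this sketch deliberately ignores.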
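The fields in the schema under *Data Fields* relate posts to each other: in the StackExchange data model, `PostTypeId` 1 is a question and 2 is an answer, and an answer's `ParentId` points at its question's `Id`. The records below are hypothetical samples in that shape, used to sketch how one might rebuild question/answer threads:

```python
from collections import defaultdict

# Hypothetical sample records following the dataset schema
# (PostTypeId: 1 = question, 2 = answer).
posts = [
    {"Id": 1, "PostTypeId": 1, "ParentId": None, "AcceptedAnswerId": 3,
     "Title": "How do I reverse a list?", "Score": 10},
    {"Id": 3, "PostTypeId": 2, "ParentId": 1, "AcceptedAnswerId": None,
     "Title": None, "Score": 12},
    {"Id": 4, "PostTypeId": 2, "ParentId": 1, "AcceptedAnswerId": None,
     "Title": None, "Score": 2},
]


def group_answers(posts):
    """Map each question Id to its answers, sorted by Score descending."""
    answers = defaultdict(list)
    for p in posts:
        if p["PostTypeId"] == 2:
            answers[p["ParentId"]].append(p)
    for lst in answers.values():
        lst.sort(key=lambda a: a["Score"], reverse=True)
    return dict(answers)


threads = group_answers(posts)
print([a["Id"] for a in threads[1]])  # answers to question 1, best first
```

The same grouping works on the streamed dataset, one record at a time, since each answer carries its question's `Id` in `ParentId`.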