QianT/autotrain-auto_train-38325101316
Translation
Error code: DatasetGenerationError
Exception: ArrowNotImplementedError
Message: Cannot write struct type '_format_kwargs' with no child field to Parquet. Consider adding a dummy child field.
Traceback:
Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
    writer.write_table(table)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 583, in write_table
    self._build_writer(inferred_schema=pa_table.schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 404, in _build_writer
    self.pa_writer = self._WRITER_CLASS(self.stream, schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
    self.writer = _parquet.ParquetWriter(
  File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
  File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Cannot write struct type '_format_kwargs' with no child field to Parquet. Consider adding a dummy child field.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2027, in _prepare_split_single
    num_examples, num_bytes = writer.finalize()
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 602, in finalize
    self._build_writer(self.schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 404, in _build_writer
    self.pa_writer = self._WRITER_CLASS(self.stream, schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
    self.writer = _parquet.ParquetWriter(
  File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
  File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Cannot write struct type '_format_kwargs' with no child field to Parquet. Consider adding a dummy child field.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1529, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1154, in convert_to_parquet
    builder.download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
    self._download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2038, in _prepare_split_single
    raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
Need help to make the dataset viewer work? Make sure to review how to configure the dataset viewer, and open a discussion for direct support.
_data_files (list) | _fingerprint (string) | _format_columns (sequence) | _format_kwargs (dict) | _format_type (null) | _indexes (dict) | _output_all_columns (bool) | _split (null) |
---|---|---|---|---|---|---|---|
[{"filename": "dataset.arrow"}] | b3ab6d0ab7825d0a | ["source", "target"] | {} | null | {} | false | null |
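These columns are the fields of the state.json file that the datasets library writes next to dataset.arrow when save_to_disk() is used; the empty _format_kwargs dict is stored as a struct with no child fields, which is exactly what the Parquet conversion above rejects. One possible workaround is to re-upload the data with push_to_hub(), which writes Parquet shards the viewer can read directly. A minimal sketch, assuming the original save_to_disk() output is still available locally (the local path below is illustrative):

```python
from datasets import load_from_disk

# Load the raw save_to_disk() output (hypothetical local path) and clear any
# stored formatting so no empty _format_kwargs is carried along.
ds = load_from_disk("./auto_train")
ds = ds.with_format(None)

# push_to_hub() uploads Parquet shards, so the viewer no longer has to convert
# the Arrow/state.json layout itself.
ds.push_to_hub("QianT/autotrain-auto_train-38325101316")
```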
This dataset has been automatically processed by AutoTrain for project auto_train.
The BCP-47 code recorded for the dataset's language is unk (unknown).
A sample from this dataset looks as follows:
[
{
"source": "\u79fb\u5c45\u9999\u6e2f\u5f8c\uff0c\u60a8\u53ef\u4ee5\u524d\u5f80\u6211\u5011\u7684\u5176\u4e2d\u4e00\u5bb6\u5206\u884c\u7533\u8acb\u4fe1\u7528\u5361\u2014\u2014\u5728\u9019\u88e1\u627e\u5230\u60a8\u6700\u65b9\u4fbf\u7684\u5730\u9ede\u3002",
"target": "After you move to Hong Kong you can apply for a Credit Card by visiting one of our branches \u2013 find your most convenient location here."
},
{
"source": "\u79fb\u5c45\u9999\u6e2f\u5f8c\uff0c\u60a8\u53ef\u4ee5\u524d\u5f80\u6211\u5011\u7684\u5176\u4e2d\u4e00\u5bb6\u5206\u884c\u7533\u8acb\u5132\u84c4/\u652f\u7968\u8cec\u6236\u2014\u2014\u5728\u9019\u88e1\u627e\u5230\u60a8\u6700\u65b9\u4fbf\u7684\u5730\u9ede\u3002\u5982\u679c\u60a8\u9858\u610f\uff0c\u6211\u5011\u53ef\u4ee5\u5728\u60a8\u62b5\u9054\u5f8c\u70ba\u60a8\u5b89\u6392\u5728\u60a8\u9078\u64c7\u7684\u5206\u884c\u7684\u9810\u7d04\u8a0e\u8ad6\u60a8\u7684\u9280\u884c\u548c\u8ca1\u5bcc\u7ba1\u7406\u9700\u6c42\u3002\u8981\u5b89\u6392\u7d04\u6703\uff0c\u8acb\u806f\u7e6b\u60a8\u7576\u5730\u7684\u82b1\u65d7\u9280\u884c\u4ee3\u8868\u3002",
"target": "After you move to Hong Kong you can apply for a Savings / Checking Account by visiting one of our branches \u2013 find your most convenient location here.If you wish so, we can schedule an appointment for you in a Branch of your choice upon your arrival to discuss your banking and wealth management needs. To schedule an appointment, contact your local Citibank representative."
}
]
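Because the repository appears to hold a raw save_to_disk() layout rather than Parquet files, load_dataset() may not work out of the box; downloading the snapshot and opening it with load_from_disk is one way to reproduce a sample like the above. A minimal sketch, assuming the Arrow files sit at the repository root in the layout load_from_disk expects:

```python
from huggingface_hub import snapshot_download
from datasets import load_from_disk

# Download the dataset repository as-is and load the saved Arrow data.
local_dir = snapshot_download("QianT/autotrain-auto_train-38325101316", repo_type="dataset")
ds = load_from_disk(local_dir)

# Print the first source/target pair (assumes a DatasetDict with a "train" split).
print(ds["train"][0])
```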
The dataset has the following fields (also called "features"):
{
"source": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
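Both fields are plain strings, so the schema can be reproduced directly with datasets.Features. A minimal sketch of declaring the same feature types for a comparable source/target dataset (the example rows are made up):

```python
from datasets import Dataset, Features, Value

# The same schema as above: two string columns, source and target.
features = Features({"source": Value("string"), "target": Value("string")})

ds = Dataset.from_dict(
    {"source": ["你好，世界"], "target": ["Hello, world"]},
    features=features,
)
print(ds.features)  # {'source': Value(dtype='string', id=None), 'target': Value(dtype='string', id=None)}
```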
This dataset is split into train and validation splits. The split sizes are as follows:
Split name | Num samples |
---|---|
train | 332 |
valid | 83 |
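The 332/83 split is an exact 80/20 partition of the 415 pairs. A minimal sketch of producing an equivalent split with train_test_split (the seed and placeholder rows are arbitrary):

```python
from datasets import Dataset

# 415 placeholder pairs; an 83-example test set leaves 332 for training (80/20).
full = Dataset.from_dict({"source": ["src"] * 415, "target": ["tgt"] * 415})
splits = full.train_test_split(test_size=83, seed=42)
print(len(splits["train"]), len(splits["test"]))  # 332 83
```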