---
dataset_info:
  features:
  - name: original_image
    dtype: image
  - name: prompt
    dtype: string
  - name: transformed_image
    dtype: image
  splits:
  - name: train
    num_bytes: 604990210.0
    num_examples: 994
  download_size: 604849707
  dataset_size: 604990210.0
---

# Canny DiffusionDB

This dataset is the [DiffusionDB dataset](https://huggingface.co/datasets/poloclub/diffusiondb) transformed with the Canny edge detector. You can see a sample below 👇

**Sample:**

Original Image:

![image](/static-proxy?url=https%3A%2F%2Fdatasets-server.huggingface.co%2Fassets%2Fmerve%2Fcanny_diffusiondb%2F--%2Fmerve--canny_diffusiondb%2Ftrain%2F0%2Foriginal_image%2Fimage.jpg)

Transformed Image:

![image](/static-proxy?url=https%3A%2F%2Fdatasets-server.huggingface.co%2Fassets%2Fmerve%2Fcanny_diffusiondb%2F--%2Fmerve--canny_diffusiondb%2Ftrain%2F0%2Ftransformed_image%2Fimage.jpg)

Caption: "a small wheat field beside a forest, studio lighting, golden ratio, details, masterpiece, fine art, intricate, decadent, ornate, highly detailed, digital painting, octane render, ray tracing reflections, 8 k, featured, by claude monet and vincent van gogh"

Below you can find the small script used to create this dataset:

```python
import cv2
import numpy as np
from PIL import Image
from datasets import load_dataset


def canny_convert(image):
    # Convert the PIL image to grayscale and extract Canny edges
    image_array = np.array(image)
    gray_image = cv2.cvtColor(image_array, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray_image, 100, 200)
    edge_image = Image.fromarray(edges)
    return edge_image


dataset = load_dataset("poloclub/diffusiondb", split="train")

dataset_list = []
for data in dataset:
    original_image = data["image"]
    prompt = data["prompt"]
    transformed_image = canny_convert(original_image)
    new_data = {
        "original_image": original_image,
        "prompt": prompt,
        "transformed_image": transformed_image,
    }
    dataset_list.append(new_data)
```
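
For reference, here is a minimal sketch of how to load and inspect the dataset, assuming the repository id `merve/canny_diffusiondb` (as it appears in the image URLs above) and the column names from the dataset info:

```python
from datasets import load_dataset

# Load the Canny-transformed dataset from the Hub
dataset = load_dataset("merve/canny_diffusiondb", split="train")

# Each row holds the original image, its prompt, and the Canny edge map
sample = dataset[0]
print(sample["prompt"])
original = sample["original_image"]        # PIL image
edge_map = sample["transformed_image"]     # PIL image with Canny edges
```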