Spaces: Running on L40S
Why!? Your image-to-video model was the best free model I've seen. Why did you stop it?! 😢
It was a very good model, but why did you close it??
Because people keep doing inappropriate stuff with it.
Then a filter should be added to it, not have it shut down!
Use AI models that block the content you don't want produced.
Some individuals are continuously generating a large volume of content, thereby misusing our resources. To curb this trend, we may need to periodically suspend the service. Additionally, please note that this service is provided for demonstration purposes only. Thanks to your interest and support, we will keep it running for a few more hours, so please feel free to use it during this time.
Best Result: https://huggingface.co/spaces/ginigen/theater
thanks a lot
If the prompting fails and produces something the user didn't ask for, it's understandable that some people will keep retrying until they get what they actually want. Is that misuse simply because it's repeated? I ask because, by definition, the service can't fully be called "demonstrated" until you get exactly the result you wanted; otherwise your demonstration just isn't fully functional. If prompting can't be made dead accurate and perfect, and probably no one can manage that, then multiple attempts aren't just normal but should be expected, right?
From the moment we launched the Dokdo demo service, I observed a small but significant number of users generating content that dramatically deviates from standard social norms. These attempts involved deeply inappropriate, non-consensual, and horrifically offensive content spanning sexual, gore, and horror domains. Some instances even extended to terrorism-related materials, such as detailed scenarios depicting assassination attempts on political leaders in ongoing conflict zones.
Through monitoring, I've identified what appears to be a single user generating thousands of videos with similar imagery and prompts. The volume and persistence of these attempts are notable. While our current prompt filtering mechanisms provide some defense, they are not comprehensive. These users actively and creatively attempt to circumvent existing filters, demonstrating a persistent intent to generate harmful content.
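To illustrate why keyword-level filtering "provides some defense" but "is not comprehensive," here is a minimal sketch of a naive pattern-based prompt filter. The patterns and function name are hypothetical, not the Space's actual implementation; filters like this are precisely what determined users circumvent with misspellings and paraphrases, which is why a trained safety classifier is usually needed on top.

```python
import re

# Hypothetical blocklist for illustration only. A real service would
# pair this with a trained safety classifier, since simple keyword
# matching is easy to evade with misspellings or paraphrases.
BLOCKED_PATTERNS = [
    r"\bgore\b",
    r"\bassassinat\w*\b",
]

def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt matches any blocked pattern."""
    text = prompt.lower()
    return not any(re.search(pattern, text) for pattern in BLOCKED_PATTERNS)
```

A prompt like "a cat dancing in the rain" passes, while one containing "assassination" is rejected; a trivially misspelled variant would slip through, which is the circumvention problem described above.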
The scale and sophistication of these attempts are so substantial that the collected data could potentially form the basis of a comprehensive academic research paper. Although no system can be perfect, we are seriously considering implementing individual user authentication as a potential mitigation strategy.
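As a sketch of the kind of per-user limit that individual authentication would enable (the class, thresholds, and user IDs here are hypothetical assumptions, not the Space's actual design), a fixed-window rate limiter keyed by user ID can cap how many generations one account can request:

```python
import time
from collections import defaultdict

class RateLimiter:
    """Fixed-window rate limiter keyed by a (hypothetical) user ID."""

    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(list)  # user_id -> request timestamps

    def allow(self, user_id: str) -> bool:
        """Record one request; return False once the window quota is used up."""
        now = time.monotonic()
        # Keep only timestamps still inside the current window.
        recent = [t for t in self.hits[user_id] if now - t < self.window]
        self.hits[user_id] = recent
        if len(recent) >= self.max_requests:
            return False
        self.hits[user_id].append(now)
        return True
```

With, say, `RateLimiter(2, 60.0)`, a user's first two requests in a minute are allowed and the third is refused, while other users remain unaffected; this only works if requests can actually be attributed to individual users, which is what authentication provides.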
Despite these challenges, our commitment remains clear: we must continue to progress, refine our systems, and maintain a safe, responsible platform.
I think we can all agree that individuals shouldn't be permitted to abuse the intended volume of generation. Please consider, however, that generating content that "deviates from standard social norms" is highly subjective. If the content is against the law where the models are trained or hosted, that's probably worth blocking, but nobody worth listening to is going to blame the people who trained the model for not censoring legal content, extreme as some of it might seem.