We (the Middlebury Institute's CTEC) are an extremism and terrorism research lab, and so we're tracking the ways that tech is used by terrorists and extremists.
For a lot of nonstate orgs with sophisticated propaganda arms, an ideologically cohesive text generation capability would be a huge advantage in scaling up info ops. We are looking to measure whether or not GPT-2 or other neural text generators are useful for this, or if that risk is, as you say, nonsense.
I think their point isn't that terrorists leveraging this tech is not a problem. It certainly is a problem. But the greater problem is a few large entities being the only ones who have access to or control over it.
I think it's pretty clear that terrorists or any other bad actor will find great value and utility in this tech. The article from OpenAI says 'Humans can be convinced by synthetic text,' and research at Cornell found people find it almost as convincing as New York Times articles. I would be interested in learning about the methods you guys are using to determine this. I wonder how that could be measured?
So let's assume the answer is 'YES! This technology is dangerous.' The Middlebury program, Cornell, and more and more universities and research groups find the same thing. Then what will the recommendations be? Certainly not to release it into the wild. I think they will be to keep it locked up. To keep it in the hands of a few large and powerful companies, with the resources to 'manage' such a thing.
This seems to be what the original comment is trying to illustrate, and I think it's an interesting point whose long-term implications are worth considering. The tech exists now. There is no going back. So is it worse to let it out of the box, or to let but a few have control over it?
In spite of all that we're studying wrt abuse potential, I (and my team) generally support open-sourcing tech, and I hope that we can contribute not to "oh this is dangerous, don't release" but rather to "oh this is dangerous, it's already released, what are we going to do now?"
Great, keep up the good work! Are you able to discuss how studies like yours work? Is it along the lines of determining whether people can distinguish between human-written and AI-generated text? Sounds like a difficult question to answer.
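For what it's worth, one common shape for such a study is a discrimination task: show raters a passage, ask "human or machine?", and test whether their accuracy beats chance. A minimal sketch of the analysis, with entirely invented numbers (the 58/100 figure below is hypothetical, not from any of the studies mentioned):

```python
# Hypothetical sketch: did raters beat chance at spotting machine text?
# Uses an exact one-sided binomial test, stdlib only.
from math import comb

def binomial_p_value(correct, trials, p=0.5):
    """Probability of seeing >= `correct` successes by pure guessing."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(correct, trials + 1))

# Suppose 100 raters each label one passage, and 58 guess correctly.
p = binomial_p_value(58, 100)
print(f"accuracy = 58/100, one-sided p = {p:.3f}")
```

With 58 correct out of 100, the p-value sits around 0.07, i.e. not clearly better than coin-flipping, which is part of why these studies need large samples to say anything firm.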
I suspect they will release the full model in time. It's already trending in that direction.
Do you not consider the scenario described in your comment's parent to be worse than any terrorist scenario?
Clearly. I also think that the pain of the centralization of tech like this will be felt in the scope of years, while the increase in the automation of propaganda and radicalization will be felt in the coming months.
Like I replied to the other poster, I strongly support open-sourcing tech. Centralization of tech like this helps exacerbate the problem: state and sophisticated nonstate groups have the resources to develop it indigenously, while the public can't dig into it and start developing a set of norms and best practices to approach detection and mitigation.