RubyRidgeRandy 2 years ago

Something I've wondered lately is what life will be like in a post-truth society. We already see examples of this now, where a large number of people get their news from fake memes on Facebook. There are huge swathes of people who live in their own make-believe world, like those believing wholeheartedly that the 2020 election was stolen.

What will life be like when you can't trust any video or interview you see because it could be completely fake? How long before someone uses this technology to frame someone for a crime? Could the FBI create a deepfake of a cartel leader meeting with them and leak it so the cartel thinks he's a snitch?

I don't think we'll have the ability to handle this kind of tech responsibly.

  • fxtentacle 2 years ago

    I believe we'll go back to trusting local experts that you can meet in person to confirm that they are not a bot.

    Because anything online will be known to be untrustworthy. Most blogs, chat groups and social media posts will be spam bots. And it'll be impossible for the average person to tell the difference between chatting with a bot and chatting with a human. But humans crave social connections and intimate physical contact. So people will get used to the fact that whoever you meet online is likely fake and so they'll start meeting people in the real world again.

    I also predict that some advanced AIs will be classified as drugs, because people get so hooked on them that it destroys their life. We've already banned abusive loot box gambling mechanics in some EU countries, and I think abusive AI systems are next. We'll probably also age-limit generative AI models like DALL-E, due to their ability to generate naughty and/or disturbing images.

    But overall, I believe we will just start treating everything online as fake, except in the rare case that you message a person whom you have previously met in real life (to confirm their human-ness).

    • fartcannon 2 years ago

      I want to agree with you, deeply, but the number of people who fall for simple PR/advertising in today's world suggests otherwise.

      I think we'd have a chance if they taught PR tricks in schools starting at a young age. Or at minimum, if websites that aggregate news would identify sources that financially benefit from you believing what they're saying.

      • corrral 2 years ago

        I've long thought that high school should require at least one course that I like to call "defense against the dark arts" (kids still dig Harry Potter, right? Hahaha).

        The curriculum would mostly be reasoning, how to spot people lying with graphs and statistics, some rhetoric, and extensive coverage of Cialdini's Influence. The entire focus would be studying, and then learning to spot and resist, tricks, liars, and scam artists.

        • weaksauce 2 years ago

          That's a good thing to teach, but I do think there are a large number of people out there who just don't have the capacity for it. By virtue of being on this forum you are likely in or near the top quartile of the population in terms of intelligence, for whatever good that metric is. There is a cognitive bias where everyone assumes most people are more or less the same as they are (I think it's the false consensus bias), and for me, a pretty skeptical person, it's tough to view the world through the lens of someone less skeptical.

          > In the US, 14% of the adult population is at the "below basic" level for prose literacy; 12% are at the "below basic" level for document literacy, and 22% are at that level for quantitative literacy. Only 13% of the population is proficient in each of these three areas—able to compare viewpoints in two editorials; interpret a table about blood pressure, age, and physical activity; or compute and compare the cost per ounce of food items.

          Maybe teaching those skills would increase that 13% but I am not sure by how much.

          • creakingstairs 2 years ago

            > > In the US, 14% of the adult population is at the "below basic" level for prose literacy; 12% are at the "below basic" level for document literacy, and 22% are at that level for quantitative literacy. Only 13% of the population is proficient in each of these three areas—able to compare viewpoints in two editorials; interpret a table about blood pressure, age, and physical activity; or compute and compare the cost per ounce of food items.

            > Maybe teaching those skills would increase that 13% but I am not sure by how much.

            It's a hard battle against the status quo and bureaucratic institutions, but I still think it's possible to reduce that by a lot. I'm willing to bet that a lot of those people are below basic because they weren't given a chance to succeed due to child poverty, schools playing the numbers game[1], and various other factors. We don't even have to add new curriculums. Just by getting the "basics" right, we can lift those numbers up.

            [1] https://www.youtube.com/watch?v=4Uonc7BEZ4g

          • fartcannon 2 years ago

            A lot of the messages that are valuable exist as regular idioms that are pretty simple. 'Don't believe everything you read,' for example, is only a few ideas away from 'the government, media and corporations are manipulating you for their benefit.'

            One is probably generally considered good advice, the other is likely dismissed as conspiracy theory nonsense.

            We probably need a few more simple ideas like that for modern times. "Facebook profits most when they make you depressed." "TikTok is exploiting your sex drive to influence your personal politics." "Google is telling people what you masturbate to, for money."

            They'd have to be less direct of course. I guess one modern one is, "if it's free, you're the product". But that tragically overlooks the generosity of the FOSS community so I'm not a fan.

    • jayd16 2 years ago

      When you say "everything online" do you mean every untrusted source? Surely the genie is out of the bottle on communication over the web. That local source will have a website. Because of that I feel like we'll always just have to be vigilant, just like we always should have been. After all, local scams still exist. Real humans are behind the bots.

      • fxtentacle 2 years ago

        > Real humans are behind the bots.

        Yes, but those humans are usually anonymous and on the other side of the planet which makes them feel safe. And that allows them to be evil without repercussions.

        Back in the day, I went to LAN parties. If someone spotted a cheater, they would gang up with their friends and literally throw the offender out of the building. That was a pretty reliable deterrent. But now, with all games being played online, cheating is rampant.

        Similarly, imagine if those Indian call centers that scam old ladies out of their life savings were located just a quick drive away from their victims' families. I'm pretty sure there would be enough painful family visits that nobody would want to work there.

        Accordingly, I'm pretty sure the local expert would have strong incentives to behave better than an anonymous online expert would.

        • jayd16 2 years ago

          To argue that scams didn't exist or weren't a problem before the internet is pretty indefensible, no matter the anecdotes.

          • fxtentacle 2 years ago

            I was merely trying to argue that scams within a local community would be less severe than scams between strangers, because they are easier to punish and/or deter.

    • userabchn 2 years ago

      I suspect that many chat groups (such as Facebook groups), even small niche ones, already have GPT-3-like bots posting messages that seem to fit into the group but that are trained to provide opinions on certain topics that align with the message that the organisation/country controlling them wishes to push, or to nudge conversations in that direction.

      • fxtentacle 2 years ago

        Aww, that reminds me of the good old IRC days, where everyone would start their visit with !l to get an XDCC bot listing.

    • newswasboring 2 years ago

      Your second paragraph is very intriguing. I never really thought about this. I wonder if people will actually be able to restrict usage, though. It's software, and historically it has been hard to restrict. Of course, cloud-based systems have two advantages: the software is hidden behind an API, and they run on really powerful systems. But the former requires only a single lapse in security to leak, and the latter just requires time until consumer hardware catches up. If I use past data to predict the future (which might be a bad idea in this case), it might be almost impossible to restrict AI software.

      • formerkrogemp 2 years ago

        I've heard this for years, but software will eventually face its own regulation and barriers to entry much as healthcare and accounting have theirs.

    • wongarsu 2 years ago

      I'm not sure the experts have to be local. I can't be sure that a random twitter account isn't a bot, but I can be pretty sure that tweets from @nasa are reasonably trustworthy. People will form webs-of-trust: they trust one source, the people viewed as trustworthy by them, etc. Anyone outside of that will be untrustworthy.

      That's not too dissimilar from what we do today; after all, people have always been able to lie. The problem is just that if you start trusting one wrong person, this quickly sucks you into a world of misinformation.

      I find your point about regulating AI interesting. We already see some of this, with good recommendation systems being harmful to vulnerable people (and to a lesser degree most of us). This will probably explode once we get chatbots that can provide a strong personal connection, replacing real human relationships for people.

      • freetinker 2 years ago

        HN is a good example (or precursor) of webs-of-trust. Nice phrase.

    • wussboy 2 years ago

      This is the outcome I see as well, and I think it is a good thing. Every form of communication beyond physical face to face will be completely untrustworthy. It will affect banking. Remote work. Dating. Clubs. Everything.

    • dools 2 years ago

      So you’re predicting a return to Stone Age tribalism

      • wussboy 2 years ago

        Huh? How does that follow? Also, I’m not entirely sure we ever left Stone Age tribalism…

  • narrator 2 years ago

    Surprise Plot Twist: Maybe we're already living in a post-truth society and you are still sure you know what the truth is. How would you even know that what you were ferociously defending as the truth wasn't a lie? What makes you think you're smart enough not to fall for lies?

    Largely, I think most people's means of finding the truth is just to take a vote of the information sources they find credible and go with whatever they say. I was talking with some friends about the California propositions a while back. Some of them were not clear-cut in which way we should vote. Instead of discussing the actual issue, people just wanted to know what various authority figures thought. These were not dumb people I was talking to, and I remember an era, in the 90s maybe, where you could actually have a reasoned debate and come to the truth that way. It seems that's obsolete these days, since nobody seems to agree on the basic facts about anything.

    • cwkoss 2 years ago

      Disinformation is very common in traditional news media. This technology just democratizes this tool and allows anyone to engage in it.

      There will probably be a net increase in disinformation, but citizens will likely also get better at being skeptical of currently unquestioned modes of disinformation.

      • corrral 2 years ago

        > There will probably be a net increase in disinformation, but citizens will likely also get better at being skeptical of currently unquestioned modes of disinformation.

        Russia seems to be farther along this path than we are, and every account I've read of their experience of disinfo is not that they got better at seeking the truth, but that they instead just assume everything's a lie and nothing's trustworthy, and disappear into apathy.

        • narrator 2 years ago

          Implementing dropout to avoid overfitting on bad data.

  • VanillaCafe 2 years ago

    The real problem isn't the veracity of the information, but the consensus protocol we use to agree on what's true. Before the internet, we were more likely to debate with our neighbors to come to an understanding. Now, with the large bubbles we can find ourselves in, afforded by the internet and social media, we can find a community to agree on anything, true or not. It's that lack of challenge that allows false information to flourish, and that is the real problem we need to solve.

    • whimsicalism 2 years ago

      I would be curious if false information is actually more common now. It seems like people regularly believed all sorts of false things not too long ago.

  • adhesive_wombat 2 years ago

    A skeptic (i.e. someone who cares to verify) not being able to trust media because it might be fake is only a minor problem as long as you have at least one trusted channel.

    The president, say, can just release the statement on that channel and it can be verified there (including cryptographically, say by signing the file or even using HTTPS).

    If you lose that channel, then you're pretty much screwed because you'll never know which one is the real president. But there are physical access controls on some channels, say the Emergency Alert System, which can be used to bootstrap a trust chain.
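
    A minimal sketch of what that signing and verification could look like, using Ed25519 via Python's cryptography package (the key handling and message here are made up for illustration):

        from cryptography.hazmat.primitives.asymmetric import ed25519

        # The trusted channel publishes its public key once, out of band
        # (pinned in browsers, printed in newspapers of record, etc.).
        channel_key = ed25519.Ed25519PrivateKey.generate()
        public_key = channel_key.public_key()

        statement = b"Official statement from the president: ..."
        signature = channel_key.sign(statement)

        # Anyone holding the public key can check authenticity; verify()
        # raises InvalidSignature if statement or signature was altered.
        public_key.verify(signature, statement)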

    The much likelier problem is that someone who won't check the veracity of the message will simply take it at face value. This is your news-via-Facebook crowd.

    At that point, it's less a technical issue than that people simply don't want to know the truth. No amount of fact-checking and secure tamper-proofing of information chains of custody will help that.

    • toss1 2 years ago

      Agree, and it's even worse than that.

      An incredibly small minority of people even understand your phrase with any actual fidelity and depth of meaning:

      >>it can be verified there (including cryptographically, say by signing the file or even using HTTPS)

      Even fewer of that microscopic minority have, and understand how to use, the tools required to verify the video cryptographically, AND even fewer know how to fully validate that the tools themselves are valid (e.g., not compromised by a bogus cert).

      Worse yet, even in the good case where everyone is properly skeptical, and 90+% of us figure out that no source is trustworthy, the criminals have won.

      The goal of disinformation is not to get people to believe your lie (although the few useful idiots who do may be a nice bonus).

      The goal of disinformation is to get people to give up on even seeking the truth - to just give up and say "we can't know who's right or what's real" — that is the opening that authoritarians need to take over governments and end democracies.

      So yes, this is next-level-screwed material.

      • adhesive_wombat 2 years ago

        > AND even fewer know how to fully validate that the tools themselves are valid (e.g., not compromised by a bogus cert).

        Kind of, but once you have a single verifiable channel back to the source (in this case, some statement by the president), it's now possible for anyone to construct a web of trust that leads back to that source. For example, multiple trustworthy independent outlets reporting on the same statement in the same way, providing some way to locate the original source. This is why news articles that do not link to (on the web) or otherwise unambiguously identify a source wind me up. "Scientists say" is a common one. It's so hard to find the original source from such things.

        This falls over in two ways. The first: sources become non-independent and/or non-trustworthy as an ensemble. Then you can't use them as an information proxy. This is what is often claimed about "the mainstream media" and the "non-mainstream media" by the adherents of the other. All the fact checks in the world are worthless if they are immediately written off by those they are aimed at as lies-from-the-system.

        The second way is that people simply do not care. It was said, it sounds plausible, and they want to believe it.

        So I would say that actually the risks here are social, not technological. Granted, perhaps a deepfaked video might convince more people than a Photoshopped photo. But the core issue isn't the quality of the fake; it's that a significant number of people simply wouldn't care if it were fake.

        Doesn't mean we're not screwed, just not specifically and proximally because of falsification technology. That's the accelerant, but not the fuel.

        • toss1 2 years ago

          >>That's the accelerant, but not the fuel.

          Yes, indeed! Which is why I'm having so much trouble with ppl proposing technological solutions - technically they might solve the problem in some situations, but the bigger problem is indeed some combination of general confusion, a highly adversarial information environment laden with disinformation, and people's all-too-frequent love of confirmation bias, willingly believing BS and overlooking warning signs.

          I hope we can sort it...

  • GuB-42 2 years ago

    There has never been a truth society.

    This tech will certainly be used to frame someone for a crime, just as I am sure Photoshop has been, along with thousands of other techniques. And modern technology offers counters. It is an arms race, but because of the sheer amount of data that is collected, I think that truth is more accessible than ever. The more data you have, the harder it is to fake and keep consistent.

    • jl6 2 years ago

      I don’t know, it seems like the existence of widespread, easy photo/video/audio faking technology could be a really strong argument for dismissing any purported photo/video/audio evidence.

      Wouldn’t it be funny if deepfakes destroyed the blackmail industry?

      • mazlix 2 years ago

        I don't know much about the blackmail industry, but I would imagine the reverse would be more likely.

        I would think blackmail works best when it's about things which are actually true. If someone wanted to blackmail me by sharing a fake photo of me cheating on my spouse, I wouldn't cave for anything, because I know I haven't, so I feel confident I have enough other evidence to fight that claim: my location data, who the "other woman" is, etc.

        On the flip side, if I had committed some crime and someone made a deepfake of me doing that crime, but they didn't actually have a legitimate photo, I might cave at that point, since I would think it's a genuine photo and presumably they know even more about the incident.

  • thedorkknight 2 years ago

    I don't think it'll be all that different from how it has been for most of human history. We really only had a brief blip of having video, which was generally trustable, but keep in mind that before that, for thousands of years, it was just as hard to know the truth.

    Someone told you stuff about the outside world, and you either had the skepticism to take it with a grain of salt, or you didn't.

  • idontwantthis 2 years ago

    This doesn’t bother me that much because evidence isn’t required to convince millions of people that a lie is true. We already know this. Why make fake evidence that could be debunked when you can just have no evidence instead?

    • fleetwoodsnack 2 years ago

      Different instruments can be used to capture different segments of the population. You're right that there are gullible people who are more likely to believe things with limited or no evidence. But it isn't necessarily about the most impressionable people, nor is it about instilling sincerely held beliefs.

      Instead, what may be a cause for concern is simply the instilling of doubt in an otherwise reasonable person because of the perceived validity of contrived evidence. Not so much that it becomes a sincerely held belief, but just enough that it paralyzes action and encourages indifference due to ambiguity.

      • idontwantthis 2 years ago

        Think about how few people believe in Bigfoot when video, photographs, footprints, and eyewitness testimony all exist.

        Think about how many people believe in Jesus without any of that physical evidence.

        If anything, the physical evidence turns most people off. And I'd argue that most Bigfooters don't even believe in the physical evidence, but use it as a tool to hopelessly attempt to convince other people to believe in what they already believe is true.

        • mazlix 2 years ago

          Completely agree.

          For some reason, many people react to learning about deepfakes' potential with huge concern, as if photos and video used to be infallible and that's suddenly being overturned, when this really hasn't been the case.

      • mgkimsal 2 years ago

        > You’re right there are gullible people who are more likely to believe things with limited or no evidence

        Often the lack of evidence is the proof of whatever is being peddled. "No evidence for $foo? OF COURSE NOT! Because 'they' scrubbed it so you wouldn't be any wiser! But I have the 'truth' here... just sign up for my newsletter..."

      • astrange 2 years ago

        This discussion isn't useful because you're assuming people actually care if something is true before they "believe" it, which they don't, so they don't need evidence. "Believing" doesn't even mean people actually hold beliefs. It means they're willing to agree with something in public, and that's just tribal affiliation.

        • fleetwoodsnack 2 years ago

          >you're assuming people actually care if something is true before they "believe" it, which they don't

          This seems like an assumption too. I know there are instances like you’ve described but they’re not absolute nor universal and I accounted for that in my original comment.

  • redox99 2 years ago

    People with some degree of knowledge already know that any photo could be photoshopped. People that don't care will blindly trust a picture of someone with a quote or caption saying whatever, as long as it fits their narrative.

    This has been the case for photos for almost 2 decades. The fact that you can now do it with video or audio doesn't change that much IMO.

    • hutrdvnj 2 years ago

      I think it does, because while you obviously haven't been able to trust images for two decades or so, you could resort to video, which wasn't easy to believably deepfake until recently. But if everything online could be a deepfake, how can you find out the truth?

      • whimsicalism 2 years ago

        Videos can be faked too, it is just cheaper now.

        • makapuf 2 years ago

          It's called special FX, and it's more than a century old. (People are now aware it's fake, but the story goes that the train arriving in the La Ciotat film made people run out of the movie theatre.)

    • micromacrofoot 2 years ago

      Photos have been altered for much longer than 2 decades. Think of airbrushing models in magazines (used to be literal airbrushes painting over photos). This has had a serious impact on our perception of beauty and reality.

  • wildmanx 2 years ago

    > I don't think we'll have the ability to handle this kind of tech responsibly.

    It also makes you think about whether anybody from the Hacker News crowd working on any contributing tech is acting ethically. For myself, I have answered this question with "no", which rules out many jobs for me, but at least my kids won't eventually look me in the eye and ask "how could you?"

    Sure, it's cool tech. But so was what eventually brought us nuclear warheads.

  • gernb 2 years ago

    I was listening to "This American Life" and they had a segment on someone who set up a site that gives you a random number to call in Russia, where you were supposed to give them info about what's happening in Ukraine. It was somewhat shocking to hear their side of the story: that Russia is a hero for helping oppressed Russians in Ukraine.

    But then I stepped back and wondered, I'm assuming that the story I've been told is also 100% correct. What proof do I have it is? I get my news from sources who've been wrong before or who have a record of reporting only the official line. My gut still tells me the story I'm being told in the west is correct, but still, the bigger picture is how do I know who to trust?

    I see this all over the news. I think/assume the news I get about Ukraine in the west is correct but then I see so much spinning on every other topic that it's hard to know how much spinning is going on here too.

    • RobertRoberts 2 years ago

      I was asked "What are we going to do about Ukraine!?" And I said, "It's a civil war that's been going on for almost 10 years, what is different now?" and their response was "what? I'd never heard that." And I added, "In 2014 there was an overthrow of an elected president there and it started the war." Blank stares.

      I have a friend who traveled to Europe regularly for tech training, including to Ukraine, and he was surprised at how little people know about what is going on, because their news sources are so limited (mostly by choice, I assume).

      No special tech needed to manipulate people, just a lack of multiple information sources?

      • HyperSane 2 years ago

        The president that lost power in 2014, Viktor Yanukovych, was a Russian puppet who refused to sign the European Union–Ukraine Association Agreement in favor of closer ties to Russia. The Ukrainian parliament voted 328 to 0 to remove him from office. He then fled to Russia.

        • synu 2 years ago

          It’s hard to fathom believing there’s nothing new or relevant happening with the 2022 invasion, or why if there was a lead-in to the conflict that would be on its own a reason to conclude that there’s nothing to be done now.

        • RobertRoberts 2 years ago

          See, this is the problem. While I follow plenty of international news, I didn't know this.

          There is oftentimes just too much to know to fully understand a situation. So how can anyone form a valid opinion?

          As a follow-up, was the election of Viktor Yanukovych lawful? If not, then why not point out that he was a puppet from a manipulated election? That would be worth a coup, but not just because you disagree with his politics; that's insanity. Look at what Trump believed and supported: we didn't start a civil war because Trump wouldn't sign a treaty, and he was accused of being a Russian puppet too. There is just more to this story than you are letting on.

      • mcphage 2 years ago

        > what is different now?

        Um. Is this a serious question?

        • RobertRoberts 2 years ago

          Yes, I didn't know there was an invasion when asked about Ukraine, but I knew about the past history. (some at least)

          • mcphage 2 years ago

            So you were asked something about a major current event, you're surprised that they don't know about what happened 10 years ago, and they're surprised you don't know about what's happening now. Blank stares all around, I guess.

            • RobertRoberts 2 years ago

              Yes, it was the morning after the Russian invasion. I just wasn't on the internet. Before that moment, Russia was just posturing on the border.

              This is my point: it's easy to just not know something at any point in time, let alone things you will only know if you have varied news sources, even if they're many years old.

              How many of your friends, family and acquaintances knew about the civil war that went on in Ukraine for nearly the past decade?

              • synu 2 years ago

                Everyone I know knew, but we live in Europe. Your mileage may vary depending on where you live.

                • RobertRoberts 2 years ago

                  I live in the US, and so far the only people I've found who knew are the very few news skeptics who follow international news sources online. And even then, we had to share a lot with each other.

                  But I had watched documentaries about Ukraine years ago, and since most people watch only Fox News, CNN, MSNBC, etc., they knew nothing at all.

                  So when Russia invaded, most people were shocked, whereas I, and those few others who were informed, were not as surprised.

                  • synu 2 years ago

                    I don’t think that being aware of the history of the conflict had anything to do with being surprised or not by the invasion. I have family in both Russia and Ukraine and nobody expected it.

                    Even the Russian soldiers who found themselves being sent into Ukraine were surprised.

                    • RobertRoberts 2 years ago

                      But did they know there was war in Ukraine for years? I suspect this is common knowledge in Europe, but it's extremely rare knowledge here.

  • mrandish 2 years ago

    I guess I'm an outlier on this topic because I don't think deep fakes will continue to have significant societal impact after an initial disruptive period of perhaps a few years. During that time DFs will become so common that any impactful example will immediately be suspected and discounted by most people. Yes, there will always be those who choose to believe the unbelievable but the lunatic fringe already believe outlandish claims without the need for deep fakes, thus there will be little net change.

  • munificent 2 years ago

    I think we'll solve it the same way we solved similar transitions when text and image faking became easy: provenance.

    For many years now, most people have understood that you can't take text and images as truth, because they can easily be simulated or modified. In other words, the media itself is not self-verifying. Instead, we rely on knowing where a piece of media came from, and we associate truth value with those institutions. (Of course, people disagree on which institutions to trust, but that's a separate issue.)

    • kbenson 2 years ago

      In other words, the same way we dealt with information before photographs and videos were invented. The answer to how we deal with the fact that images and videos can't be trusted is to look at what we did before we relied on them. If we're smart about it, we'll try to pick out the good things that worked and build in safeguards (as much as possible) against the things that didn't, but I won't hold my breath. We're already heading back towards some of the more problematic behavior, such as popularity or celebrity equating to trust.

  • pfisherman 2 years ago

    I think that people will adapt. Humans are very clever and have been evolutionarily successful because of the ability to adapt to a wide range of environments.

    Think about how devastatingly effective print, radio, and television propaganda were at the time each medium was widely adopted, compared to how effective they are now. They still work, but for the most part people have caught on to the game and adjusted their cognitive filters.

    My guess is that we will see a bifurcation of society into those who are able to successfully weed out bullshit from those who can’t. The people who are able to process information and build better internal models of the world will be more successful, and eventually people will start imitating what they do.

    Edit: I do think that these tools, coupled with widespread surveillance and persuasion tech (aka ad networks), have set up conditions where the few can rule over the many in a way that was not possible before. I do think some of the decentralized / autonomous organization tech - scaling bottom-up decision making to make it more transparent and efficient - is a possible counter. Imo, this struggle between technologically mediated centralization and top-down enforcement and control vs decentralization and bottom-up consensus will be the defining ideological struggle of our time.

    • azinman2 2 years ago

      I think you’re underestimating the power of cognitive biases, and of violence from those who only believe the information they want to hear.

      • pfisherman 2 years ago

        I think there are quite a few recent historical examples of this - WW2, US invasion of Iraq, Russian Invasion of Ukraine, etc.

        However there is a price to pay for operating on beliefs that do not align with reality. It’s why almost all organizations engage in some form of intelligence gathering. Those who are at an information disadvantage get weeded out.

        Philip K Dick has a great quote “Reality is that which, when you stop believing in it, doesn't go away.”

  • decafmomma 2 years ago

    Here's the thing though: in theory, we should already be skeptical of video and audio evidence on its own.

    Most of our institutions, in theory, do not rely on a single medium for assessing the veracity of a claim. The strength of claims, and our ability to separate noise from truth, comes down to corroboration. How many other sources support a claim and sit consistently with it? That's, in theory, how law enforcement, intelligence, and reporting should work.

    In practice, there are massive gaps here, and the path from people's attention to their decisions is shorter than ever.

    I don't think it's impossible for us to handle deepfakes, but I sense the same fear you have. I think ultimately it is more about our attention spans, and the "urgency" we feel to act quickly, that will be our downfall, rather than the ability to produce fakes more easily.

    You don't in fact need a convincing fake to create a powerful conspiracy theory. Honestly, you only need an emotional provocation, maybe even some green text on an anonymous web forum.

  • beisner 2 years ago

    So there are technical ways to certify that media is genuine, if you assume trust at least somewhere in the process: basically, cryptographic signing.

    For instance, a camera sensor could be designed such that every image that is captured on the sensor gets signed by the sensor at the hardware level, with a certificate that is embedded by the manufacturer. Then any video released could be verified against a certificate provided by the manufacturer. Of course, you have to trust the manufacturer, but that’s an easier pill to swallow (and better supported by our legal framework) than having to try and authenticate each video you watch independently.
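
    As a toy sketch of that chain of trust (all of this is hypothetical; a real design would use a hardware secure element and X.509 certificates): the manufacturer signs each sensor's public key, and the sensor signs a hash of every frame it captures.

        import hashlib
        from cryptography.hazmat.primitives.asymmetric import ed25519

        # Manufacturer root key; its public half ships with verifiers.
        mfg = ed25519.Ed25519PrivateKey.generate()

        # Each sensor gets its own key pair; the manufacturer signs the
        # sensor's public key, acting as a bare-bones certificate.
        sensor = ed25519.Ed25519PrivateKey.generate()
        sensor_pub = sensor.public_key().public_bytes_raw()
        sensor_cert = mfg.sign(sensor_pub)

        # At capture time, the sensor signs a digest of the raw frame.
        frame = b"...raw pixel data..."
        frame_sig = sensor.sign(hashlib.sha256(frame).digest())

        # A verifier checks the certificate, then the frame signature.
        mfg.public_key().verify(sensor_cert, sensor_pub)
        ed25519.Ed25519PublicKey.from_public_bytes(sensor_pub).verify(
            frame_sig, hashlib.sha256(frame).digest())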

    There are issues that can arise (what if I put a screen in front of a real camera?? what if the CIA compromises the supply chain???), but at the end of the day it makes attacks much more challenging than just running some deepfake software. So there are things that can be done; we're not destined for a post-truth world where we can't trust any media we see.

    • pahn 2 years ago

      I'll probably get downvoted into oblivion for mentioning blockchain tech, but this might at least help, maybe not in its current form. I did not follow the project, but there do exist some concepts in this direction, e.g. this: https://techcrunch.com/2021/01/12/numbers-protocols-blockcha...

      • beisner 2 years ago

        The idea of having a public record that attests to when an event happened is interesting, although I'm not sure it has to be a blockchain to be useful.

    • bogwog 2 years ago

      That's helpful for the legal system, but it's not going to help for attacks designed to cause mass panic/unrest/revolts. If another US president wants to attempt a coup, it'll be much more successful if they're competent and determined enough to produce deepfakes that support their narrative.

      The only way to prevent stuff like that is to educate the public and teach people how important it is to be skeptical of anything they see on the internet. Even then, human emotions are a hell of a drug so idk how much it'd help.

      • notahacker 2 years ago

        US Presidents have long had the ability to make false claims based on video of something completely different, create material using actors and/or compromised communications, stage events, or use testimony that information has been obtained via secret channels from appointees heading up the agencies whose job it is to obtain information via secret channels.

        If anything, recent events suggest the opposite: deepfakes can't be that much of a game changer when an election candidate doesn't even have to try to manufacture evidence to get half the people who voted for him to believe his most outlandish claims.

    • notfed 2 years ago

      > a camera sensor could be designed such that every image that is captured on the sensor gets signed by the sensor at the hardware level

      A hardware-based private key like this will inevitably be leaked.

      • beisner 2 years ago

        Each sensor could have a unique cert.

  • mrshadowgoose 2 years ago

    The concerns in your second paragraph can be mostly mitigated using a combination of trusted timestamping, PKI, cryptographically chained logs, and trusted hardware. Recordings from normal hardware will increasingly approach complete untrustworthiness as time goes on.
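
    For the chained-logs part, the core idea fits in a few lines: each entry commits to the hash of the previous one, so silently rewriting history means recomputing every later entry. A minimal sketch (field names and events are my own, purely illustrative):

        import hashlib, json, time

        def append(log, event):
            prev = log[-1]["hash"] if log else "0" * 64
            entry = {"time": time.time(), "event": event, "prev": prev}
            entry["hash"] = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()).hexdigest()
            log.append(entry)

        def verify(log):
            prev = "0" * 64
            for entry in log:
                body = {k: v for k, v in entry.items() if k != "hash"}
                digest = hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest()
                assert entry["prev"] == prev and entry["hash"] == digest
                prev = entry["hash"]

        log = []
        append(log, "camera 42 captured clip abc")
        append(log, "clip abc anchored with a timestamping authority")
        verify(log)  # raises AssertionError if any entry was altered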

    The concerns raised in the first paragraph, however... the next few decades are going to be a wild ride. Hopefully humanity eventually reaches an AI-supported utopian state where people can wrap themselves in their own realities without it meaningfully affecting anyone else. Perception of reality is already highly subjective, and most of the fundamental issues are due to resource scarcity/inequality. Most other issues evaporate once that's solved.

    • FastMonkey 2 years ago

      I think you can technically mitigate some concerns for the people who understand that, but practically it's going to be a very different story. People will believe who/what they believe, and an expert opinion on trustworthiness is unlikely to change that.

      I think being in the real world and meeting real people is the only way to create a real, functional society. Allowing people to drift away into their own AI supported worlds would eventually make cooperation very difficult. I think it would just accelerate the tendency we've seen with social media, creating ever more extreme positions and ideologies.

  • hutzlibu 2 years ago

    "I don't think we'll have the ability to handle this kind of tech responsibly."

    I do not think so either, but so far we have survived 75+ years with nukes around.

    But you can argue it was mainly by chance. Technological progress is awesome, but our societies cannot keep up yet. They will have to undergo a heavy transition anyway, or perish. Or rather, we are already in the process of transition. 20 years ago most people did not really know what the internet was; now most are always online. Data mining, personalised algorithms for ad exposure, ...

    So deepfakes are a concern, but not my biggest. Rather the contrary: when people see how easy it is to fake things, they might start developing a healthy scepticism towards illuminating YouTube videos.

  • mazlix 2 years ago

    I don't know exactly what you mean by "post-truth", but for the vast majority of human history it was rare to have records of what happened (photography, video, audio recordings) that were cheap and easy to create yet hard to fake.

    In terms of a single photograph, though, that's been very easy to fake in compelling ways for at least the past 30 years; look at Loch Ness and Bigfoot photos.

    Video's harder, but still, pretty much all of this is not horribly difficult to fake. We've long been able to edit things out and mix audio to make a compelling video of a cartel leader meeting with the FBI; no need for deepfakes.

  • kleer001 2 years ago

    Thankfully, things that are real are very cheap to follow up on. Questioning some security footage? No worries, there's 100+ hours of it to cross-check against, and three other cameras too.

    IMHO, consilience and parsimony will save us.

  • mc32 2 years ago

    I think a bigger question is whether reputable sources --those people trust for whatever reason-- would use this technology to prop up ideas and/or to create narratives.

    I don't think it's far-fetched. We've already seen where videos are misattributed[1] to stoke fear or to promote narratives --by widely trusted news sources.

    [1] This was foreshadowed with "Wag the Dog" but happens often enough in the media today that I don't think use of "deepfake" technology is beyond the pale for any of them.

    • ketralnis 2 years ago

      It almost doesn't matter, now that people have fractured on which sources they consider reputable. Trump called a CNN reporter "fake news", and presumably his followers think of CNN the same way I think of Fox. I absolutely think that Fox would use this technology to lie, and I'm sure Fox fans think that "the liberal media" would. So people are going to think that reporting is fake whether or not it is.

      • mc32 2 years ago

        Wasn't 'The Ghost of Kiev' almost entirely fake, while the news carried it as real?

        • ketralnis 2 years ago

          I don't know. How would you "prove" it? Google it, and look for what other people that agree with you think?

          • mc32 2 years ago

            Well... https://www.washingtonpost.com/world/2022/05/01/ghost-of-kyi... admitted by Ukrainians themselves...

            • ketralnis 2 years ago

              According to an article you found online. That's exactly my point, if we can't trust news sources then we can't really know anything. Because of my aforementioned distrust of Fox News, if it were written by them I'd dismiss that article out of hand placing no truth value on it either way. I'd expect somebody that distrusts CNN to do the same if it were written by them.

              "It's confirmed!", "They admitted it!", and other unprovable internet turns of phrase in comments sections are really just "I believe it because somebody I trust said so" and that only has weight as long as trust has weight.

              • mc32 2 years ago

                If the accused admit to something, it's more believable than the alternative (that they were forced into a false admission).

                So in this case, if the Ukrainian government admits to making things up, then I would think it's believable that they made something up for the sake of propaganda. We can also check more independent sources --read Japanese news, or Indian news sources, etc.

                • ketralnis 2 years ago

                  We don't know that the accused admitted to anything. We know that the Washington Post says that they did. The world becomes very solipsistic when you lose trust in reporting.

                  • l33t2328 2 years ago

                    Well…yeah. If you don’t trust anything you can’t know very much.

  • berdon 2 years ago

    Reminds me of Stephenson's "Fall; or, Dodge in Hell", where all digital media is signed by its anonymous author and public keys become synonymous with identities. An entire industry of media curation existed in the book to handle bucketing media as spam, "fake", "true", interesting, etc.

  • WanderPanda 2 years ago

    I think society will adapt within 1 generation. The tech is already there (signing messages with asymmetric encryption)

    • toss1 2 years ago

      And how many people use or will use the tech? How many of those will use it competently? And how many of those are competent to validate that their checking technology has not been compromised (e.g., by hacked or bogus authenticity checkers and/or certs, much like hacked or bogus crypto-wallets)?

      • bee_rider 2 years ago

        End-to-end encryption was a giant pain in the butt that required dinking around with PGP or whatever, but now it is a pretty mainstream feature for chat apps (once they figured out how to monetize despite it). Tech takes a while to trickle down to mainstream applications, but it'll get there if the problem becomes well known enough.

        • toss1 2 years ago

          I agree that e2e encryption is becoming more widespread and "user friendly".

          However, the friendliness seems inversely proportional to users' ability to detect that their tool is failing/corrupted/hacked/etc. So while we might have more widespread tools, we also have a more widespread risk of a false sense of security.

  • kache_ 2 years ago

    The one unwavering thing about technology is that it doesn't stop advancing, and we can't use it responsibly.

    The good news is that we've been going through rapid, rapid tech advancements the past 50 years and we're still here.

    • mckirk 2 years ago

      The thing I don't like about these 'well, people have been complaining about this forever' arguments is that it's entirely possible to have a) people pointing at an issue for a long time and b) still have that issue get progressively, objectively worse over time.

      There's that example of people pointing out smartphones might be bad for children, then someone counters with 'well thirty years ago people complained about children reading too much instead of playing outside', with the implication being: adults of all ages will find some fault with newer generations, and not to worry so much.

      But just because it is true that adults will probably always worry about 'new, evil things' corrupting the youth, this does not mean that the 'new, evil things' aren't getting _objectively more dangerous_ over time. Today adults would be happy if children still had the attention span and motivation necessary to read a book. They'd be happy if they themselves still had it, actually.

      Graphing the progress of a sinking ship and pointing out that the downwards gradient has been stable for a while now and we should therefore be okay is generally not a useful extrapolation, I would say.

      • cupofpython 2 years ago

        >Graphing the progress of a sinking ship and pointing out that the downwards gradient has been stable for a while now and we should therefore be okay is generally not a useful extrapolation,

        I like this analogy. I've had similar thoughts for a while too. Granted, I've also seen some research that society has been objectively getting better in a lot of areas people think are getting worse (like violence, specifically police abuse) compared to the past. Theoretically this is because we have a lot more information now than before, so smaller occurrences generate a larger impression.

        That said, I still very much agree with your point, and that it is very applicable to specific individualized issues. Saying that people have been concerned for a while and nothing bad has happened yet is consistent with the situation where nothing bad will happen, AND the situation where it was bad then and is worse now, AND the situation where we are approaching a tipping point / threshold where the bad will start.

    • sitkack 2 years ago

      > rapid tech advancements the past 50 years and we're still here

      This is a tautology. At some point the music stops and you aren't here to make the argument that we are still here.

      • cupofpython 2 years ago

        Not entirely tautological. The probability that something bad happens tomorrow if we do X today for the first time is very different from the probability that something bad happens tomorrow if we do X today GIVEN we've been doing X every day for 50 years.

        It is still insufficient to say nothing bad will happen, of course
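
        One classical way to put a number on that intuition (my framing, not anything from the thread) is Laplace's rule of succession, which estimates the chance of a first failure after a long run of successes:

            # Laplace's rule of succession: after n trials with zero
            # failures, estimate P(failure on next trial) = 1/(n + 2).
            n = 50 * 365          # ~50 years of daily "doing X"
            print(1 / (n + 2))    # ~5.5e-05: small, but never zero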

        • sitkack 2 years ago

          That's not the argument. The one you are making is the same one people make when they conflate weather and climate.

          • cupofpython 2 years ago

            Conditional probability applies to many things

    • andruby 2 years ago

      > The one unwavering thing about technology is that it doesn't stop advancing, and we can't use it responsibly.

      While I think that is true in general, I am optimistic that we've seen at least one technology where we were able to constrain ourselves from self-destruction: nuclear weapons.

      Of course, nuclear weapons tech is not in reach of individuals or corporations, which means there are only a handful of players in this game-theory setting.

  • jkaptur 2 years ago

    Wasn't this bridge crossed when Photoshop became popular?

    • bberrry 2 years ago

      It takes some level of skill to produce a convincing Photoshopped image.

      • BoorishBears 2 years ago

        Does that matter when the stakes are as high as these arguments always claim?

        If we're doomsaying about a "post-truth society", we're talking about high-stakes society-scaled skullduggery.

        If you're aiming for that level of disruption, easy deepfakes vs hard video/photo editing is not an issue, getting people to trust your made up chain of custody is.

        -

        This is like when people worry about general AI becoming self-aware and enslaving mankind... the "boring version" of the danger is already happening: ML models being trained on biased data are getting embedded in products that are core to our society (policing, credit ratings, etc.), and that's really dangerous.

        Likewise, people worry about being able to easily make fake news, when the real danger is people not being equipped to evaluate the trustworthiness of a source... and that's already happening.

        You don't even need a deepfake, you tweet that so and so said X, write a fake article saying they said X, amplify it all with some bots, and suddenly millions of people believe you.

  • tomgraham 2 years ago

    The good news is that public awareness of potentially manipulated media is on the rise. Alongside good laws and good detection tech, public awareness and media literacy are important. At Metaphysic.ai, we created the @DeepTomCruise account on TikTok to raise awareness.

    We also created www.Everyany.one to help regular people claim their hyperreal ID and protect their biometric face and voice data. We think that the metaverse of the future will feature the hyperreal likenesses of regular people - so we all have to work hard today to empower people to be in control of their identity.

    • SantalBlush 2 years ago

      Creating yet another product to monetize is not a solution, it's just more of the problem. It incentivizes a perpetual arms race between fabrication and verification at the cost of everyday users. No thanks.

      • tomgraham 2 years ago

        It is free. Protecting individuals' rights is more important than making money!

        • random-human 2 years ago

          Free, but collecting and storing people's biometric data on your servers (per the FAQ). How do I know it's not a Clearview AI clone, just with easier data gathering? And what's that saying about what the real product is if something is free?

  • efrbwrh 2 years ago

    I think the uncomfortable truth is that we're already there, and it just doesn't matter, because everything works on trust anyway. Sure, quality faked media has a ways to go before you're watching Avengers-quality deepfake porn, but for groups that care to a professional degree (intelligence agencies, militaries, digital media companies, etc.) the quality is already good enough for intelligently planted media. The real limiter is access to trusted distribution channels for your faked media. A homunculus media consumer who gets their information from AP News/Reuters/Fox News/CNN/LA Times/New York Times/Verified Twitter Professionals/etc. is probably not at all in a position to judge the veracity of the material presented. If they were in a position to make such judgements, they wouldn't need to care so much about which prestigious media outlets they favor. But they do care, precisely because they know that they're gullible, and so they want the media outlet that will give them the real truth (which usually means either the truth they like, or the truth they don't like but from a brand that conforms to their ideals of what a news media brand should be). If CNN and Fox News both ran the same deepfaked video of Joe Biden shooting his veep, with supplementary text stories and such, then I wager a good portion of the American populace would believe it had happened and be seriously concerned.

  • nathias 2 years ago

    you could never trust it, now you'll know you can't trust it

  • Zababa 2 years ago

    > What will life be like when you can't trust any video or interview you see because it could be completely fake?

    I don't understand your point. This has been the case for a while. People were editing photos to remove people in the time of Stalin. And even before that, you can lie, write false records, destroy them.

  • gadders 2 years ago

    >> There are huge swathes of people who live in their own make-believe world, like those believing wholeheartedly that the 2020 election was stolen

    There are also those that wholeheartedly believe Trump colluded with Russia to win the 2016 election, or that the Steele dossier was factual.

    • dTal 2 years ago

      These are not equivalent. Russia did interfere, there were links between Trump and Russia, therefore there is circumstantial evidence that collusion occurred, sufficient to trigger a widely publicized investigation. The allegations of election fraud in 2020 however are 100% alternate universe yarns spun for political gain with no basis in fact whatsoever.

      • gadders 2 years ago

        Russia may have interfered, but I would imagine it does so every year, to sow what it perceives to be maximum discord. There are no links between Trump and Russia, and the Mueller and subsequent investigations proved that. Recent trials also showed that the "server links" to Russia from Trump Tower originated with the Hillary campaign.

        In the 2020 election, plenty of rules were changed to favour postal votes and vote harvesting. Was it sufficient to change the results? No idea. Did it definitely occur? Yes.

  • brightball 2 years ago

    Honestly, when people have gone so far as to redefine common words it makes it really difficult to have conversations with people.

    1. "Hate" going from one of the most visceral and obsessive emotions that exist to being tossed around at everything

    2. The advent of your truth instead of the truth

    3. Constant injections of "x" into every existing word apparently?

    "Womxn in Mathematics at Berkeley" - https://wim.berkeley.edu/

    This is all before we get people to understand that having a discussion where some of their points might not be as strong as they think they are... somehow means you're attacking them.

    The world that we have created for ourselves over the last 20 years is weird.

wazoox 2 years ago

There are more dangerous AIs than deepfakes. BlackRock's 10 trillion dollars of investments are driven by an AI, Aladdin. It has also been sold to some other investors, and it controls about $21 trillion globally. It basically has the power to drive markets worldwide. It's a systemic problem nobody talks about...

  • astrange 2 years ago

    BlackRock sells passive index funds. There's no room for an AI to make decisions there, so it probably doesn't do anything interesting.

  • nix0n 2 years ago

    That's not an AI problem, it's a concentration of wealth problem.

    Giving that power to a person or group of people would be almost as bad.

joeld42 2 years ago

Not sure about NeRF's impact on deepfakes specifically, but they are looking like a seismic shift in the graphics world. Even without the "neural" part, as a scene representation they are already having a huge impact, and will have implications everywhere. It feels almost like a new primitive is being discovered.

Maybe I'm just getting caught up in all the hype, but I can't remember the last time I saw a topic with this much momentum.

  • FrostKiwi 2 years ago

    > It feels almost like a new primitive is being discovered.

    That's a really nice way of putting it. The impact on the field of computer graphics is tough to predict, but ohh my am I excited.

Workaccount2 2 years ago

I know there are a lot of groups working on how to prevent AI disruptions to society, or how to mitigate their impact, but are there any groups working on how to adapt society to a full blown unchained AI world?

Like throw out all the safeguards (which seems inevitable) and how does society best operate in a world where no media can be trusted as authentic? Or where "authentic" is cut from the same cloth as "fake"? Is anyone working on this?

  • Agamus 2 years ago

    One thing we should be doing is supporting critical thinking at the high school and university level. Unfortunately, it seems we have been dedicated to the opposite of this for about 50 years, at least in the US.

    • Der_Einzige 2 years ago

      Critical thinking is the most overrated skill ever.

      "Critical theorists" are the people who fetishize "critical thinking" and all it got them was to embrace cultural Marxism.

      Constructive thinking is far better than learning how to shit on people, as the skill of critique teaches us...

      • gknoy 2 years ago

        At the risk of falling for a joke, I'm not sure "critical thinking" means what you think it means. It just means thinking objectively about things before making judgments; it has nothing to do with criticizing people. The things one criticizes are one's own beliefs and the reasons for holding them.

        What do I believe? Why do I believe that? Why do I feel that evidence supports that belief, but not this one? For example, I can explain in a fair bit of detail why I believe that the Apollo landing was not faked. I wouldn't normally bother to explain those reasons, but all of them are based on beliefs and evidence that I've read about, and most of those beliefs are subject to reversal should counter-evidence surface.

      • whimsicalism 2 years ago

        This is reasoning by word chaining.

      • Agamus 2 years ago

        I think of critical thinking as the art of being critical toward oneself when one is thinking.

        In other words, when I read something and hear myself think, "oh yeah, that sounds right", there is another part of my mind that thinks, "maybe not".

        Critical thinking is precisely what could have spared us from all of that 'cultural marxism' you mentioned, or at least, to do it in a way that is... constructive.

  • cameronh90 2 years ago

    I suspect we'll need to return to the idea of getting our news from trusted sources, rather than being able to rely on videos on social media being trustworthy.

    Technically, we could try and build a trusted computing-like system to ensure trust from the sensor all the way to a signed video file output, but keeping that perfectly secure is likely to be virtually impossible except in narrow situations, such as CCTV installations. I believe Apple may be attempting to do things like this with how Face ID is implemented on iPhone, but I suspect we'll always find ways to trick any such device.

    • wongarsu 2 years ago

      80% of the problem could be solved with a reliable signature scheme that allows some remixing of video content. So if CNN publishes a video, signed with their key so it's verifiably CNN, we need the ability to take a 20-second bit of it and still have a valid signature attached that verifies the source is CNN (without trusting the editor). Then you can share clips, remix them, etc., and have integration in social media that attests the source.
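
      A minimal sketch of what that could look like, assuming fixed 5-second chunks and an Ed25519 key (the function names and chunk format here are hypothetical; a real scheme would also need to bind stream IDs and timestamps):

          # Sketch: the publisher signs every fixed-length chunk individually,
          # binding each signature to the chunk's position in the stream, so
          # any contiguous excerpt still carries verifiable signatures.
          import hashlib
          from cryptography.hazmat.primitives.asymmetric import ed25519

          def sign_chunks(chunks, private_key):
              """Return (chunk, signature) pairs; each verifies on its own."""
              signed = []
              for index, chunk in enumerate(chunks):
                  digest = hashlib.sha256(index.to_bytes(8, "big") + chunk).digest()
                  signed.append((chunk, private_key.sign(digest)))
              return signed

          def verify_excerpt(excerpt, public_key, start_index):
              """Verify a contiguous run of (chunk, signature) pairs."""
              for offset, (chunk, signature) in enumerate(excerpt):
                  index = start_index + offset
                  digest = hashlib.sha256(index.to_bytes(8, "big") + chunk).digest()
                  public_key.verify(signature, digest)  # raises InvalidSignature if tampered

          # A broadcaster signs 60 seconds of video; a platform verifies a 20-second clip.
          key = ed25519.Ed25519PrivateKey.generate()
          chunks = [b"chunk-%d" % i for i in range(12)]  # stand-ins for encoded 5 s chunks
          signed = sign_chunks(chunks, key)
          verify_excerpt(signed[3:7], key.public_key(), start_index=3)

      Signing the position along with the content prevents reordering, while still letting anyone share a clip plus its start offset and check it against the publisher's public key.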

      • intrasight 2 years ago

        My plan to solve this "20 second bit of it" is that it's done at the analog hole. Whatever is painting those pixels, a smart TV for instance, will be coordinating with cloud services to fingerprint at a relatively high temporal resolution - maybe 5 seconds. The video itself is the signature. But we will need either trusted analog hole vendors or some trusted non-profit organization - or likely both. I think that "viewing" will be delayed by perhaps 30 seconds to allow for that signature analysis. These smart TVs will overlay a scorecard for all displayed content, and owners will be able to set device scorecard thresholds such that low-scoring content will be fuzzed out.

        • dTal 2 years ago

          I sincerely hope this dystopian vision of the future is satire, but it's already a worrying sign of the times that I'm not sure.

      • _tom_ 2 years ago

        Once you remix it, it's no longer reliable. So, you don't want it signed if it's modified.

  • endtime 2 years ago

    I think most of those people believe that humans will no longer exist in a "full blown unchained AI world".

  • strohwueste 2 years ago

    What about nft-based camera recording?

    • irjustin 2 years ago

      This has been discussed many times and it doesn't work.

      Simple answer is I can just record a deep fake and get it cryptographically signed.

      • andruby 2 years ago

        If you trust a person (or source) and they have a private key that they can properly secure, they could always sign their material with that key. That would prove that the source provided that material.

        A blockchain could be a way to store and publish that signature & hash.

        It can't say "this is real", it can only say "that signature belongs to source X".
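
        As a minimal sketch of that flow (assuming an Ed25519 key; the names are illustrative): hash the file, sign the hash, publish both. Note that this proves provenance, not authenticity:

            import hashlib
            from cryptography.hazmat.primitives.asymmetric import ed25519

            key = ed25519.Ed25519PrivateKey.generate()  # source X's key pair
            video = b"...encoded video bytes..."        # could be genuine or a deepfake

            digest = hashlib.sha256(video).digest()     # the hash to publish
            signature = key.sign(digest)                # the signature to publish

            # Anyone holding source X's public key can check the pair; a signed
            # deepfake verifies just as well, so this only says "X published it".
            key.public_key().verify(signature, digest)  # raises InvalidSignature on mismatch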

        • blamestross 2 years ago

          > A blockchain could be a way to store and publish that signature & hash.

          Yes, but it would be a bad one. We have multiple key distribution mechanisms that are better for this use case.

          • intrasight 2 years ago

            I disagree. The key alone is neither sufficient nor secure. We will need crowdsourced validity data as well. We need a zero-trust model - and I too believe that blockchains will play a role.

            • notfed 2 years ago

              If a video is already cryptographically signed, then you can safely distribute the signature over an untrusted channel.

              Adding a blockchain into the mix is superfluous and destroys scalability.

              • intrasight 2 years ago

                We don't watch signature keys - we watch videos.

                The TV will have to match every short segment - perhaps 5 seconds of video - against a blockchain which scores the validity of that segment - and of course trace it back to its original source. Signing the whole video is necessary but not sufficient.

                But yes, this is going to be resource-intensive.

                • blamestross 2 years ago

                  > against a blockchain which scores the validity of that segment

                  Why not just allow a cert with that information to be delivered alongside the video? Where would the "score" come from?

                  • intrasight 2 years ago

                    The score is AI- and/or crowd-sourced, and comes from a special blockchain.

                    • blamestross 2 years ago

                      Why would either of those things need a blockchain?

                      Crowd-sourced is already a "network of trust".

                      An AI-based score would have to be produced by a centralized provider, so a network of trust is the reality for that too.

                      The only way blockchains would provide a benefit would be as a distributed discovery mechanism for "reviews" of video chunks, and an open ecosystem for that (a DHT or trackers) would work better.

                      Blockchains only ever had a reasonable use case under the assumption of functional capitalism (and we don't have one of those). The reality is that they can't be sustainable without capture, and the market incentives only increase the incentive to capture them.

                      DHTs and networks of trust only have the value of the service they provide, and while that is less exciting for scamming people, they have survived and been high-functioning for decades.

            • blamestross 2 years ago

              Zero-trust models don't exist, and the laws of physics (probably) don't provide for them. (Materialism is a real problem in physics nowadays.)

kobalsky 2 years ago

Why do websites feel the need to hijack the browser's scrolling logic?

This is very annoying to browse in Chrome, but it works well in Firefox.

  • arky527 2 years ago

    Very much agreed. It just results in a terrible UX when 98% of other websites have a standard scrolling mechanism

  • sigspec 2 years ago

    Agree. It's a jolt to scrolling expectations

getcrunk 2 years ago

I just started playing Cyberpunk 2077. Spoilers:

The idea of the "Blackwall" to keep out bad AIs comes to mind. Not arguing for it, just acknowledging that maybe one day we'll all have to live in walled gardens to stay safe from rogue AIs, or rather from rogue actors wielding powerful AIs.

  • henriquecm8 2 years ago

    One thing I've always wondered about that: they explained where those AIs are running, but how do they still have autonomy? Are they producing their own power? What about when they need to replace hardware?

    I'm not saying it's impossible, but I would like to see that part explored, even if it's in the tie-in comics.

natly 2 years ago

I was initially annoyed by this title, but now I'm gonna switch my perspective to being happy that ideas like this are floating around, since it acts as a really cheap signal for whether someone knows what they're talking about when it comes to ML.

Davonbon 2 years ago

Each day technology becomes scarier. Even if technology exists to detect deepfakes, and there are countless ways to tell real images apart from fabricated ones, it doesn't matter, because most people don't know about them, and that's not their fault; not everyone has to get that deep into tech. Using technology to lie to people has always been a thing. Think back to Photoshop: you can create fake pictures there too, and it doesn't matter whether they're a hundred percent realistic, many people will still believe they're real. Videos of ARMA 3 being passed off as war footage have been a thing for a long time, and it happened again in the Ukraine war, so none of this is new. The scary thing is how easily accessible this technology is becoming, and I expect a lot of new laws on its use will have to be created because of it.

  • FrostKiwi 2 years ago

    > videos of ARMA 3 being used as war footage

    I am genuinely disappointed in the public for falling for that.

echelon 2 years ago

What's the difference between NeRF and a classical photogrammetry point-cloud workflow? The representations and outputs seem identical.

Why would you prefer NeRF to photogrammetry? Or vice versa?

  • flor1s 2 years ago

    Neural Radiance Fields are a technique from the neural rendering research field, while photogrammetry is a research field in its own right. But these are largely turf wars; in practice there is a lot of overlap between the two fields.

    For example, most NeRF implementations recommend using COLMAP (traditionally a photogrammetry tool) to obtain the camera positions/rotations that accompany the images. This structure-from-motion step is shared between NeRF (except for a few research works that also optimize camera positions/rotations through a neural network) and photogrammetry.

    After that step, in NeRF you train a neural renderer, while in photogrammetry you would run a dense multi-view stereo/geometry step that uses more traditional optimization algorithms.

    The expected output of the two techniques is slightly different. NeRF produces renderings and can optionally export a mesh (using the marching cubes algorithm). Photogrammetry produces meshes, and in the process might render the scene for editing purposes.
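
    For intuition, the neural-renderer step boils down to a simple volume-rendering quadrature: the trained MLP maps (position, view direction) to (density, color), and those get alpha-composited along each camera ray. A minimal sketch with the MLP stubbed out by a toy field (all names are illustrative, not from any particular implementation):

        import numpy as np

        def render_ray(origin, direction, field, near=2.0, far=6.0, n_samples=64):
            """Composite a pixel color along one ray, NeRF-style."""
            t = np.linspace(near, far, n_samples)        # sample depths along the ray
            points = origin + t[:, None] * direction     # (n_samples, 3) positions
            sigma, rgb = field(points, direction)        # density and color per sample
            delta = np.append(np.diff(t), 1e10)          # spacing between samples
            alpha = 1.0 - np.exp(-sigma * delta)         # opacity contributed per sample
            # transmittance: chance the ray reaches sample i without being absorbed
            trans = np.cumprod(np.append(1.0, 1.0 - alpha[:-1]))
            weights = trans * alpha
            return (weights[:, None] * rgb).sum(axis=0)  # expected pixel color

        def toy_field(points, view_dir):
            """Stand-in for the trained MLP: a soft red sphere at the origin.
            (view_dir is unused here; a real NeRF conditions color on it.)"""
            dist = np.linalg.norm(points, axis=-1)
            sigma = np.where(dist < 1.0, 10.0, 0.0)      # dense inside the unit sphere
            rgb = np.tile([0.8, 0.2, 0.2], (len(points), 1))
            return sigma, rgb

        pixel = render_ray(np.array([0.0, 0.0, -4.0]), np.array([0.0, 0.0, 1.0]), toy_field)

    Because the view direction is an input to the field, NeRFs can capture view-dependent effects like speculars and reflections, which a plain textured mesh from photogrammetry struggles with.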

  • randyrand 2 years ago

    NeRFs can represent reflections, speculars, and refractions.

    They are also proving to be faster, more accurate, etc.

    The input data is the same, though NeRFs have a chance of requiring less of it.

jdthedisciple 2 years ago

The bad news: This can and obviously will be abused - be it by the secret services or hackers.

The good news, I suppose: As fast and scarily as the tech to fake things is evolving, so is presumably the tech that detects fakes.

  • _tom_ 2 years ago

    Technology to detect fakes necessarily lags a bit behind the technology to create them.

    In general, you need to have examples of a type of fake to detect it.

EGreg 2 years ago

Is this what the Matterport 2 app uses?

When you take many photos of a scene or indoor place and they're stitched together?

Can this be used for metaverses?

Also, why not synchronize videos of the same event and make an animated 3D movie from the 2D ones? It would be great as a “Disney ride metaverse”.

Who is doing that in our space?

nathias 2 years ago

Can't wait until deepfakes completely revolutionize people's relation to information.

1970-01-01 2 years ago

Neural Radiance Fields (NeRF)