70
linuxxx
7y

"secure" messaging apps which aren't open source.

Isn't it common sense that, when you can't inspect an app for backdoors, vulnerabilities, etc. because it's closed source, you technically can't be sure whether it's actually secure or not?

And no, I'm not going to trust an app dev on his/her blue fucking eyes on this one.

Comments
  • 6
You're right. But isn't that a Dutch expression xD.
  • 3
@Jifuna Yeah haha "iemand niet op zijn blauwe ogen vertrouwen" ("not trusting someone on their blue eyes", i.e. not just taking them at their word) :P
  • 2
@MrJimmy I only do it if it uses my personal data or can snoop on it.
  • 3
@MrJimmy well I won't look at the whole application, just at the critical parts I deem important. For example, in a password manager (which obviously has to be quite secure):

I'll check the encryption method, the login/unlock method, how the unlock key is created and stored, and whether the application makes any weird connections. See the sketch below for the kind of key handling I'd want to find.
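
    As a rough illustration (a minimal sketch, not any real app's code; the names are made up), this is roughly what sane unlock-key handling looks like: a real KDF with a per-vault random salt, instead of a bare hash of the master password:

        import hashlib
        import os

        def derive_unlock_key(master_password, salt=None):
            """Derive a vault unlock key from the master password via PBKDF2."""
            if salt is None:
                salt = os.urandom(16)  # fresh random salt per vault
            key = hashlib.pbkdf2_hmac(
                "sha256",
                master_password.encode("utf-8"),
                salt,
                600_000,  # high iteration count to slow down brute force
            )
            return key, salt  # persist the salt; never persist the password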
  • 1
    @MrJimmy I have no clue, I'd imagine some people would do it? But peer review can go a long way :)
  • 3
@MrJimmy they could obfuscate the code in such a way that you think it works like x but, behind a lot of random code, it actually performs y. In that case I won't know it because I don't look for such things. So now I'm getting a little more paranoid 😅 thanks...

But in all seriousness, if there are programs like that, then there must be comments from people who read the whole thing and warn about it (unless you use some new code that has no reputation and/or no credible developers behind it, unlike other well-known open source software). There's a sketch below of what such underhanded code can look like.
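
    To make that concrete, here's a made-up Python sketch of one classic flavor of underhanded code: it reads like a perfectly ordinary password check (the "x"), but it leaks information through timing (the "y"), because it returns at the first mismatching byte:

        import hmac

        def check_password(supplied, stored):
            # Reads like a plain comparison, but bailing out at the first
            # mismatch leaks how much of the guess was correct via timing.
            if len(supplied) != len(stored):
                return False
            for a, b in zip(supplied, stored):
                if a != b:
                    return False
            return True

        def check_password_safe(supplied, stored):
            # What a reviewer would want to see instead: a constant-time compare.
            return hmac.compare_digest(supplied, stored)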
  • 1
    I just realized how racist of an expression that actually is 😂
  • 0
Well you can still check what data you are sending (see the sketch below). And why the fuck do you trust DDG then? As far as I know there's no way to check whether they're storing data.
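
    A minimal sketch of that (assuming scapy is installed and you run it with root privileges): you can at least watch which hosts an app talks to, even though the payload itself is usually TLS-encrypted:

        from scapy.all import IP, TCP, sniff

        def log_destination(pkt):
            # Print where outbound TCP traffic is headed so unexpected
            # connections stand out; payloads are mostly TLS, so this
            # shows endpoints rather than content.
            if IP in pkt and TCP in pkt:
                print(f"{pkt[IP].dst}:{pkt[TCP].dport}")

        # Capture 50 TCP packets and report their destinations.
        sniff(filter="tcp", prn=log_destination, count=50)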
  • 1
@incognito well there is a working "feature" for this in Intel CPUs. There is an instruction prefix you can use to hide your code: most decompilers decode it literally, but real Intel CPUs simply ignore it. So it can be used to run different code in emulated environments, since emulators like VirtualBox in software emulation mode will interpret the prefix as written.
  • 4
    And then there are people who say "open code is dangerous everyone can see everything hurr durr".
  • 0
I was actually thinking about that earlier on the bus. It's exactly like a company saying: "you have to take our word and our word only that we're *insert statement here*, and no one can check whether it's true or not; it depends entirely on our team's knowledge"... Weird but true, I tried not to make it political 😗
  • 1
    It's best when the community is involved in the development of the application. That way you can be relatively certain that someone has looked at most parts of the code. ^^
  • 1
@MrJimmy first... Yes. There are always people who look at the source first. That said, as far as "secure" software goes... There are organizations that build a reputation on testing and verifying the security of various systems and software. Usually, if they say the security is legit, then it's probably damn legit.

Why open vs closed in this case? While these professionals test both, with open source they can crawl through the code with a fine-tooth comb to see if there are any underhanded things going on. They most likely can't with closed source.

In the end, though, as long as you do your due diligence to the best of your technical skills, or at least take the word of a group that's at the top of their game, you should be fine.
  • 1
    @ObsidianBlk But what makes you certain that the result of the audit wasn't bought as well? Especially if there are political interests involved...
  • 1
@theCalcaholic Again, it's a matter of reputation. Security-minded individuals are kinda the tech equivalent of survivalists. If an audit is "purchased", as you suggest, and that fact is revealed, no security expert worth their salt will ever trust that organization's audits again. I don't think the short-term money would be worth that risk to such an organization.

    Does that mean your suggestion is impossible? Not at all. Humans are stupid. That said, I don't believe it happens as often as would be feared.

    Just my opinion, of course.
  • 1
    Tox! Tox! Tox! :)
  • 0
@ObsidianBlk We can't be sure of that if you look at how often even the most reputable software, and even things like internet standards, have turned out to contain secret-service-funded backdoors.
If the NSA can inject broken crypto into widely used standards (I'm referring to Dual_EC_DRBG), there is no reason they shouldn't have the same level of influence over at least some 'reputable' security auditors.
  • 0
@theCalcaholic *slowly blinks* yeah... Ok. The NSA may pay off a 'reputable' auditor. The KGB, MI5, and Lord knows what other agencies with nefarious agendas may be doing the same. For every balance we attempt to impose, there's a check to counter it. Humans... To shreds to us all...

    Now, unless you are a 1337 haxor, or a G-man with top secret data... Well, taking the word of, say, the top three to five digital security auditor organizations should leave you in relatively safe hands. One of them may... MAY... Be a shill for the NSA, but probably not all of them.
  • 0
@ObsidianBlk I think you missed the point. In the example I gave, we all ended up using insecure, broken crypto because the NSA recommended it. Whether they or someone else use it against you doesn't matter. Also, you don't have to be actively targeted by secret services (at least the biggest ones) to be affected by their surveillance programs, if you use insecure software. Because if it doesn't take a lot of effort, they can just surveil *all* the traffic (something we know they do).

Third and last, this is not primarily about the consumer. The greatest harm is done when the actual crypto libraries are compromised. They need to be trusted by the companies that base their own products on them. Because these libraries are few and ubiquitous, they are also the first attack vector for anyone capable of compromising them.

That's why they *need* to be public to be trusted, and being public also enables transparent audits (otherwise you only get to see the resulting rating).
  • 0
    @ObsidianBlk For consumer products you could take the risk, but I'd recommend checking out alternatives first.
  • 1
@MrJimmy considering bugs like Heartbleed exist, I'm going to say no.

I love open source as much as everyone else, but I don't live in a fantasy world. The reality is that open source can provide a false sense of security.

Meanwhile 99% of devs be like:
"Oh, I'm sure some experts carefully reviewed OpenSSL, I'll just trust that it's OK! They surely wouldn't miss something as stupid as an extra goto statement."
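
    (Pedantic note: that extra-goto bug, "goto fail", was in Apple's SecureTransport rather than OpenSSL; Heartbleed was OpenSSL's. Same lesson either way.) Here's a rough Python analog of that class of bug, where one stray line silently turns the real check into dead code (the helper names are made up):

        def check_issuer(cert):
            # stand-in for a real issuer check
            return cert.get("issuer") == "TrustedCA"

        def check_signature(cert):
            # stand-in for a real signature check
            return cert.get("signature_valid", False)

        def verify_certificate(cert):
            if not check_issuer(cert):
                return False
            return True  # <-- stray early exit, the Python cousin of 'goto fail'
            # Everything below is unreachable, so the signature is never
            # checked and every cert claiming "TrustedCA" now "verifies".
            if not check_signature(cert):
                return False
            return True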
  • 1
    @starless You're a toxer as well?!
  • 0
@incognito What if the developer builds another application on top of the same database and extracts the user details?
  • 0
    @linuxxx I was for quite a while. My long distance S.O. and I used it to stay in touch. Did they ever get voice/video chat fixed?
  • 2
    @starless It's been working great for years for me haha!
  • 2
I would say you can never be sure whether something is secure, no matter if it's open or closed source.

You can verify open source code to a certain degree, but this does not necessarily improve its real overall security/vulnerability score, as some of the comments here already suggest.

But since security is not only about vulnerabilities, but also a process in itself, the fact that you can look into the code yourself is a plus for trustworthiness, if you are capable of checking it.

    I think security is not the best argument/category for open source. But of course there are many others!

Being sure, by the way, would require a full-fledged, unbiased, automated security audit by some genius open source algorithm, which would then also have to predict new attack scenarios. If that's even possible. At the least (ignoring zero-day attacks) it would require a distributed public security-flaw database through e.g. blockchain (wasn't there a hype about it recently?). Notice that the open source aspect is of even higher importance in this case, since it'd be a meta-security project.
  • 0
How about "trustworthy offline password managers that aren't open source"?
  • 1
@w4tsn I agree, but when something security- or privacy-related is closed source, any option of verification is gone entirely.