56 results tagged hacking
  • A penetration tester’s guide to sub-domain enumeration
    November 12, 2017 at 10:53:25 PM GMT+1 - permalink - https://blog.appsecco.com/a-penetration-testers-guide-to-sub-domain-enumeration-7d842d5570f6
    pentest sécu sécurité hacking
  • Mots de casse — David Larlet

    I sometimes tell myself it would be so easy for a web service to occasionally (and/or for a small percentage of its users) reject an authentication attempt in order to obtain an alternative password that would very likely work on another service. It's even worse: with these two passwords (or more…), I potentially cover 100% of their digital lives. These hypotheses would need to be verified.

    December 1, 2015 at 3:45:31 PM GMT+1 - permalink - https://larlet.fr/david/stream/2015/12/01/
    sécurité hacking
  • american fuzzy lop

    Awesome fuzzer.

    October 28, 2015 at 5:00:51 PM GMT+1 - permalink - http://lcamtuf.coredump.cx/afl/
    fuzzer sécurité hacking
  • http://lcamtuf.coredump.cx/signals.txt

    "Delivering Signals for Fun and Profit"

         Understanding, exploiting and preventing signal-handling 
                        related vulnerabilities.
    
               Michal Zalewski <lcamtuf@razor.bindview.com>
                 (C) Copyright 2001 BindView Corporation

    0) Introduction

    According to popular belief, writing signal handlers has little or nothing
    to do with secure programming, as long as the handler code itself looks good.
    At the same time, there have been discussions about which functions may be
    invoked from handlers, and which should never, ever be used there.
    Most Unix systems provide a standardized set of signal-safe library calls,
    and a few systems document the signal-safe calls extensively - that includes
    OpenBSD, Solaris, etc.:

    http://www.openbsd.org/cgi-bin/man.cgi?query=sigaction:

    "The following functions are either reentrant or not interruptible by sig-
    nals and are async-signal safe. Therefore applications may invoke them,
    without restriction, from signal-catching functions:

        _exit(2), access(2), alarm(3), cfgetispeed(3), cfgetospeed(3),
        cfsetispeed(3), cfsetospeed(3), chdir(2), chmod(2), chown(2),
        close(2), creat(2), dup(2), dup2(2), execle(2), execve(2),
        fcntl(2), fork(2), fpathconf(2), fstat(2), fsync(2), getegid(2),
        geteuid(2), getgid(2), getgroups(2), getpgrp(2), getpid(2),
        getppid(2), getuid(2), kill(2), link(2), lseek(2), mkdir(2),
        mkfifo(2), open(2), pathconf(2), pause(2), pipe(2), raise(3),
        read(2), rename(2), rmdir(2), setgid(2), setpgid(2), setsid(2),
        setuid(2), sigaction(2), sigaddset(3), sigdelset(3),
        sigemptyset(3), sigfillset(3), sigismember(3), signal(3),
        sigpending(2), sigprocmask(2), sigsuspend(2), sleep(3), stat(2),
        sysconf(3), tcdrain(3), tcflow(3), tcflush(3), tcgetattr(3),
        tcgetpgrp(3), tcsendbreak(3), tcsetattr(3), tcsetpgrp(3), time(3),
        times(3), umask(2), uname(3), unlink(2), utime(3), wait(2),
        waitpid(2), write(2), sigpause(3), sigset(3).

    All functions not in the above list are considered to be unsafe with re-
    spect to signals. That is to say, the behaviour of such functions when
    called from a signal handler is undefined. In general though, signal
    handlers should do little more than set a flag; most other actions are
    not safe."

    It is suggested to take special care when performing any non-atomic
    operations while signal delivery is not blocked, and/or not to rely on
    internal program state in a signal handler. Generally, whenever acceptable,
    signal handlers should do little more than set a flag.
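
    To make that advice concrete, here is a minimal sketch (mine, not part of
    the original paper) of the flag-only pattern: the handler merely records
    that the signal arrived, and all real work happens in the main loop, where
    non-reentrant functions are safe to call:

    #include <signal.h>
    #include <unistd.h>

    volatile sig_atomic_t got_sighup = 0;

    void hup_handler(int sig) {
      /* async-signal-safe: only assign to a volatile sig_atomic_t flag */
      got_sighup = 1;
    }

    int main(void) {
      signal(SIGHUP, hup_handler);
      for (;;) {
        if (got_sighup) {
          got_sighup = 0;
          /* reload configuration, call syslog(), free() buffers, etc. -
             done here, outside the handler, so non-reentrant libcalls
             run only from ordinary program context */
        }
        pause();  /* note: a signal arriving between the check and pause()
                     is only handled after the next wakeup; sigsuspend()
                     with a carefully managed mask avoids that race */
      }
    }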

    Unfortunately, there has been no known, practical security research into
    such bad coding practices. And while a signal can be delivered at any point
    during the userspace execution of a given program, most programmers never
    take enough care to avoid the potential implications of this fact.
    Approximately 80 to 90% of the signal handlers we have examined were written
    in an insecure manner.

    This paper is an attempt to demonstrate and analyze the actual risks caused
    by this kind of coding practice, and to discuss threat scenarios that can be
    used by an attacker in order to escalate local privileges, or, sometimes,
    gain remote access to a machine. This class of vulnerabilities affects
    numerous complex setuid programs (Sendmail, screen, pppd, etc.) and
    several network daemons (ftpd, httpd and so on).

    Thanks to Theo de Raadt for bringing this problem to my attention;
    to Przemyslaw Frasunek for discussing remote attack possibilities; to Dvorak,
    Chris Evans and Pekka Savola for their outstanding contributions to the field
    of heap corruption attacks; to Gregory Neil Shapiro and Solar Designer for their comments
    on the issues discussed below. Additional thanks to Mark Loveless,
    Dave Mann, Matt Power and other RAZOR team members for their support and
    reviews.

    1) Impact: handler re-entry (Sendmail case)

    Before we discuss more generalized attack scenarios, I would like to explain
    signal handler races starting with a very simple and clean example. We will
    try to exploit a non-atomic signal handler. The following code generalizes,
    in a simplified way, a very common bad coding practice (present, for
    example, in the setuid root Sendmail program up to 8.11.3 and 8.12.0.Beta7):

    /*********************************************************
     * This is a generic verbose signal handler - does some  *
     * logging and cleanup, probably calling other routines. *
     *********************************************************/

    void sighndlr(int dummy) {
      syslog(LOG_NOTICE,user_dependent_data);
      // Initial cleanup code, calling the following somewhere:
      free(global_ptr2);
      free(global_ptr1);
      // *** 1 *** >> Additional clean-up code - unlink tmp files, etc <<
      exit(0);
    }

    /**************************************************
     * This is a signal handler declaration somewhere *
     * at the beginning of main code.                 *
     **************************************************/

      signal(SIGHUP,sighndlr);
      signal(SIGTERM,sighndlr);

      // Other initialization routines, and global pointer
      // assignment somewhere in the code (we assume that
      // nnn is partially user-dependent, yyy does not have to be):

      global_ptr1=malloc(nnn);
      global_ptr2=malloc(yyy);

      // *** 2 *** >> further processing, allocated memory <<
      // *** 2 *** >> is filled with any data, etc...      <<

    This code seems pretty immune to any kind of security compromise. But
    this is just an illusion. By delivering one of the signals handled by the
    sighndlr() function somewhere in the middle of main code execution (the
    point marked '*** 2 ***' in the example above), code execution reaches the
    handler function. Let's assume we delivered SIGHUP. A syslog message is
    written, the two pointers are freed, and some more clean-up is done before
    exiting (point '*** 1 ***').

    Now, by quickly delivering another signal - SIGTERM (note that the already
    delivered signal is masked and would not be delivered again, so you cannot
    deliver SIGHUP, but there is absolutely nothing preventing delivery of
    SIGTERM) - the attacker can cause the sighndlr() function to be re-entered.
    This is a very common situation - 'shared' handlers are declared for
    SIGQUIT, SIGTERM, SIGINT, and so on.

    Now, for the purpose of this demonstration, we would like to target heap
    structures by exploiting free() and syslog() behavior. It is very important
    to understand how the [v]syslog() implementation works. We will focus on the
    Linux glibc code - this function creates a temporary copy of the logged
    message in a so-called memory-buffer stream, which is dynamically allocated
    using two malloc() calls - the first one allocates the general stream
    description structure, and the other one creates the actual buffer that will
    contain the logged message.
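
    For orientation, here is a minimal standalone sketch of the memory-buffer
    stream usage (my simplification, not the glibc source; the exact allocation
    pattern and sizes depend on the glibc version):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
      char *buf;
      size_t len;
      /* first allocation: the stream description structure */
      FILE *f = open_memstream(&buf, &len);
      /* further allocation(s): the buffer holding the formatted message */
      fprintf(f, "%s", "user-controlled log message");
      fclose(f);
      printf("formatted: %s (%zu bytes)\n", buf, len);
      free(buf);  /* the caller owns the malloc()ed buffer after fclose() */
      return 0;
    }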

    Please refer to the following URL for the vsyslog() function sources:

    http://src.openresources.com/debian/src/libs/HTML/S/glibc_2.0.7t.orig%20glibc-2.0.7t.orig%20misc%20syslog.c.html#101

    Stream management functions (open_memstream, etc.) can be found at:

    http://src.openresources.com/debian/src/libs/HTML/S/glibc_2.0.7t.orig%20glibc-2.0.7t.orig%20libio%20memstream.c.html#63

    In order for this particular attack to be successful, two conditions have
    to be met:

    • syslog() data must be user-dependent (like in Sendmail log messages
      describing transferred mail traffic),

    • the second of these two global memory blocks must be aligned in such a
      way that it will be re-used by the second open_memstream() malloc() call.

    The second buffer (global_ptr2) would be free()d during the first
    sighndlr() call, so if these conditions are met, the second syslog()
    call would re-use this memory and overwrite this area, including
    heap-management structures, with the user-dependent syslog() buffer.

    Of course, this situation is not limited to two global buffers - generally,
    we need one out of any number of free()d buffers to be aligned that way.
    Additional possibilities are related to interrupting the free() chain by
    precisely timed SIGTERM delivery and/or influencing buffer sizes and heap
    data ordering by using different input data patterns.

    If so, the attacker can cause the second free() pass to be called with a
    pointer to user-dependent data (the syslog buffer), which leads to an
    instant root compromise - see the excellent article by Chris Evans (based
    on observations by Pekka Savola):

      http://lwn.net/2000/1012/a/traceroute.php3

    Practical discussion and exploit code for the vulnerability discussed in
    the above article can be found here:

    http://security-archive.merton.ox.ac.uk/bugtraq-200010/0084.html

    Below is a sample 'vulnerable program' code:

    --- vuln.c ---
    #include <signal.h>
    #include <syslog.h>
    #include <string.h>
    #include <stdlib.h>
    #include <unistd.h>

    void *global1, *global2;
    char *what;

    void sh(int dummy) {
      syslog(LOG_NOTICE,"%s\n",what);
      free(global2);
      free(global1);
      sleep(10);
      exit(0);
    }

    int main(int argc,char* argv[]) {
      what=argv[1];
      global1=strdup(argv[2]);
      global2=malloc(340);
      signal(SIGHUP,sh);
      signal(SIGTERM,sh);
      sleep(10);
      exit(0);
    }
    ---- EOF ----

    You can exploit it, forcing free() to be called on a memory region filled
    with 0x41414141 (you can see this value in the registers at the time
    of crash -- the bytes represented as 41 in hex are set by the 'A'
    input characters in the variable $LOG below). Sample command lines
    for a Bash shell are:

    $ gcc vuln.c -o vuln
    $ PAD=`perl -e '{print "x"x410}'`
    $ LOG=`perl -e '{print "A"x100}'`
    $ ./vuln $LOG $PAD & sleep 1; killall -HUP vuln; sleep 1; killall -TERM vuln

    The result should be a segmentation fault followed by a nice core dump
    (for Linux glibc 2.1.9x and 2.0.7).

    (gdb) back
    #0  chunk_free (ar_ptr=0x4013dce0, p=0x80499a0) at malloc.c:3069
    #1  0x4009b334 in __libc_free (mem=0x80499a8) at malloc.c:3043
    #2  0x80485b8 in sh ()
    #3  <signal handler called>
    #4  0x400d5971 in __libc_nanosleep () from /lib/libc.so.6
    #5  0x400d5801 in __sleep (seconds=10)
        at ../sysdeps/unix/sysv/linux/sleep.c:85
    #6  0x80485d6 in sh ()

    So, as you can see, the failure was caused when the signal handler was
    re-entered. The __libc_free function was called with a parameter of
    0x080499a8, which points somewhere in the middle of our AAAs:

    (gdb) x/s 0x80499a8
    0x80499a8: 'A' <repeats 94 times>, "\n"

    You can find 0x41414141 in the registers, as well, showing this data
    is being processed. For more analysis, please refer to the paper mentioned
    above.

    For the description, impact and fix information on Sendmail signal
    handling vulnerability, please refer to the RAZOR advisory at:

    http://razor.bindview.com/publish/advisories/adv_sm8120.html

    Obviously, that is just an example of this attack. Whenever signal handler
    execution is non-atomic, attacks of this kind are possible by re-entering
    the handler when it is in the middle of performing non-reentrant operations.
    Heap damage is the most obvious vector of attack, in this case, but not the
    only one.

    2) Impact: signal in the middle (screen case)

    The attack described above usually requires specific conditions
    to be met, and takes advantage of non-atomic signal handler execution,
    which can be easily avoided by using additional flags or blocking
    signal delivery.

    But, as a signal can be delivered at any moment (unless explicitly
    blocked), it is obvious that an attack can be performed without re-entering
    the handler itself. It is enough to deliver a signal at an 'inappropriate'
    moment. There are two attack schemes:

    A) re-entering libc functions:

    Every function that is not listed as reentry-safe is a potential source
    of vulnerabilities. Indeed, numerous library functions operate on global
    variables and/or modify global state in a non-atomic way.
    Once again, heap-management routines are probably the best example.
    By delivering a signal while malloc(), free() or any other libcall of
    this kind is executing, all subsequent calls to the heap management
    routines made from the signal handler will have unpredictable effects,
    as the heap state is completely unpredictable to the programmer.

    Other good examples are functions working on static/global variables
    and buffers like certain implementations of strtok(), inet_ntoa(),
    gethostbyname() and so on. In all cases, results will be unpredictable.
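
    As a purely hypothetical illustration (mine, not from the paper): strtok()
    keeps its scan position in hidden static state, so a handler that runs its
    own strtok() pass derails whatever tokenization it interrupted in the main
    program:

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>

    char conf[] = "user=alice;home=/home/alice;shell=/bin/sh";

    void handler(int sig) {
      /* BAD: this strtok() pass clobbers the static state of the strtok()
         loop that was interrupted in main() */
      char msg[] = "cleaning up: a b c";
      for (char *t = strtok(msg, " "); t; t = strtok(NULL, " "))
        ;  /* pretend to process cleanup items */
    }

    int main(void) {
      signal(SIGTERM, handler);
      for (char *t = strtok(conf, ";"); t; t = strtok(NULL, ";"))
        printf("option: %s\n", t);  /* a SIGTERM delivered here makes the
                                       next strtok(NULL, ...) continue from
                                       a stale pointer into the handler's
                                       (now dead) buffer */
      return 0;
    }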

    B) interrupting non-atomic modifications:

    This is basically the same problem, but outside library functions.
    For example, the following code:

     dropped_privileges = 1;
     setuid(getuid());

    is, technically speaking, using only safe library functions. But, at the
    same time, it is possible to interrupt execution between the assignment
    and the setuid() call, causing the signal handler to be executed with the
    dropped_privileges flag set, but with superuser privileges not yet dropped
    (a defensive sketch follows below).

    This, very often, can be a source of serious problems.
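
    One way to close that window, sketched here as my own illustration (not
    part of the original text), is to keep the relevant signals blocked while
    the flag and the uid are brought back in sync, so the handler can never
    observe the half-done state:

    #include <signal.h>
    #include <unistd.h>

    volatile sig_atomic_t dropped_privileges = 0;

    void drop_privileges(void) {
      sigset_t block, old;
      sigemptyset(&block);
      sigaddset(&block, SIGTERM);
      sigaddset(&block, SIGHUP);

      /* no handler can run between the flag update and setuid() */
      sigprocmask(SIG_BLOCK, &block, &old);
      dropped_privileges = 1;
      setuid(getuid());
      sigprocmask(SIG_SETMASK, &old, NULL);
    }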

    First of all, we would like to come back to the Sendmail example, to
    demonstrate the potential consequences of re-entering libc. Note that the
    signal handler is NOT re-entered - the signal is delivered only once:

    #0 0x401705bc in chunk_free (ar_ptr=0x40212ce0, p=0x810f900) at malloc.c:3117
    #1 0x4016fd12 in chunk_alloc (ar_ptr=0x40212ce0, nb=8200) at malloc.c:2601
    #2 0x4016f7e6 in __libc_malloc (bytes=8192) at malloc.c:2703
    #3 0x40168a27 in open_memstream (bufloc=0xbfff97bc, sizeloc=0xbfff97c0) at memstream.c:112
    #4 0x401cf4fa in vsyslog (pri=6, fmt=0x80a5e03 "%s: %s", ap=0xbfff99ac) at syslog.c:142
    #5 0x401cf447 in syslog (pri=6, fmt=0x80a5e03 "%s: %s") at syslog.c:102
    #6 0x8055f64 in sm_syslog ()
    #7 0x806793c in logsender ()
    #8 0x8063902 in dropenvelope ()
    #9 0x804e717 in finis ()
    #10 0x804e9d8 in intsig () <---- SIGINT
    #11 <signal handler called>
    #12 chunk_alloc (ar_ptr=0x40212ce0, nb=4104) at malloc.c:2968
    #13 0x4016f7e6 in __libc_malloc (bytes=4097) at malloc.c:2703

    Heap corruption is caused by the interrupted malloc() call and, later, by
    calling malloc() once again from the vsyslog() function invoked from the
    handler.

    There are two other examples of very interesting stack corruption caused by
    re-entering heap management routines in the Sendmail daemon - in both cases,
    the signal was delivered only once:

    A)

    #0 0x401705bc in chunk_free (ar_ptr=0xdbdbdbdb, p=0x810b8e8) at malloc.c:3117
    #1 0xdbdbdbdb in ?? ()

    B)

    /.../
    #9 0x79f68510 in ?? ()
    Cannot access memory at address 0xc483c689

    We'd like to leave this one as an exercise for the reader - try to figure
    out why this happens and why this problem can be exploited. For now, we
    would like to come back to our second scenario, interrupting non-atomic
    code, to show that targeting the heap is not the only possibility.

    Some programs temporarily return to the superuser UID in cleanup routines,
    e.g., in order to unlink specific files. Very often, by entering the
    handler at a given moment, it is possible to perform all the cleanup file
    access operations with superuser privileges.

    Here's an example of such coding, which can be found mainly in
    interactive setuid software:

    --- vuln2.c ---
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <stdlib.h>
    #include <unistd.h>

    void sh(int dummy) {
      printf("Running with uid=%d euid=%d\n",getuid(),geteuid());
    }

    int main(int argc,char* argv[]) {
      seteuid(getuid());
      setreuid(0,getuid());
      signal(SIGTERM,sh);
      sleep(5);

      // this is temporarily privileged code:
      seteuid(0);
      unlink("tmpfile");
      sleep(5);
      seteuid(getuid());

      exit(0);
    }
    ---- EOF ----

    gcc vuln2.c -o vuln; chmod 4755 vuln

    su user

    $ ./vuln & sleep 3; killall -TERM vuln; sleep 3; killall -TERM vuln
    Running with uid=500 euid=500
    Running with uid=500 euid=0

    Such a coding practice can be found, for example, in the 'screen' utility
    developed by Oliver Laumann. One of the most obvious locations is the
    CoreDump handler [screen.c]:

    static sigret_t
    CoreDump SIGDEFARG
    {
      /.../
      setgid(getgid());
      setuid(getuid());
      unlink("core");
      /.../

    SIGSEGV can be delivered in the middle of the user-initiated screen detach
    routine, for example. To better understand what is going on and why, here's
    strace output for the detach (Ctrl+A, D) command:

    23534 geteuid() = 0
    23534 geteuid() = 0
    23534 getuid() = 500
    23534 setreuid(0, 500) = 0 HERE IT HAPPENS
    23534 getegid() = 500
    23534 chmod("/home/lcamtuf/.screen/23534.tty5.nimue", 0600) = 0
    23534 utime("/home/lcamtuf/.screen/23534.tty5.nimue", NULL) = 0
    23534 geteuid() = 500
    23534 getuid() = 0

    The marked line sets the real uid to zero. If SIGSEGV is delivered
    somewhere near this point, the CoreDump() handler will run with superuser
    privileges, because its initial setuid(getuid()) call now resolves to
    setuid(0).

    3) Remote exploitation of signal delivery (WU-FTPD case)

    This is a very interesting issue, directly related to re-entering libc
    functions and/or interrupting non-atomic code. Many complex daemons,
    like ftp, some http/proxy services, MTAs, etc., have SIGURG handlers
    declared - very often these handlers are pretty verbose, calling syslog(),
    or freeing some resources allocated for a specific connection. The trick is
    that SIGURG, obviously, can be delivered over the network, using a TCP/IP
    OOB (out-of-band) message. Thus, it is possible to perform attacks over the
    network without any privileges.
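
    To see why this is remotely triggerable: a single byte sent with the
    MSG_OOB flag makes the receiving kernel raise SIGURG in the process that
    owns the socket (daemons like ftpd arrange this with fcntl(F_SETOWN)).
    A minimal client-side sketch of my own - the address and port are
    placeholders, not taken from the paper:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
      int fd = socket(AF_INET, SOCK_STREAM, 0);
      struct sockaddr_in sa;
      memset(&sa, 0, sizeof(sa));
      sa.sin_family = AF_INET;
      sa.sin_port = htons(21);                       /* placeholder: ftpd */
      inet_pton(AF_INET, "192.0.2.1", &sa.sin_addr); /* placeholder host */
      connect(fd, (struct sockaddr *)&sa, sizeof(sa));

      /* one urgent byte: the remote process receives SIGURG and its
         (possibly verbose, non-reentrant) handler runs at a moment the
         attacker chooses */
      send(fd, "x", 1, MSG_OOB);
      close(fd);
      return 0;
    }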

    Below is a SIGURG handler routine, which, with small modifications,
    is shared both by BSD ftpd and WU-FTPD daemons:

    static VOIDRET myoob FUNCTION((input), int input)
    {
      /.../
      if (getline(cp, 7, stdin) == NULL) {
        reply(221, "You could at least say goodbye.");
        dologout(0);
      }
      /.../
    }

    As you can see, under certain conditions the dologout() function is
    called. This routine looks like this:

    dologout(int status)
    {
      /.../
      if (logged_in) {
        delay_signaling();  /* we can't allow any signals while euid==0: kinch */
        (void) seteuid((uid_t) 0);
        wu_logwtmp(ttyline, "", "");
      }
      if (logging)
        syslog(LOG_INFO, "FTP session closed");
      /.../
    }

    As you can see, the authors took an additional precaution not to allow
    signal delivery in the "logged_in" case. Unfortunately, syslog() is
    a perfect example of a libc function that should NOT be called during
    signal handling, regardless of whether "logged_in" or any other
    special condition happens to be in effect.

    As mentioned before, heap management functions such as malloc() are
    called within syslog(), and these functions are not atomic. The OOB
    message might arrive when the heap is in virtually any possible state.
    Playing with uids / privileges / internal state is an option, as well.

    4) Practical considerations: timing

    In most cases, timing is a non-issue for local attacks, as the attacker
    might control the execution environment (e.g., the load average, the
    number of local files that the daemon needs to access, etc.) and can try
    a virtually infinite number of times by invoking the same program over
    and over again, increasing the probability of delivering the signal at a
    given point. For remote attacks, timing is a major issue, but as long
    as the attack itself does not cause the service to stop responding,
    thousands of attempts can be performed.

    5) Solving signal race problems

    This is a very complex and difficult task. There are at least three aspects
    of this:

    • Using only reentrant-safe libcalls in signal handlers. This would
      require major rewrites of numerous programs. Another half-solution is
      to implement a wrapper around every insecure libcall used, with a
      special global flag checked to avoid re-entry,

    • Blocking signal delivery during all non-atomic operations and/or
      constructing signal handlers in a way that does not rely on
      internal program state (e.g. unconditionally setting a specific flag
      and nothing else),

    • Blocking signal delivery in signal handlers (see the sigaction() sketch
      below).
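
    For the last point, sigaction() already provides the machinery: sa_mask
    lets the programmer keep the other 'shared' signals blocked while the
    handler runs, which would have prevented the sighndlr() re-entry from
    section 1. A rough sketch of my own, not taken from the paper:

    #include <signal.h>
    #include <string.h>

    extern void sighndlr(int sig);  /* the shared cleanup handler */

    void install_handlers(void) {
      struct sigaction sa;
      memset(&sa, 0, sizeof(sa));
      sa.sa_handler = sighndlr;
      sigemptyset(&sa.sa_mask);
      /* while sighndlr() runs for one of these signals, keep the others
         blocked so the handler cannot be re-entered */
      sigaddset(&sa.sa_mask, SIGHUP);
      sigaddset(&sa.sa_mask, SIGTERM);
      sigaddset(&sa.sa_mask, SIGINT);
      sigaddset(&sa.sa_mask, SIGQUIT);

      sigaction(SIGHUP,  &sa, NULL);
      sigaction(SIGTERM, &sa, NULL);
      sigaction(SIGINT,  &sa, NULL);
      sigaction(SIGQUIT, &sa, NULL);
    }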

                                                      Michal Zalewski
                                         <lcamtuf@razor.bindview.com>
                                                      16-17 May, 2001
    October 25, 2015 at 6:38:18 PM GMT+1 - permalink - http://lcamtuf.coredump.cx/signals.txt
    hacking sécurité exploit
  • Security Response & Bug Bounty Platform
    June 23, 2015 at 5:20:03 PM GMT+2 - permalink - https://hackerone.com/
    job travail sécurité hacking bounty
  • Wifi Phisher

    Rather than cracking WPA… just set up a fake AP, do a bit of social engineering, and poof…

    January 5, 2015 at 5:57:34 AM GMT+1 - permalink - https://github.com/sophron/wifiphisher
    hacking sécurité
  • Why are free proxies free?

    Doing a MITM on an open wifi, fine. But sending headers asking the browser to cache infected JS files is incredibly ingenious…

    January 5, 2015 at 4:52:31 AM GMT+1 - permalink - https://blog.haschek.at/post/fd9bc
    hacking sécurité
  • Linux Exploit Suggester by PenturaLabs
    December 28, 2014 at 3:14:34 PM GMT+1 - permalink - https://penturalabs.github.io/Linux_Exploit_Suggester/
    hacking sécurité kernel linux
  • Dashboard | CrowdCurity

    A site for freelancing in computer security!

    September 30, 2014 at 6:49:15 PM GMT+2 - permalink - https://www.crowdcurity.com/
    job travail jobs sécu hacking sécurité
  • Indexeus

    Along with Shodan, this looks like a real must-have.

    August 5, 2014 at 2:45:03 AM GMT+2 - permalink - http://indexeus.org/
    sécurité hacking
  • Dropbear SSH
    July 28, 2014 at 10:14:00 PM GMT+2 - permalink - https://matt.ucc.asn.au/dropbear/dropbear.html
    sécurité hacking ssh adminsys
  • Stealing unencrypted SSH-agent keys from memory - NetSPI Blog
    July 26, 2014 at 1:37:51 AM GMT+2 - permalink - https://www.netspi.com/blog/entryid/235/stealing-unencrypted-ssh-agent-keys-from-memory
    hacking sécurité ssh
  • Metasploit Unleashed

    An extremely well-made guide for learning to use Metasploit.

    July 22, 2014 at 9:39:26 PM GMT+2 - permalink - http://www.offensive-security.com/metasploit-unleashed/Main_Page
    metasploit hacking sécurité pentest
  • We lost the war. Welcome to the world of tomorrow.

    Losing a war is never a pretty situation. So it is no wonder that most people do not like to acknowledge that we have lost. We had a reasonable chance to tame the wild beast of universal surveillance technology, approximately until September 10th, 2001. One day later, we had lost. All the hopes we had of keeping the big corporations and “security forces” at bay and developing interesting alternative concepts in the virtual world evaporated with the smoke clouds of the World Trade Center.

    Just right before, everything looked not too bad. We had survived Y2K with barely a scratch. The world’s outlook was mildly optimistic after all. The “New Economy” bubble gave most of us fun things to do and the fleeting hope of plenty of cash not so far down the road. We had won the Clipper-Chip battle, and crypto-regulation as we knew it was a thing of the past. The waves of technology development seemed to work in favor of freedom, most of the time. The future looked like a yellow brick road to a nirvana of endless bandwidth, the rule of ideas over matter and dissolving nation states. The big corporations were at our mercy because we knew what the future would look like and we had the technology to build it. Those were the days. Remember them for your grandchildren’s bedtime stories. They will never come back again.

    We are now deep inside the other kind of future, the future that we speculated about as a worst case scenario, back then. This is the ugly future, the one we never wanted, the one that we fought to prevent. We failed. Probably it was not even our fault. But we are forced to live in it now.
    Democracy is already over

    By their very nature, the western democracies have become a playground for lobbyists, industry interests and conspiracies that have absolutely no interest in real democracy. The “democracy show” must go on nonetheless. Conveniently, the show consumes the energy of those that might otherwise become dangerous to the status quo. The show provides the necessary excuse when things go wrong and keeps up the illusion of participation. Also, the system provides organized and regulated battleground rules to find out which interest groups and conspiracies have the upper hand for a while. Most of the time it prevents open and violent power struggles that could destabilize everything. So it is in the best interest of most players to keep at least certain elements of the current “democracy show” alive. Even for the more evil conspiracies around, the system is useful as it is. Certainly, the features that could provide unpleasant surprises like direct popular votes on key issues are the least likely to survive in the long run.

    Of course, those in power want to minimize the influence of random chaotic outbursts of popular will as much as possible. The real decisions in government are not made by ministers or the parliament. The real power of government rests with the undersecretaries and other high-level, non-elected civil servants who stay while the politicians come and go. Especially in the bureaucracies of the intelligence agencies, the ministry of interior, the military, and other key nodes of power the long-term planning and decision-making is not left to the incompetent mediocre political actors that get elected more or less at random. Long term stability is a highly valued thing in power relations. So even if the politicians of states suddenly start to be hostile to each other, their intelligence agencies will often continue to cooperate and trade telecommunication interception results as if nothing has happened.

    Let’s try for a minute to look at the world from the perspective of such a 60-year-old bureaucrat who has access to the key data, the privilege to be paid to think ahead, and the task to prepare the policy for the next decades. What he would see could look like this:


    First,
    paid manual labor will be eaten away further by technology, even more rapidly than today. Robotics will evolve far enough to kill a sizeable chunk of the remaining low-end manual jobs. Of course, there will be new jobs, servicing the robots, biotech, designing stuff, working on the nanotech developments etc. But these will be few, compared with today, and require higher education. Globalization continues its merciless course and will also export a lot of jobs of the brain-labor type to India and China, as soon as education levels there permit it.

    So the western societies will end up with a large percentage of the population, at least a third, but possibly half of those of working age, having no real paid work. There are those whose talents are cheaper to be had elsewhere, and those who are more inclined to manual labor. Not only the undereducated but all those who simply cannot find a decent job anymore. This part of the population needs to be pacified, either by Disney or by Dictatorship, most probably by both. The unemployment problem severely affects the ability of states to pay for social benefits. At some point it becomes cheaper to put money into repressive police forces and rule by fear than to put the money into pay-outs to the unemployed population and buy social peace. Criminal activities look more interesting when there is no decent job to be had. Violence is the unavoidable consequence of degrading social standards. Universal surveillance might dampen the consequences for those who remain with some wealth to defend.


    Second,
    climate change increases the frequency and devastation of natural disasters, creating large scale emergency situations. Depending on geography, large parts of land may become uninhabitable due to drought, flood, fires or plagues. This creates a multitude of unpleasant effects. A large number of people need to move, crop and animal production shrinks, industrial centers and cities may be damaged to the point where abandoning them is the only sensible choice left. The loss of property like non-usable (or non-insurable) real estate will be frightening. The resulting internal migratory pressures towards “safe areas” become a significant problem. Properly trained personnel, equipment, and supplies to respond to environmental emergencies need to be on standby all the time, eating up scarce government resources. The conscript parts of national armed forces may be formed into disaster relief units as they hang around anyway with no real job to do except securing fossil energy sources abroad and helping out the border police.

    Third,
    immigration pressure from neighboring regions will rise in all western countries. It looks like the climate disaster will strike worst at first in areas like Africa and Latin America, and the economy there is unlikely to cope any better than the western countries with globalization and the other problems ahead. So the number of people who want to leave there for somewhere habitable at all costs will rise substantially. The western countries need a certain amount of immigration to fill up their demographic holes but the number of people who want to come will be far higher. Managing a controlled immigration process according to the demographic needs is a nasty task where things can only go wrong most of the time. The nearly unavoidable reaction will be a Fortress Europe: serious border controls and fortifications, frequent and omnipresent internal identity checks, fast and merciless deportation of illegal immigrants, biometrics on every possible corner. Technology for border control can be made quite efficient once ethical hurdles have fallen.

    Fourth,
    at some point in the next decades the energy crisis will strike with full force. Oil will cost a fortune as production capacities can no longer be extended economically to meet the rising demand. Natural gas and coal will last a bit longer, a nuclear renaissance may dampen the worst of the pains. But the core fact remains: a massive change in energy infrastructure is unavoidable. Whether the transition will be harsh, painful and society-wrecking, or just annoying and expensive, depends on how soon before peak oil the investments into new energy systems start on a massive scale, as oil becomes too expensive to burn. Procrastination is a sure recipe for disaster. The geo-strategic and military race for the remaining large reserves of oil has already begun and will cost vast resources.

    Fifth,
    we are on the verge of technology developments that may require draconic restrictions and controls to prevent the total disruption of society. Genetic engineering and other biotechnology as well as nanotechnology (and potentially free energy technologies if they exist) will put immense powers into the hands of skilled and knowledgeable individuals. Given the general rise in paranoia, most people (and for sure those in power) will not continue to trust that common sense will prevent the worst. There will be a tendency towards controls that keep this kind of technology in the hands of “trustworthy” corporations or state entities. These controls, of course, need to be enforced, and surveillance of the usual suspects must be put in place to get advance knowledge of potential dangers. Science may no longer be a harmless, self-regulating thing but something that needs to be tightly controlled and regulated, at least in the critical areas. The measures needed to contain a potential global pandemic from the Strange Virus of the Year are just a subset of those needed to contain a nanotech or biotech disaster.

    Now what follows from this view of the world? What changes to society are required to cope with these trends from the viewpoint of our 60-year-old power brokering bureaucrat?

    Strategically it all points to massive investments into internal security.
    Presenting the problem to the population as a mutually exclusive choice between an uncertain dangerous freedom and an assured survival under the securing umbrella of the trustworthy state becomes easier the further the various crises develop. The more wealthy parts of the population will certainly require protection from illegal immigrants, criminals, terrorists and implicitly also from the anger of less affluent citizens. And since the current system values rich people more than poor ones, the rich must get their protection. The security industry will certainly be of happy helpful assistance, especially where the state can no longer provide enough protection for the taste of the lucky ones.

    Traditional democratic values have been eroded to the point where most people don’t care anymore. So the loss of rights our ancestors fought for not so long ago is at first happily accepted by a majority that can easily be scared into submission. “Terrorism” is the theme of the day, others will follow. And these “themes” can and will be used to mold the western societies into something that has never been seen before: a democratically legitimated police state, ruled by an unaccountable elite with total surveillance, made efficient and largely unobtrusive by modern technology. With the enemy (immigrants, terrorists, climate catastrophe refugees, criminals, the poor, mad scientists, strange diseases) at the gates, the price that needs to be paid for “security” will look acceptable.

    Cooking up the “terrorist threat” by apparently stupid foreign policy and senseless intelligence operations provides a convenient method to get through with the establishment of a democratically legitimized police state. No one cares that car accidents alone kill many more people than terrorists do. The fear of terrorism accelerates the changes in society and provides the means to get the suppression tools required for the coming waves of trouble.

    What we call today “anti-terrorism measures” is the long-term planned and conscious preparation of those in power for the kind of world described above.
    The Technologies of Oppression

    We can imagine most of the surveillance and oppression technology rather well. Blanket CCTV coverage is reality in some cities already. Communication pattern analysis (who talks to whom at what times) is frighteningly effective. Movement pattern recording from cellphones, traffic monitoring systems, and GPS tracking is the next wave that is just beginning. Shopping records (online, credit and rebate cards) are another source of juicy data. The integration of all these data sources into automated behavior pattern analysis currently happens mostly on the dark side.

    The key question for establishing an effective surveillance based police state is to keep it low-profile enough that “the ordinary citizen” feels protected rather than threatened, at least until all the pieces are in place to make it permanent. First principle of the 21st century police state: All those who “have nothing to hide” should not be bothered unnecessarily. This goal becomes even more complicated as, with the increased availability of information on even minor everyday infringements, the “moral” pressure to prosecute will rise. Intelligence agencies have always understood that effective work with interception results requires a thorough selection between cases where it is necessary to do something and those (the majority) where it is best to just be silent and enjoy.

    Police forces in general (with a few exceptions), on the other hand, have the duty to act upon every crime or minor infringement they get knowledge of. Of course, they have a certain amount of discretion already. With access to all the information outlined above, we will end up with a system of selective enforcement. It is impossible to live in a complex society without violating a rule here and there from time to time, often even without noticing it. If all these violations are documented and available for prosecution, the whole fabric of society changes dramatically. The old sign for totalitarian societies – arbitrary prosecution of political enemies – becomes a reality within the framework of democratic rule-of-law states. As long as the people affected can be made to look like the enemy-“theme” of the day, the system can be used to silence opposition effectively. And at some point the switch to open automated prosecution and policing can be made, as any resistance to the system is by definition “terrorism”. Development of society comes to a standstill, the rules of the law and order paradise can no longer be violated.

    Now, disentangling ourselves from the reality tunnel of said 60-year-old bureaucrat, where is hope for freedom, creativity and fun? To be honest, we need to assume that it will take a couple of decades before the pendulum will swing back into the freedom direction, barring a total breakdown of civilization as we know it. Only if the oppression becomes too burdensome and open might there be a chance to get back to the overall progress of mankind earlier. If the powers that be are able to manage the system smoothly and skillfully, we cannot make any prediction as to when the new dark ages will be over.
    So what now?

    

    Move to the mountains, become a gardener or carpenter, search for happiness in communities of like minded people, in isolation from the rest of the world? The idea has lost its charm for most who ever honestly tried. It may work if you can find eternal happiness in milking cows at five o’clock in the morning. But for the rest of us, the only realistic option is to try to live in, with, and from the world as bad as it has become. We need to build our own communities nonetheless, virtual or real ones.

    The politics & lobby game

    So where to put your energy then? Trying to play the political game, fighting against software patents, surveillance laws, and privacy invasions in parliament and the courts can be the job of a lifetime. It has the advantage that you will win a battle from time to time and can probably slow things down. You may even be able to prevent a gross atrocity here and there. But in the end, the development of technology and the panic level of the general population will chew a lot of your victories for breakfast.

    This is not to discount the work and dedication of those of us who fight on this front. But you need to have a lawyer’s mindset and a very strong frustration tolerance to gain satisfaction from it, and that is not given to everyone. We need the lawyers nonetheless.

    Talent and Ethics

    Some of us sold their soul, maybe to pay the rent when the bubble burst and the cool and morally easy jobs became scarce. They sold their head to corporations or the government to build the kind of things we knew perfectly well how to build, that we sometimes discussed as an intellectual game, never intending to make them a reality. Like surveillance infrastructure. Like software to analyze camera images in realtime for movement patterns, faces, license plates. Like data mining to combine vast amounts of information into graphs of relations and behavior. Like interception systems to record and analyze every single phone call, e-mail, click in the web. Means to track every single move of people and things.

    Thinking about what can be done with the results of one’s work is one thing. Refusing to do the job because it could work to the detriment of mankind is something completely different. Especially when there is no other good option to earn a living in a mentally stimulating way around. Most projects by themselves were justifiable, of course. It was “not that bad” or “no real risk”. Often the excuse was “it is not technically feasible today anyway, it’s too much data to store or make sense from”. Ten years later it is feasible. For sure.

    While it certainly would be better if the surveillance industry died from lack of talent, the more realistic approach is to keep talking to those of us who sold their head. We need to generate a culture that might be compared with the sale of indulgences in the last dark ages: you may be working on the wrong side of the barricade, but we would be willing to trade you private moral absolution in exchange for knowledge. Tell us what is happening there, what the capabilities are, what the plans are, which gross scandals have been hidden. To be honest, there is very little that we know about the capabilities of today’s dark-side interception systems since the by now slightly antiquated Echelon system was discovered. All the new stuff that monitors the internet, the current and future use of database profiling, automated CCTV analysis, behavior pattern discovery and so on is only known in very few cases and vague outlines.

    We also need to know how the intelligence agencies work today. It is of highest priority to learn how the “we rather use backdoors than waste time cracking your keys”-methods work in practice on a large scale, and what backdoors have been intentionally built into or left inside our systems. Building clean systems will be rather difficult, given the multitude of options to produce a backdoor – ranging from operating system and application software to hardware and CPUs that are too complex to fully audit. Open Source only helps in theory; who has the time to really audit all the source anyway…

    Of course, the risk of publishing this kind of knowledge is high, especially for those on the dark side. So we need to build structures that can lessen the risk. We need anonymous submission systems for documents, and methods to clean out any document fingerprinting (both on paper and electronic). And, of course, we need to develop means to identify the inevitable disinformation that will also be fed through these channels to confuse us.

    Building technology to preserve the options for change

    We are facing an unprecedented onslaught of surveillance technology. The debate whether this may or may not reduce crime or terrorism is not relevant anymore. The de-facto impact on society can already be felt with the content mafia (aka the RIAA) demanding access to all data to preserve their dead business model. We will need to build technology to preserve the freedom of speech, the freedom of thought, the freedom of communication; there is no other long-term solution. Political barriers to total surveillance have a very limited half-life period.

    The universal acceptance of electronic communication systems has been a tremendous help for political movements. It has become a bit more difficult and costly to maintain secrets for those in power. Unfortunately, the same problem applies to everybody else. So one thing that we can do to help societies progress along is to provide tools, knowledge and training for secure communications to every political and social movement that shares at least some of our ideals. We should not be too narrow here in choosing our friends; everyone who opposes centralistic power structures and is not geared towards totalitarianism should be welcome. Maintaining the political breathing spaces becomes more important than what this space is used for.

    Anonymity will become the most precious thing. Encrypting communications is nice and necessary but helps little as long as the communication partners are known. Traffic analysis is the most valuable intelligence tool around. Only by automatically looking at communications and movement patterns, the interesting individuals can be filtered out, those who justify the cost of detailed surveillance. Widespread implementation of anonymity technologies becomes seriously urgent, given the data retention laws that have been passed in the EU. We need opportunistic anonymity the same way we needed opportunistic encryption. Currently, every anonymization technology that has been deployed is instantly overwhelmed with file sharing content. We need solutions for that, preferably with systems that can stand the load, as anonymity loves company and more traffic means less probability of de-anonymization by all kinds of attack.

    Closed user groups have already gained momentum in communities that have a heightened awareness and demand for privacy. The darker parts of the hacker community and a lot of the warez trading circles have gone “black” already. Others will follow. The technology to build real-world working closed user groups is not yet there. We have only improvised setups that work under very specific circumstances. Generic, easy to use technology to create fully encrypted closed user groups for all kinds of content with comfortable degrees of anonymity is desperately needed.

    Decentralized infrastructure is needed. The peer-to-peer networks are a good example to see what works and what does not. As long as there are centralized elements they can be taken down under one pretext or another. Only true peer-to-peer systems that need as few centralized elements as possible can survive. Interestingly, tactical military networks have the same requirements. We need to borrow from them, the same way they borrow from commercial and open source technology.

    Designing stuff with surveillance abuse in mind is the next logical step. A lot of us are involved in designing and implementing systems that can be abused for surveillance purposes. Be it webshop systems, databases, RFID systems, communication systems, or ordinary blog servers, we need to design things to be as safe as possible against later abuse of collected data or interception. Often there is considerable freedom to design within the limits of our day jobs. We need to use this freedom to build systems in a way that they collect as little data as possible, use encryption and provide anonymity as much as possible. We need to create a culture around that. A system design should be viewed by our peers as “good” only if it adheres to these criteria. Of course, it may be hard to sacrifice the personal power that comes with access to juicy data. But keep in mind, you will not have this job forever, and whoever takes over the system is most likely not as privacy-minded as you are. Limiting the amount of data gathered on people doing everyday transactions and communication is an absolute must if you are a serious hacker. There are many good things that can be done with RFID. For instance making recycling of goods easier and more effective by storing the material composition and hints about the manufacturing process in tags attached to electronic gadgets. But to be able to harness the good potential of technologies like this, the system needs to limit or prevent the downside as much as possible, by design, not as an afterthought.

    Not compromising your friends through stupidity or ignorance will be even more essential. We are all used to the minor fuckups of encrypted mail being forwarded unencrypted, being careless about other people’s data traces or bragging about knowledge obtained in confidence. This is no longer possible. We are facing an enemy that is euphemistically called “Global Observer” in research papers. This is meant literally. You can no longer rely on information or communication being “overlooked” or “hidden in the noise”. Everything is on file. Forever. And it can and will be used against you. And your “innocent” slip-up five years back might compromise someone you like.

    Keep silent and enjoy or publish immediately may become the new mantra for security researchers. Submitting security problems to the manufacturers provides the intelligence agencies with a long period in which they can and will use the problem to attack systems and implant backdoors. It is well known that backdoors are the way around encryption and that all big manufacturers have an agreement with the respective intelligence agencies of their countries to hand over valuable “0 day” exploit data as soon as they get them. During the months or even years it takes them to issue a fix, the agencies can use the 0 day and do not risk exposure. If an intrusion gets detected by accident, no one will suspect foul play, as the problem will be fixed later by the manufacturer. So if you discover problems, publish at least enough information to enable people to detect an intrusion before submitting to the manufacturer.

    Most important: have fun! The eavesdropping people must be laughed about, as their job is silly, boring, and ethically the worst thing to earn money with, sort of like blackmail and robbing grandmas on the street. We need to develop a “let’s have fun confusing their systems”-culture that plays with the inherent imperfections, loopholes, systematic problems, and interpretation errors that are inevitable with large scale surveillance. Artists are the right company for this kind of approach. We need a subculture of “In your face, peeping tom”. Exposing surveillance in the most humiliating and degrading manner, giving people something to laugh about, must be the goal. Also, this prevents us from becoming frustrated and tired. If there is no fun in beating the system, we will get tired of it and they will win. So let’s be flexible, creative and funny, not angry, ideological and stiff-necked.

    June 28, 2014 at 6:24:35 AM GMT+2 - permalink - http://frank.geekheim.de/?page_id=128
    société hacking surveillance philo
  • The big GSM write-up – how to capture, analyze and crack GSM? – 1. | Going on my way…

    Breaking GSM with an SDR

    April 26, 2014 at 4:53:01 PM GMT+2 - permalink - http://domonkos.tomcsanyi.net/?p=418
    hacking sdr sécurité
  • Weevely by epinna

    A really well-designed PHP shell, to replace C99Shell in my toolbox :)

    April 22, 2014 at 9:24:13 PM GMT+2 - permalink - https://epinna.github.io/Weevely/
    hacking web sécurité php
  • Fabrice Epelboin et la société de surveillance - YouTube
    March 9, 2014 at 3:54:56 PM GMT+1 - permalink - https://www.youtube.com/watch?v=QVBRC9MmZJk
    société hacking datamining
  • Disséquer du binaire - retour d'expérience - LinuxFr.org
    January 30, 2014 at 12:51:41 AM GMT+1 - permalink - https://linuxfr.org/users/mitsurugi/journaux/dissequer-du-binaire-retour-d-experience
    re hacking sécurité
  • 4 HTTP Security headers you should always be using | ibuildings
    January 26, 2014 at 1:52:12 AM GMT+1 - permalink - http://ibuildings.nl/blog/2013/03/4-http-security-headers-you-should-always-be-using
    sécurité hacking web
  • You can't beat politics with technology, says Pirate Bay cofounder Peter Sunde (Wired UK)
    November 18, 2013 at 7:34:59 PM GMT+1 - permalink - http://www.wired.co.uk/news/archive/2013-11/18/peter-sunde-hemlis-political-apathy
    internet hacking société philo