This repository has been archived by the owner on Aug 6, 2020. It is now read-only.

Comparing changes

base repository: NixOS/systemd
base: de6251bd94c9^
head repository: NixOS/systemd
compare: 837d559dfabe
  • 4 commits
  • 9 files changed
  • 1 contributor

Commits on Jan 10, 2019

  1. journald: do not store the iovec entry for process commandline on stack

    This fixes a crash where we would read the commandline, whose length is under
    control of the sending program, and then crash when trying to create a stack
    allocation for it.
    
    CVE-2018-16864
    https://bugzilla.redhat.com/show_bug.cgi?id=1653855
    
    The message actually doesn't get written to disk, because
    journal_file_append_entry() returns -E2BIG.
    
    (cherry picked from commit 084eeb8)
    keszybz authored and fpletz committed Jan 10, 2019 · commit de6251b
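
A minimal C sketch of the safer pattern this commit describes: the buffer whose size is derived from sender-controlled data goes on the heap rather than on the stack. The make_field() helper and the field name below are illustrative assumptions, not the actual journald code.

    /* Illustrative only: heap allocation instead of an alloca()/stack buffer
     * that an overly long commandline could push past the stack guard page. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical helper: build "FIELD=value" where value came from the peer. */
    static char *make_field(const char *name, const char *value) {
            size_t n = strlen(name), v = strlen(value);

            char *buf = malloc(n + v + 1);   /* heap, not alloca(n + v + 1) */
            if (!buf)
                    return NULL;             /* an oversized request fails cleanly */

            memcpy(buf, name, n);
            memcpy(buf + n, value, v + 1);   /* copy value including the NUL */
            return buf;
    }

    int main(void) {
            char *f = make_field("_CMDLINE=", "/usr/bin/example --with --many --args");
            if (f) {
                    puts(f);
                    free(f);
            }
            return 0;
    }
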
  2. journald: set a limit on the number of fields (1k)

    We allocate an iovec entry for each field, so with many short entries,
    our memory usage and processing time can be large, even with a relatively
    small message size. Let's refuse overly long entries.
    
    CVE-2018-16865
    https://bugzilla.redhat.com/show_bug.cgi?id=1653861
    
    From what I can see, the problem is not from an alloca, despite what the CVE
    description says, but from the attack multiplication that comes from creating
    many very small iovecs: (void* + size_t) for each three bytes of input message.
    
    (cherry picked from commit 052c57f)
    keszybz authored and fpletz committed Jan 10, 2019 · commit 465f990
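
Hedged sketch of the kind of cap this commit describes; ENTRY_FIELD_MAX and add_field() are assumptions for illustration, not the journald source.

    #include <errno.h>
    #include <stddef.h>
    #include <sys/uio.h>

    #define ENTRY_FIELD_MAX 1024   /* "1k" per the commit title */

    /* Account for one more field of an entry.  Each field costs one
     * struct iovec (a pointer plus a length), so many tiny fields inflate
     * memory use and processing time even when the message itself is small. */
    static int add_field(struct iovec *iov, size_t *n_fields,
                         void *data, size_t len) {
            if (*n_fields >= ENTRY_FIELD_MAX)
                    return -E2BIG;   /* too many fields: refuse the entry */

            iov[*n_fields] = (struct iovec) { .iov_base = data, .iov_len = len };
            (*n_fields)++;
            return 0;
    }

The commit message's "(void* + size_t) for each three bytes of input" is exactly the iov array above: the per-field bookkeeping can dwarf the field data itself.
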
  3. journal-remote: verify entry length from header

    Calling mhd_respond(), which ultimately calls MHD_queue_response(), is
    ineffective at this point, because MHD_queue_response() immediately returns
    MHD_NO signifying an error, because the connection is in state
    MHD_CONNECTION_CONTINUE_SENT.
    
    As Christian Grothoff kindly explained:
    > You are likely calling MHD_queue_repsonse() too late: once you are
    > receiving upload_data, HTTP forces you to process it all. At this time,
    > MHD has already sent "100 continue" and cannot take it back (hence you
    > get MHD_NO!).
    >
    > In your request handler, the first time when you are called for a
    > connection (and when hence *upload_data_size == 0 and upload_data ==
    > NULL) you must check the content-length header and react (with
    > MHD_queue_response) based on this (to prevent MHD from automatically
    > generating 100 continue).
    
    If we ever encounter this kind of error, print a warning and immediately
    abort the connection. (The alternative would be to keep reading the data,
    but ignore it, and return an error after we get to the end of data.
    That is possible, but of course puts additional load on both the
    sender and receiver, and doesn't seem important enough just to return
    a good error message.)
    
    Note that sending of the error does not work (the connection is always aborted
    when MHD_queue_response is used with MHD_RESPMEM_MUST_FREE, as in this case)
    with libµhttpd 0.59, but works with 0.61:
    https://src.fedoraproject.org/rpms/libmicrohttpd/pull-request/1
    
    (cherry picked from commit 7fdb237)
    keszybz authored and fpletz committed Jan 10, 2019 · commit 8ecc4b4
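
A hedged libmicrohttpd sketch of the pattern described in the quoted advice: check the Content-Length header on the first handler callback for a connection (when upload_data is still NULL), and queue the rejection there, before MHD has emitted "100 Continue". ENTRY_SIZE_MAX and the handler name are illustrative, not the journal-remote source.

    #include <stdlib.h>
    #include <microhttpd.h>

    #define ENTRY_SIZE_MAX (1024u * 1024u)   /* hypothetical limit */

    static int request_handler(void *cls, struct MHD_Connection *connection,
                               const char *url, const char *method,
                               const char *version, const char *upload_data,
                               size_t *upload_data_size, void **con_cls) {
            if (!*con_cls) {
                    /* First call: no upload data yet, but headers are available. */
                    const char *cl = MHD_lookup_connection_value(
                                    connection, MHD_HEADER_KIND,
                                    MHD_HTTP_HEADER_CONTENT_LENGTH);

                    if (cl && strtoull(cl, NULL, 10) > ENTRY_SIZE_MAX) {
                            static const char msg[] = "Entry too large.\n";
                            struct MHD_Response *r = MHD_create_response_from_buffer(
                                            sizeof(msg) - 1, (void *) msg,
                                            MHD_RESPMEM_PERSISTENT);
                            int ret = MHD_queue_response(connection,
                                            413 /* Payload Too Large */, r);
                            MHD_destroy_response(r);
                            return ret;   /* queued before "100 Continue" goes out */
                    }

                    *con_cls = (void *) 1;   /* mark the connection as seen */
                    return MHD_YES;
            }

            /* Subsequent calls would process upload_data here. */
            *upload_data_size = 0;
            return MHD_YES;
    }
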
  4. journal-remote: set a limit on the number of fields in a message

    Existing use of E2BIG is replaced with ENOBUFS (entry too long), and E2BIG is
    reused for the new error condition (too many fields).
    
    This matches the change done for systemd-journald, hence forming the second
    part of the fix for CVE-2018-16865
    (https://bugzilla.redhat.com/show_bug.cgi?id=1653861).
    
    (cherry picked from commit ef4d6ab)
    keszybz authored and fpletz committed Jan 10, 2019 · commit 837d559
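
Sketch of the error-code split this commit describes, with assumed wording rather than the journal-remote code: ENOBUFS now stands for "entry too long", while E2BIG is reused for "too many fields".

    #include <errno.h>
    #include <stdio.h>

    /* Hypothetical mapping from the parser's return code to a log message. */
    static const char *entry_error_to_string(int r) {
            switch (r) {
            case -ENOBUFS:
                    return "Entry is too long.";          /* formerly E2BIG */
            case -E2BIG:
                    return "Entry has too many fields.";  /* new condition */
            default:
                    return "Failed to process entry.";
            }
    }

    int main(void) {
            printf("%s\n", entry_error_to_string(-E2BIG));
            printf("%s\n", entry_error_to_string(-ENOBUFS));
            return 0;
    }
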