Extended standard RFCs operations

The extend.standard namespace contains the extended operations defined in current RFCs:

extend.standard
    extend.standard.who_am_i()
    extend.standard.modify_password(
        user,
        old_password,
        new_password,
        hash_algorithm=None,
        salt=None
    )
    extend.standard.paged_search(
        search_base,
        search_filter,
        search_scope,
        dereference_aliases,
        attributes,
        size_limit,
        time_limit,
        types_only,
        get_operational_attributes,
        controls,
        paged_size,
        paged_criticality,
        generator
    )
    extend.standard.persistent_search(
        connection,
        search_base,
        search_filter,
        search_scope,
        dereference_aliases,
        attributes,
        size_limit,
        time_limit,
        controls,
        changes_only,
        events_type,
        notifications,
        streaming,
        callback
    )

To get the identity of the bound user:

c = Connection(...)
c.bind()
i_am = c.extend.standard.who_am_i()

If who_am_i() returns an empty string, the connection is bound anonymously.
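Per RFC 4532 the returned value is an authorization identity, usually prefixed with ``dn:`` or ``u:``. A minimal helper to interpret it (the function name is illustrative, not part of ldap3):

```python
def describe_identity(authz_id):
    """Interpret the authzId string returned by the Who Am I operation (RFC 4532)."""
    if not authz_id:
        return 'anonymous'  # empty string means an anonymous bind
    if authz_id.startswith('dn:'):
        return 'bound as DN ' + authz_id[3:]
    if authz_id.startswith('u:'):
        return 'bound as user ' + authz_id[2:]
    return 'bound as ' + authz_id  # server-specific form
```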

To modify a user password:

from ldap3 import Server, Connection, HASHED_SALTED_SHA256
s = Server(...)
c = Connection(s, ...)
c.bind()  # bind as someone that has permission to change user's password
new_password = c.extend.standard.modify_password('cn=test1,o=test', 'old_password', 'new_password', HASHED_SALTED_SHA256)  # a new password is set, hashed with sha256 and a random salt

A special case with modify_password is for LDAP servers that follow RFC3062. If you send the old password and do not specify a new password, the server should generate a new password compliant with the server password policy:

s = Server(...)
c = Connection(s, ...)
c.bind()  # bind as someone that has permission to change user's password
new_password = c.extend.standard.modify_password('cn=test1,o=test', 'old_password')  # a new password is generated by the server if compliant with RFC3062

The extend.standard.paged_search() operation is a convenient wrapper for the Simple Paged Search specified in RFC2696. You indicate how many entries are read in each page with the paged_size parameter (defaults to 100) and you get back a generator of entries. If you set the generator parameter to False the search is fully executed before the results are returned. If generator is set to True (the default) each subsequent page is requested only after you have read all the previously returned entries, saving memory.
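The lazy behaviour of the generator can be pictured with a plain Python sketch that simulates RFC2696 cookie-based paging against a fake result set (fetch_page stands in for one paged search request; it is not an ldap3 API):

```python
def fetch_page(entries, cookie, page_size):
    """Simulate one paged search request: return a page and the next cookie."""
    page = entries[cookie:cookie + page_size]
    next_cookie = cookie + page_size if cookie + page_size < len(entries) else None
    return page, next_cookie

def paged_results(entries, page_size=100):
    """Yield entries one at a time, requesting a new page only when needed."""
    cookie = 0
    while cookie is not None:
        page, cookie = fetch_page(entries, cookie, page_size)
        for entry in page:
            yield entry

# only the first page is fetched until the consumer asks for more entries
gen = paged_results(list(range(250)), page_size=100)
```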

In the modify_password() extended operation you can specify a hashing algorithm, useful if your LDAP server stores hashed passwords but does not compute the hash itself. Otherwise you can send the plain password and the server will hash it.

Algorithms names are defined in the ldap3 module. You can choose between:

  • HASHED_NONE (no hashing is performed, password is sent in plain text)
  • HASHED_MD5
  • HASHED_SHA
  • HASHED_SHA256
  • HASHED_SHA384
  • HASHED_SHA512
  • HASHED_SALTED_MD5
  • HASHED_SALTED_SHA
  • HASHED_SALTED_SHA256
  • HASHED_SALTED_SHA384
  • HASHED_SALTED_SHA512

If you don’t specify a salt parameter a random salt is generated by the ldap3 library. Keep in mind that only salted passwords provide a strong level of security against dictionary attacks.
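The salted schemes follow the usual RFC 2307 convention: the digest of password + salt is concatenated with the salt itself and base64-encoded behind a scheme prefix. A stdlib sketch of the {SSHA} (salted SHA-1) form, which ldap3's hashed() builds for you:

```python
import base64
import hashlib
import os

def make_ssha(password, salt=None):
    """Build an RFC 2307 style {SSHA} value: base64(sha1(password + salt) + salt)."""
    if salt is None:
        salt = os.urandom(8)  # random salt when none is given, as ldap3 does
    digest = hashlib.sha1(password.encode('utf-8') + salt).digest()
    return '{SSHA}' + base64.b64encode(digest + salt).decode('ascii')

def check_ssha(password, ssha_value):
    """Verify a password against a stored {SSHA} value by re-hashing with its salt."""
    raw = base64.b64decode(ssha_value[len('{SSHA}'):])
    digest, salt = raw[:20], raw[20:]  # SHA-1 digests are 20 bytes long
    return hashlib.sha1(password.encode('utf-8') + salt).digest() == digest
```

Because the salt is stored alongside the digest, the server can verify passwords, while identical passwords still hash to different values.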

To directly modify the userPassword attribute via a Modify operation you probably have to send the hashed password yourself. In this case you can use the hashed() function in the ldap3.utils.hashed module:

from ldap3 import HASHED_SALTED_SHA, MODIFY_REPLACE
from ldap3.utils.hashed import hashed

hashed_password = hashed(HASHED_SALTED_SHA, 'new_password')
c.modify('cn=user1,o=test', {'userPassword': [(MODIFY_REPLACE, [hashed_password])]})

To enable a Persistent Search that receives all modifications in the tree as they happen (for logging purposes):

from ldap3 import Server, Connection, ASYNC_STREAM
s = Server('myserver')
c = Connection(s, 'cn=admin,o=resources', 'password', client_strategy=ASYNC_STREAM)
c.stream = open('myfile.log', 'w+')
p = c.extend.standard.persistent_search()

Now the persistent search is running in an internal thread. Each modification is recorded in the log in LDIF-CHANGE format, with the event type, the event time, the modified dn and the changelog number (if available) as comments.

This uses the AsyncStream strategy, because a Persistent Search never sends the “SearchDone” packet, which is not valid LDAPv3 behaviour. This is the reason the https://www.ietf.org/proceedings/50/I-D/ldapext-psearch-03.txt draft was never approved as a standard RFC. The AsyncStream strategy hands each received packet to an external thread where it can be processed as soon as it arrives.

In the persistent_search() method you can use the same parameters as a standard search. It also accepts some additional parameters specific to the persistent search:

def persistent_search(self,
                      search_base='',
                      search_filter='(objectclass=*)',
                      search_scope=SUBTREE,
                      dereference_aliases=DEREF_NEVER,
                      attributes=ALL_ATTRIBUTES,
                      size_limit=0,
                      time_limit=0,
                      controls=None,
                      changes_only=True,
                      show_additions=True,
                      show_deletions=True,
                      show_modifications=True,
                      show_dn_modifications=True,
                      notifications=True,
                      streaming=True,
                      callback=None
                      ):

If you don’t pass any parameters the search is applied globally to your LDAP server.

You can choose which kinds of events to receive with the show_* boolean parameters. notifications=True allows you to receive the original dn of a modify_dn operation and the changelog number, if provided by the server.

If you want to stop the persistent search you can use p.stop(). Use p.start() to start it again.

If you don’t provide a stream (a file-like object to write to), a StringIO object is used. You can use it as a standard file or get its contents with c.stream.getvalue().

For example an output from my test suite is the following:

# 2016-07-10T23:34:41.616615
# add
dn: cn=[71973491]modify-dn-1,o=test
objectClass: inetOrgPerson
objectClass: organizationalPerson
objectClass: Person
objectClass: ndsLoginProperties
objectClass: Top
sn: modify-dn-1
cn: [71973491]modify-dn-1
ACL: 2#subtree#cn=[71973491]modify-dn-1,o=test#[All Attributes Rights]
ACL: 6#entry#cn=[71973491]modify-dn-1,o=test#loginScript
ACL: 2#entry#[Public]#messageServer
ACL: 2#entry#[Root]#groupMembership
ACL: 6#entry#cn=[71973491]modify-dn-1,o=test#printJobConfiguration
ACL: 2#entry#[Root]#networkAddress

# 2016-07-10T23:34:41.888506
# modify dn
# previous dn: cn=[71973491]modify-dn-1,o=test
dn: cn=[71973491]modified-dn-1,o=test
objectClass: inetOrgPerson
objectClass: organizationalPerson
objectClass: Person
objectClass: ndsLoginProperties
objectClass: Top
sn: modify-dn-1
cn: [71973491]modified-dn-1
ACL: 2#subtree#cn=[71973491]modified-dn-1,o=test#[All Attributes Rights]
ACL: 6#entry#cn=[71973491]modified-dn-1,o=test#loginScript
ACL: 2#entry#[Public]#messageServer
ACL: 2#entry#[Root]#groupMembership
ACL: 6#entry#cn=[71973491]modified-dn-1,o=test#printJobConfiguration
ACL: 2#entry#[Root]#networkAddress

# 2016-07-10T23:34:41.929022
# delete
dn: cn=[71973491]modified-dn-1,o=test
objectClass: inetOrgPerson
objectClass: organizationalPerson
objectClass: Person
objectClass: ndsLoginProperties
objectClass: Top
sn: modify-dn-1
cn: [71973491]modified-dn-1
ACL: 2#subtree#cn=[71973491]modified-dn-1,o=test#[All Attributes Rights]
ACL: 6#entry#cn=[71973491]modified-dn-1,o=test#loginScript
ACL: 2#entry#[Public]#messageServer
ACL: 2#entry#[Root]#groupMembership
ACL: 6#entry#cn=[71973491]modified-dn-1,o=test#printJobConfiguration
ACL: 2#entry#[Root]#networkAddress
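Records like the ones above are separated by blank lines, with the timestamp and event type in leading comments, so a log written by the persistent search can be summarised with a short stdlib parser (a sketch that assumes the comment layout shown above):

```python
def summarize_ldif_change_log(text):
    """Return (timestamp, event_type, dn) for each record in an LDIF-CHANGE log."""
    events = []
    for record in text.strip().split('\n\n'):
        lines = record.splitlines()
        timestamp = lines[0].lstrip('# ')   # first comment: event time
        event_type = lines[1].lstrip('# ')  # second comment: event type
        dn = next(line[4:] for line in lines if line.startswith('dn: '))
        events.append((timestamp, event_type, dn))
    return events
```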

If you call the persistent_search() method with streaming=False you can get the modified entries with the p.next() method. Each call to p.next() returns one event, with the extended control already decoded (as dict values) when available.

If you call the persistent_search() method with callback=my_function (where my_function is a callable, including a lambda, accepting a dict as its parameter) your function will be called for each event received in the persistent search. The function is called in the same thread as the persistent search, so it should not block.
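Since the callback runs in the persistent search thread, a common non-blocking pattern is to have it simply enqueue each event dict onto a queue consumed by another thread (a sketch; on_event is the function you would pass as callback=on_event):

```python
import queue
import threading

events = queue.Queue()

def on_event(event):
    """Runs in the persistent search thread: just enqueue, never block."""
    events.put(event)

def consumer():
    """Process events in a separate thread, at our own pace."""
    while True:
        event = events.get()
        if event is None:  # sentinel value used to shut the consumer down
            break
        # ... handle the change here (write to a log, update caches, etc.)
        events.task_done()

worker = threading.Thread(target=consumer, daemon=True)
worker.start()
```

This keeps the persistent search thread responsive even when processing a single event is slow.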