init commit
commit
e9e5efdaee
|
|
@@ -0,0 +1,4 @@
|
|||
.env
|
||||
output.xlsx
|
||||
__pycache__
|
||||
main.log
|
||||
|
|
@@ -0,0 +1,171 @@
|
|||
# What?
|
||||
|
||||
Script to query IPSEC VPN tunnels on device types IP-VPNHUB / IP-P2PAGG.
|
||||
|
||||
- Finds tickets for operational VPN-type devices in the NMS MongoDB.
- Finds FQDN and IP.
- Gets the OS over SSH and determines the Scrapli platform type.
- Gets phase1/phase2 IPSEC tunnel details and correlates entries into complete tunnels in hub-spoke format.
- Writes tunnel records to the DB; if a record already exists its date stamp is updated, otherwise a new record is appended.
- Writes from the DB to an Excel spreadsheet and generates a stats summary.
|
||||
|
||||
Script shortcomings:
|
||||
|
||||
- Cycles only one set of device SSH credentials; does not yet use team-pass as a 'secrets' vault.
- Only Cisco devices are audited, no other vendor scrapes are written; spoke investigation may require Sarian/Digi/VA scraping (very manual output parsing - VA maybe OK as they are OpenWrt).
- Append/update-only operation for tunnel capture; timestamps indicate record relevance and tunnels are only captured at the time of the scrape (the more often the scrape is run, the more active tunnels will be recorded).
|
||||
|
||||
Learned/Investigated:
|
||||
|
||||
- Cisco commands to capture phase1/2 tunnel information and how to correlate multiple command outputs to a single tunnel record.
|
||||
- Netmiko/textfsm+n2c-templates - getting screen scrapes into ordered data from legacy devices without an API.
- Scrapli/genie (or Scrapli/textfsm+n2c-templates) - getting screen scrapes into ordered data from legacy devices without an API.
- Scraping logic - most libraries do not have built-in text processor templates for the commands in use.
|
||||
- MongoDB/Pandas/Xlsxwriter.
|
||||
|
||||
# Status
|
||||
|
||||
- POC
|
||||
- WIP
|
||||
- Code Reference
|
||||
|
||||
# Run Script
|
||||
|
||||
## Python venv
|
||||
|
||||
Example of creating the venv and adding dependencies.
|
||||
|
||||
```sh
|
||||
$HOME/WORK/python/3.10.6/bin/python3 -m venv --prompt 3.10.6_vpn_venv $HOME/WORK/python/vpn_venv
|
||||
source $HOME/WORK/python/vpn_venv/bin/activate
|
||||
python --version
|
||||
which python
|
||||
pip install --upgrade pip
|
||||
python -m pip install -r pip_requirements.txt
|
||||
deactivate
|
||||
|
||||
# enter/exit
|
||||
source $HOME/WORK/python/vpn_venv/bin/activate
|
||||
deactivate
|
||||
```
|
||||
|
||||
Pip modules installed:
|
||||
|
||||
- pip install python-dotenv
|
||||
- pip install scrapli
|
||||
- pip install scrapli[genie]
|
||||
- pip install pandas
|
||||
- pip install xlsxwriter
|
||||
- pip install fabric invoke==2.0.0
|
||||
- pip install pymongo==3.13.0
|
||||
|
||||
## Tacacs credential
|
||||
|
||||
Create credentials file `.env`, populate:
|
||||
|
||||
```sh
|
||||
SSH_USER=<tacacs user>
|
||||
SSH_PASSWORD=<tacacs password>
|
||||
```
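
The `vpn_audit` modules are assumed to read these values with python-dotenv (it is in the dependency list); a minimal sketch of that lookup:

```py
# minimal sketch, assuming python-dotenv is used to load the TACACS credentials
import os
from dotenv import load_dotenv

load_dotenv()                           # reads .env from the current working directory
ssh_user = os.getenv("SSH_USER")
ssh_password = os.getenv("SSH_PASSWORD")
```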
|
||||
|
||||
## Local ssh config
|
||||
|
||||
Netmiko/Scrapli offer ways to select various encryption settings; for ease, the following algorithm lists can be set in the home SSH config or system-wide.
|
||||
|
||||
```sh
|
||||
# append the following lines to the bottom of /etc/ssh/ssh_config (system-wide) or ~/.ssh/config (per user)
cat /etc/ssh/ssh_config
|
||||
|
||||
Ciphers 3des-cbc,aes128-cbc,aes192-cbc,aes256-cbc,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20-poly1305@openssh.com
|
||||
MACs hmac-sha1,hmac-sha1-96,hmac-sha2-256,hmac-sha2-512,hmac-md5,hmac-md5-96,umac-64@openssh.com,umac-128@openssh.com,hmac-sha1-etm@openssh.com,hmac-sha1-96-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-md5-etm@openssh.com,hmac-md5-96-etm@openssh.com,umac-64-etm@openssh.com,umac-128-etm@openssh.com
|
||||
HostKeyAlgorithms ssh-ed25519,ssh-ed25519-cert-v01@openssh.com,sk-ssh-ed25519@openssh.com,sk-ssh-ed25519-cert-v01@openssh.com,ssh-rsa,ssh-dss,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,sk-ecdsa-sha2-nistp256@openssh.com,ssh-rsa-cert-v01@openssh.com,ssh-dss-cert-v01@openssh.com,ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,sk-ecdsa-sha2-nistp256-cert-v01@openssh.com,rsa-sha2-512,rsa-sha2-256
|
||||
KexAlgorithms diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group14-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group-exchange-sha256,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,curve25519-sha256,curve25519-sha256@libssh.org,sntrup761x25519-sha512@openssh.com
|
||||
```
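
As an alternative to editing the system-wide file, Scrapli can be pointed at a per-user config file via its `ssh_config_file` connection argument. A minimal sketch only - the real connection dict is built in `vpn_audit/vpn_scrapli.py`; the host, platform and credentials below are illustrative:

```py
from scrapli import Scrapli

connection = {
    "host": "lon-dmvpn-hub05",            # example device from the test list below
    "auth_username": "<tacacs user>",
    "auth_password": "<tacacs password>",
    "auth_strict_key": False,
    "platform": "cisco_iosxe",            # scrapli platform type discovered by the inventory task
    "ssh_config_file": "~/.ssh/config",   # file holding the Ciphers/MACs/Kex settings above
}

with Scrapli(**connection) as conn:
    print(conn.send_command("show version").result)
```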
|
||||
|
||||
## Input list
|
||||
|
||||
Devices are listed in ipdesk.
|
||||
- http://ipdesk.corp.tnsi.com/newipdesk/dmvpnhubscan.php
|
||||
- http://ipdesk.corp.tnsi.com/newipdesk/report_p2p_aggs.php
|
||||
|
||||
The script pulls operational VPN devices from the NMS MongoDB with the following query.
|
||||
|
||||
```py
|
||||
query = { "raw.DeviceType": {"$in": ["IP-VPNHUB", "IP-VPNAGG", "IP-P2PAGG", "IP-VCSR-HUB"]},
|
||||
"raw.Environment_Usage": {"$nin": ["QA", "UAT"]},
|
||||
"raw.DeviceStatus": {"$nin": ["Order Cancelled", "De-Installed", "Installed", "New", "Hold", "Pend De-Install"]},
|
||||
"raw.DeviceName": {"$nin": [re.compile('.*_old$'), re.compile('.*_ol$')]}
|
||||
}
|
||||
```
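
For reference, a minimal sketch of running that query with pymongo, using the `jobs`/`DEVICE_WS` collection names and tunnelled connection from `main.py` (the projection fields are illustrative):

```py
import re
import pymongo

client = pymongo.MongoClient("mongodb://127.0.0.1:27017/")   # tunnelled NMS MongoDB, see main.py
device_collection = client["jobs"]["DEVICE_WS"]

query = {"raw.DeviceType": {"$in": ["IP-VPNHUB", "IP-VPNAGG", "IP-P2PAGG", "IP-VCSR-HUB"]}}
# plus the Environment_Usage / DeviceStatus / DeviceName exclusions shown above

for doc in device_collection.find(query, {"raw.DeviceName": 1, "_id": 0}):
    print(doc)
```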
|
||||
|
||||
The following devices were used to catch command parsing issues; there are a few 'easy/normal' parse devices in the list, plus devices with slightly non-standard output, devices that aren't in DNS, devices where tunnels have the same source/destination but different ports (RRI), devices with SSH issues (large scrapes over slow links), and devices with no tunnels or incomplete tunnels.
|
||||
|
||||
```sh
|
||||
asb-cofs-dmvpn-hub04
|
||||
mep-shared-rri-agg09
|
||||
syd-rri-agg02
|
||||
asb-test-p2p-agg01
|
||||
lon-dmvpn-hub05
|
||||
vxmy-dmvpn-hub01
|
||||
lon-bml-rri-agg03
|
||||
sdy-rri-agg04
|
||||
wcd-bml-rri-agg03
|
||||
```
|
||||
|
||||
## Operating modes
|
||||
|
||||
```sh
|
||||
(3.10.6_vpn_venv) [tseed@asblpnxpdev01 vpn_discovery]$ ./main.py -h
|
||||
usage: main.py [-h] {inventory,audit,report,list} ...
|
||||
|
||||
Collect VPN tunnel info
|
||||
|
||||
positional arguments:
|
||||
{inventory,audit,report,list}
|
||||
inventory Query NMS MongoDB to generate VPN device table
|
||||
audit Collect tunnel info for target devices
|
||||
report Generate VPN XLSX report
|
||||
list Return all target devices in VPN device table
|
||||
|
||||
options:
|
||||
-h, --help show this help message and exit
|
||||
```
|
||||
|
||||
- inventory - reads the NMS MongoDB, finds all VPN devices, then runs commands against each device to determine connectivity and the version used for the scrapli_platform type
- list - returns the names of all devices in the device table
- audit - runs the tunnel audit against devices; accepts -a for all devices or -d for a comma-delimited selection of device short names (as seen with the list argument)
- report - builds an XLSX report of captured tunnels for all devices; requires argument -e for email
|
||||
|
||||
The script captures tunnels seen during the scrape; this will include records with UP-ACTIVE / UP-IDLE / DOWN-NEGOTIATING session states.
Owing to phase1/2 timeouts you will not capture every tunnel; often a tunnel will not have all the fields required to qualify as a full record at the time of the scrape (p1 only / p2 only). Rerun the audit task against a device to capture more tunnels and to update tunnel records that have already been captured.
The script can take a while to complete all scrapes (up to 1 hour); some devices fail (usually on connection quality) and the script tries to handle this gracefully. For speed you may want to scrape only a new device or re-scrape an interesting device.
|
||||
|
||||
## Run
|
||||
|
||||
Start venv: `source $HOME/WORK/python/vpn_venv/bin/activate`
|
||||
Run script: `python main.py -h` or `./main.py -h`
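
Example invocations (device names are taken from the parsing test list above; the email address is a placeholder):

```sh
./main.py inventory
./main.py list
./main.py audit -d asb-test-p2p-agg01,lon-dmvpn-hub05
./main.py audit -a
./main.py report -e user@example.com
```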
|
||||
|
||||
# Outstanding
|
||||
|
||||
- implement a 'current_config' key for tunnels indicating changed configuration (useful for 3des remediation activity).
|
||||
- remedy/dns lookup of spokes - use `peer_id` field for lookup and possibly some `nhrp` commands.
|
||||
- log the thread pool queue count (e.g. 5/10 devices remaining)
|
||||
- email wrapper
|
||||
|
||||
# Content
|
||||
|
||||
```sh
|
||||
.
|
||||
├── main.py
|
||||
├── pip_requirements.txt
|
||||
├── README.md
|
||||
├── vpn_audit
|
||||
│ ├── config.py
|
||||
│ ├── __init__.py
|
||||
│ ├── vpn_cisco.py
|
||||
│ ├── vpn_inventory.py
|
||||
│ ├── vpn_mongo.py
|
||||
│ ├── vpn_scrapli.py
|
||||
│ └── vpn_spreadsheet.py
|
||||
└── vpn_discovery.code-workspace
|
||||
```
|
||||
|
|
@@ -0,0 +1,390 @@
|
|||
#!/usr/bin/env python
|
||||
|
||||
import os
|
||||
import sys
|
||||
import datetime
|
||||
import pymongo
|
||||
import urllib.parse
|
||||
import argparse
|
||||
from bson.json_util import dumps, loads
|
||||
import logging
import logging.handlers
|
||||
|
||||
# vpn_audit is not an installable package, update script instance python path to include modules
|
||||
fpath = os.path.join(os.path.dirname(__file__), 'vpn_audit')
|
||||
sys.path.append(fpath)
|
||||
from vpn_audit import *
|
||||
|
||||
def inventory_tasks(read_device_collection, read_ip_collection, write_collection):
|
||||
## retrieve all vpn devices from MongoDB
|
||||
device_dict = device_record(read_device_collection)
|
||||
|
||||
## debug, override devices
|
||||
# target_devices = ['air-oob-hub01', 'air-vpn09', 'zfr-evry-p2p-agg01']
|
||||
# device_dict = {k:v for k, v in device_dict.items() if k in target_devices}
|
||||
|
||||
## retrieve device management address info from MongoDB
|
||||
device_dict = mgmt_address(read_ip_collection, device_dict)
|
||||
|
||||
## DNS lookup for devices in MongoDB without management address info, populate 'unknown' fields on failed lookup
|
||||
# don't resolve tnsi.com, external dns record endpoints don't typically have accessible ssh
|
||||
suffix = ['red.tnsi.com', 'corp.tnsi.com', 'blue.tnsi.com', 'open.corp.tnsi.com', 'win2k.corp.tnsi.com', 'reston.tnsi.com', 'csg.tnsi.com', 'tsdlabs.tnsi.com', 'vlab.corp.tnsi.com']
|
||||
device_dict = dns_lookup(suffix, device_dict)
|
||||
|
||||
## debug, pretty print device inventory
|
||||
# print(json.dumps(device_dict, indent=4))
|
||||
|
||||
## write inventory to device table
|
||||
write_devices_collection(write_collection, device_dict)
|
||||
|
||||
## get device os for scrapli driver selection
|
||||
find_query = { "session_protocol" : { "$exists" : False } }
|
||||
object_ids = document_ids(write_collection, find_query)
|
||||
get_os(write_collection, object_ids)
|
||||
|
||||
## get cisco version
|
||||
query_modifier = { "session_protocol" : "ssh", "vendor": "cisco" }
|
||||
target_devices = device_names(write_collection, query_modifier, device_name_list = [])
|
||||
|
||||
def send_commands(collection, device_id, device_name):
|
||||
## start new buffered logger for thread
|
||||
logger = logging.getLogger('main')
|
||||
thread_logger = logging.getLogger(device_name)
|
||||
thread_logger.setLevel(logging.INFO)
|
||||
memhandler = logging.handlers.MemoryHandler(1024*10, target=logger, flushOnClose=True)
|
||||
thread_logger.addHandler(memhandler)
|
||||
|
||||
## send commands to device, parse, update collection
|
||||
commands = {
|
||||
"cisco_version": {"command": "show version"}
|
||||
}
|
||||
for command in commands.keys():
|
||||
func_ref = eval(command)
|
||||
commands[command].update({'func_ref': func_ref})
|
||||
status = device_commands(collection, device_id, commands)
|
||||
if 'error' in status:
|
||||
thread_logger.error(status)
|
||||
# collection['temp'][device_name].drop() # may want to drop temp table
|
||||
return
|
||||
|
||||
## stop buffered logger for thread, flush logs to 'main' logger
|
||||
memhandler.close()
|
||||
del thread_logger
|
||||
|
||||
threads = 16
|
||||
with ThreadPoolExecutor(max_workers=threads) as executor:
|
||||
for d in target_devices:
|
||||
device_id = target_devices[d]
|
||||
device_name = d
|
||||
executor.submit(send_commands, write_collection, device_id, device_name)
|
||||
|
||||
# def document_update_tasks(collection, device_name):
|
||||
def document_update_tasks(read_device_collection, read_ip_collection, write_collection, device_name):
|
||||
## init
|
||||
logger = logging.getLogger(device_name)
|
||||
|
||||
## dedupe temp device collection
|
||||
# (there can be near identical documents where the documents have a different 'c_id' value - maybe tunnel renegotiation)
|
||||
# logger.info(f"Deduplicate: {collection['temp'][device_name].full_name}")
|
||||
logger.info(f"Deduplicate: {write_collection['temp'][device_name].full_name}")
|
||||
# mode = 'list'
|
||||
# mode = 'show'
|
||||
mode = 'delete'
|
||||
ignore_schema_keys = ['_id', 'c_id']
|
||||
# tmp_collection = collection['temp'][device_name]
|
||||
tmp_collection = write_collection['temp'][device_name]
|
||||
# deduplicate_collection(collection['temp'][device_name], mode, ignore_schema_keys)
|
||||
deduplicate_collection(collection = tmp_collection, mode = mode, ignore_schema_keys = ignore_schema_keys, logger_name = device_name)
|
||||
|
||||
## lookup NMS mongo(remedy) spoke devices NMS tickets (get remedy 'DeviceRecNum' for each spoke device)
|
||||
logger.info(f'Lookup: find peer device ticket {write_collection[device_name].full_name}')
|
||||
tmp_collection = write_collection['temp'][device_name]
|
||||
spoke_lookup(read_device_collection = read_device_collection, read_ip_collection = read_ip_collection, write_collection = tmp_collection, logger_name = device_name)
|
||||
|
||||
## lookup NMS mongo(remedy) spoke devices NMS info (use remedy 'DeviceRecNum' to get any required device ticket attributes)
|
||||
logger.info(f'Lookup: find peer device attributes {write_collection[device_name].full_name}')
|
||||
tmp_collection = write_collection['temp'][device_name]
|
||||
device_ticket_lookup(read_device_collection = read_device_collection, write_collection = tmp_collection, logger_name = device_name)
|
||||
|
||||
## merge temp tables to main device table
|
||||
# (in short this mask of keys 'tunnel_qualifier_keys' determines if a valid/full tunnel is configured and has been active in a scrape)
|
||||
# (tunnel_qualifier_keys describes the keys required for a tunnel with cisco_vpn_phase1 + cisco_crypto_session, the keys from cisco_crypto_session are only present when cisco_crypto_map describes a matching cisco_vpn_phase2 configuration)
|
||||
# (the tunnel may report 'session_status': 'UP-ACTIVE' displaying full phase1/phase2 attributes, 'UP-IDLE' / 'DOWN-NEGOTIATING' will only capture phase1 attributes)
|
||||
#
|
||||
# (owing to the phase1/phase2 timeout values, stale configuration (and other conditions unknown), scrapes may not capture full tunnel configurations)
|
||||
# (we may see partial scraped documents with the following data: cisco_vpn_phase1 / cisco_crypto_map / cisco_vpn_phase1 + cisco_crypto_session / cisco_crypto_session / cisco_vpn_phase1 + cisco_crypto_map / cisco_crypto_session + cisco_vpn_phase2)
|
||||
# (subsequent scrapes may capture tunnel configuration in an active state (likely within phase1/2 timeout thresholds) and contain enough of the keys to match the 'tunnel_qualifier_keys' mask)
|
||||
# logger.info(f"Merge: {collection['temp'][device_name].full_name} to {collection[device_name].full_name}")
|
||||
# src_collection = collection['temp'][device_name]
|
||||
# dst_collection = collection[device_name]
|
||||
logger.info(f"Merge: {write_collection['temp'][device_name].full_name} to {write_collection[device_name].full_name}")
|
||||
src_collection = write_collection['temp'][device_name]
|
||||
dst_collection = write_collection[device_name]
|
||||
# ignore specific keys in src_document when searching for matching documents in the dst_collection
|
||||
# 'crypto_map_interface' is dynamic and only present when data is passing, don't want multiple documents in the dst_collection table for a tunnel with/without this attribute
# 'ipsec_flow' may be dynamic, changing when devices pass traffic
|
||||
#ignore_src_schema_keys = ['_id', 'c_id', 'crypto_map_interface', 'ipsec_flow']
|
||||
ignore_src_schema_keys = ['_id', 'c_id', 'crypto_map_interface', 'ipsec_flow', 'DeviceName', 'Manufacturer', 'Model', 'DeviceRecNum', 'nhrp_nexthop'] # merge records with latest/new scrape fields - 'DeviceName', 'Manufacturer', 'Model', 'DeviceRecNum', 'nhrp_nexthop' - likely not a problem to leave in place
|
||||
# exclude keys in the insert/update to the dst_collection
|
||||
exclude_dst_schema_keys = ['_id', 'c_id']
|
||||
# list of additional key value pairs to add to each document in the dst_collection (the datetime value is interpreted by MongoDB as type ISODate)
|
||||
additonal_dst_schema_keypairs = [{'last_modified': datetime.datetime.now(tz=datetime.timezone.utc)}]
|
||||
# list of schema keys to match/qualify document in src_collection for merge to dst_collection (optional parameter, when unused everything gets merged)
|
||||
tunnel_qualifier_keys = ["local_ip", "p1_ivrf", "peer_ip", "p1_dh_group", "p1_encr_algo", "p1_hash_algo", "p1_auth_type", "p1_status", "local_port", "crypto_session_interface", "peer_port", "p2_fvrf", "peer_vpn_id"]
|
||||
merge_to_collection(src_collection = src_collection, dst_collection = dst_collection, ignore_src_schema_keys = ignore_src_schema_keys, exclude_dst_schema_keys = exclude_dst_schema_keys, additonal_dst_schema_keypairs = additonal_dst_schema_keypairs, match_src_schema_keys = tunnel_qualifier_keys, logger_name = device_name)
|
||||
|
||||
# ## debug - validate merge results, all records in src_collection should be included or excluded from merge - check against merge_to_collection results
|
||||
# # mode = 'show'
|
||||
# mode = 'stat'
|
||||
# ignore_src_schema_keys = ['_id', 'c_id']
|
||||
# src_collection = collection['temp'][device_name]
|
||||
# dst_collection = collection[device_name]
|
||||
# tunnel_qualifier_keys = ["local_ip", "p1_ivrf", "peer_ip", "p1_dh_group", "p1_encr_algo", "p1_hash_algo", "p1_auth_type", "p1_status", "local_port", "crypto_session_interface", "peer_port", "p2_fvrf", "peer_vpn_id"]
|
||||
# match_src_schema_keys = tunnel_qualifier_keys
|
||||
# diff_collection(src_collection, dst_collection, mode, ignore_src_schema_keys, match_src_schema_keys)
|
||||
|
||||
## full dedupe device table
|
||||
# (ensure there are no 'duplicate' documents that only differ by '_id')
|
||||
# (this should not be encountered at this stage)
|
||||
# (if there are duplicates they must have the same datestamp indicating an error in the previous merge function)
|
||||
# logger.info(f"Deduplicate: {collection[device_name].full_name}")
|
||||
logger.info(f"Deduplicate: {write_collection[device_name].full_name}")
|
||||
# device_collection = collection[device_name]
|
||||
device_collection = write_collection[device_name]
|
||||
mode = 'delete'
|
||||
ignore_schema_keys = ['_id']
|
||||
deduplicate_collection(collection = device_collection, mode = mode, ignore_schema_keys = ignore_schema_keys, logger_name = device_name)
|
||||
#deduplicate_collection(collection[device_name], mode, ignore_schema_keys)
|
||||
|
||||
## capture 'UP-IDLE' / 'DOWN-NEGOTIATING' tunnels, delete if UP-ACTIVE tunnel documents exist
|
||||
# (this may occur if an idle tunnel (cisco_vpn_phase1 + cisco_crypto_session) is initially scraped, then the full tunnel establishes +(cisco_vpn_phase2 + cisco_crypto_map) and is captured on a subsequent scrape thus creating two documents)
|
||||
# (if the tunnel is idle on the 3rd+ scrape it will be merged into the document with the full tunnel attributes)
|
||||
#
|
||||
# (the 'idle_connection' mask contains nearly the same keys as the 'tunnel_qualifier_keys' mask, as listed below. the 'session_status' field is ignored in the query to ensure both idle + active documents are matched)
|
||||
# (["local_ip", "p1_ivrf", "peer_ip", "p1_dh_group", "p1_encr_algo", "p1_hash_algo", "p1_auth_type", "p1_status", "local_port", "crypto_session_interface", "session_status", "peer_port", "p2_fvrf", "peer_vpn_id"])
|
||||
# print(f'\ndeduplicate_collection collection[{device_name}] - remove matched idle_connection')
|
||||
# logger.info(f"Deduplicate: {collection[device_name].full_name} - remove 'UP-IDLE' records that are subset to 'UP-ACTIVE' records")
|
||||
logger.info(f"Deduplicate: {write_collection[device_name].full_name} - remove 'UP-IDLE' records that are subset to 'UP-ACTIVE' records")
|
||||
# device_collection = collection[device_name]
|
||||
device_collection = write_collection[device_name]
|
||||
mode = 'delete'
|
||||
ignore_schema_keys = ['_id', 'last_modified', 'session_status']
|
||||
idle_connection = ["local_ip", "p1_ivrf", "peer_ip", "p1_dh_group", "p1_encr_algo", "p1_hash_algo", "p1_auth_type", "p1_status", "local_port", "crypto_session_interface", "peer_port", "p2_fvrf", "peer_vpn_id"]
|
||||
#required_schema_keys = idle_connection
|
||||
#deduplicate_collection(collection[device_name], mode, ignore_schema_keys, required_schema_keys)
|
||||
deduplicate_collection(collection = device_collection, mode = mode, ignore_schema_keys = ignore_schema_keys, required_schema_keys = idle_connection, logger_name = device_name)
|
||||
|
||||
## drop temp sorting table (disable for debug)
|
||||
write_collection['temp'][device_name].drop()
|
||||
|
||||
## want to match >1 documents with the following keys as aggregate id_keys {"$group":{"_id": id_keys,"count": {"$sum": 1}}}
|
||||
## ["local_ip", "local_port", "peer_ip", "peer_port", "peer_vpn_id"]
|
||||
## this should match documents that have had changes to their config such as 'ordered_transform_set' / 'p2_encr_algo' and allow for a document date comparison on the key 'last_modified'
## add an additional key to each document 'current_configuration' with a bool value for inclusion/exclusion in the spreadsheet stats and as a rudimentary history to indicate resolved compliance status
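## sketch of the planned aggregation (not yet implemented, key list taken from the comment above):
## id_keys = {k: f"${k}" for k in ["local_ip", "local_port", "peer_ip", "peer_port", "peer_vpn_id"]}
## pipeline = [{"$group": {"_id": id_keys, "count": {"$sum": 1}}}, {"$match": {"count": {"$gt": 1}}}]
## changed_config_candidates = list(write_collection[device_name].aggregate(pipeline))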
|
||||
|
||||
# def audit_tasks(collection, target_devices):
|
||||
def audit_tasks(read_device_collection, read_ip_collection, write_collection, target_devices):
|
||||
|
||||
# def send_commands(collection, device_id, device_name):
|
||||
def send_commands(read_device_collection, read_ip_collection, write_collection, device_id, device_name):
|
||||
## start new buffered logger for thread
|
||||
logger = logging.getLogger('main')
|
||||
thread_logger = logging.getLogger(device_name)
|
||||
thread_logger.setLevel(logging.INFO)
|
||||
memhandler = logging.handlers.MemoryHandler(1024*10, target=logger, flushOnClose=True)
|
||||
thread_logger.addHandler(memhandler)
|
||||
|
||||
try:
|
||||
## send commands to device, parse, update collection - isakmp / ipsec audit (the order of commands is tied to the db record logic)
|
||||
commands = {
|
||||
"cisco_vpn_phase1": {"command": "show crypto isakmp sa detail"},
|
||||
"cisco_crypto_session": {"command": "show crypto session detail"},
|
||||
"cisco_vpn_phase2": {"command": "show crypto ipsec sa"},
|
||||
"cisco_crypto_map": {"command": "show crypto map"},
|
||||
"cisco_isakmp_policy": {"command": "show crypto isakmp policy"},
|
||||
"cisco_nhrp_lookup": {"command": "compound"}
|
||||
}
|
||||
for command in commands.keys():
|
||||
func_ref = eval(command)
|
||||
commands[command].update({'func_ref': func_ref})
|
||||
# status = device_commands(collection, device_id, commands)
|
||||
status = device_commands(write_collection, device_id, commands)
|
||||
if 'error' in status:
|
||||
thread_logger.error(status)
|
||||
# collection['temp'][device_name].drop() # may want to drop temp table on error
|
||||
return
|
||||
|
||||
## send commands to device, parse, update collection - 3des audit (the order of commands is tied to the db record logic)
|
||||
commands = {
|
||||
"cisco_transform_set": {"command": "compound"},
|
||||
"triple_des_check": {"command": "compound"}
|
||||
}
|
||||
for command in commands.keys():
|
||||
func_ref = eval(command)
|
||||
commands[command].update({'func_ref': func_ref})
|
||||
# status = device_commands(collection, device_id, commands)
|
||||
status = device_commands(write_collection, device_id, commands)
|
||||
if 'error' in status:
|
||||
thread_logger.error(status)
|
||||
# collection['temp'][device_name].drop() # may want to drop temp table on error
|
||||
return
|
||||
|
||||
## promote qualifying tunnels in 'temp' device collection to device collection
|
||||
# document_update_tasks(collection, device_name)
|
||||
document_update_tasks(read_device_collection, read_ip_collection, write_collection, device_name)
|
||||
|
||||
except Exception as e:
|
||||
# buffered logging is used to collate per device/thread log messages, so a crash is not visible until the thread log is flushed
|
||||
memhandler.flush()
|
||||
logger.error(f"Exception occurred: {type(e).__name__}", exc_info=True)
|
||||
memhandler.close()
|
||||
del thread_logger
|
||||
|
||||
## stop buffered logger for thread, flush logs to 'main' logger
|
||||
memhandler.close()
|
||||
del thread_logger
|
||||
|
||||
## main loop - send commands to threadpool
|
||||
# device_ids = [i for i in target_devices.values()]
|
||||
# device_names = [n for n in target_devices.keys()]
|
||||
with ThreadPoolExecutor(max_workers=config.device_threads) as executor:
|
||||
for d in target_devices:
|
||||
device_id = target_devices[d]
|
||||
device_name = d
|
||||
# executor.submit(send_commands, collection, device_id, device_name)
|
||||
executor.submit(send_commands, read_device_collection, read_ip_collection, write_collection, device_id, device_name)
|
||||
|
||||
def parser_action(args):
|
||||
## main script logic
|
||||
logger = logging.getLogger('main')
|
||||
match args.mode:
|
||||
case 'inventory':
|
||||
logger.info('#### Run - argument: inventory ####')
|
||||
inventory_tasks(args.arg_lookup_device_collection, args.arg_lookup_ip_collection, args.arg_write_device_collection)
|
||||
case 'list':
|
||||
logger.info('#### Run - argument: list ####')
|
||||
query_modifier = { "session_protocol" : "ssh", "vendor": "cisco" }
|
||||
target_devices = device_names(args.arg_write_device_collection, query_modifier, device_name_list = [])
|
||||
target_devices_list = [d for d in target_devices.keys()]
|
||||
if len(target_devices_list) >0:
|
||||
# print(','.join(target_devices_list))
|
||||
logger.info(f"{','.join(target_devices_list)}")
|
||||
else:
|
||||
# print('device table empty, rerun inventory task')
|
||||
logger.error('device table empty, rerun inventory task')
|
||||
case 'audit':
|
||||
if not args.all_devices and args.devices is None:
|
||||
print('usage: main.py audit [-h] [-a | -d DEVICES]')
|
||||
print('main.py audit: error: argument -d/--devices or argument -a/--all_devices required')
|
||||
quit()
|
||||
if args.all_devices:
|
||||
query_modifier = { "session_protocol" : "ssh", "vendor": "cisco" }
|
||||
target_devices = device_names(args.arg_write_device_collection, query_modifier, device_name_list = [])
|
||||
# print(dumps(target_devices, indent=4))
|
||||
logger.info('#### Run - argument: audit -a ####')
|
||||
# audit_tasks(args.arg_write_device_collection, target_devices)
|
||||
audit_tasks(args.arg_lookup_device_collection, args.arg_lookup_ip_collection, args.arg_write_device_collection, target_devices)
|
||||
elif len(args.devices) >0:
|
||||
device_name_list = [d for d in args.devices.split(',')]
|
||||
# print(f'target devices\n{device_name_list}')
|
||||
query_modifier = { "session_protocol" : "ssh", "vendor": "cisco" }
|
||||
target_devices = device_names(args.arg_write_device_collection, query_modifier, device_name_list)
|
||||
# print(dumps(target_devices, indent=4))
|
||||
invalid_devices = [d for d in device_name_list if d not in target_devices.keys()]
|
||||
if len(invalid_devices) >0:
|
||||
print(f"device(s) error for {','.join(invalid_devices)}\n")
|
||||
for d in invalid_devices:
|
||||
if args.arg_write_device_collection.count_documents({'DeviceName': d}) == 0:
|
||||
print(f'{d} not in device table, rerun inventory task if you are sure the device exists in remedy')
|
||||
logger.error(f'{d} not in device table, rerun inventory task if you are sure the device exists in remedy')
|
||||
else:
|
||||
result = dumps(args.arg_write_device_collection.find({'DeviceName': d}, {'_id': 0, 'DeviceName': 1, "session_protocol" : 1, "vendor": 1}))
|
||||
print(f'{d} does not meet audit requirements {result}')
|
||||
logger.error(f'{d} does not meet audit requirements {result}')
|
||||
quit()
|
||||
logger.info(f'#### Run - argument: audit -d device1,device2,deviceN ####')
|
||||
logger.info('Target devices:')
|
||||
logger.info(f"{','.join([k for k in target_devices.keys()])}")
|
||||
# audit_tasks(args.arg_write_device_collection, target_devices)
|
||||
audit_tasks(args.arg_lookup_device_collection, args.arg_lookup_ip_collection, args.arg_write_device_collection, target_devices)
|
||||
case 'report':
|
||||
spreadsheet = './output.xlsx'
|
||||
devices_dict = device_names(args.arg_write_device_collection) # pass query modifier to filter devices in spreadsheet
|
||||
#print(dumps(devices_dict, indent=4))
|
||||
logger.info(f'#### Run - argument: report ####')
|
||||
build_spreadsheet(args.arg_write_device_collection, devices_dict, spreadsheet)
|
||||
|
||||
def main():
|
||||
#### MongoDB sources
|
||||
# need some sort of class inheritance setup to store the client connection object and instantiate collections owned by the class, then move into vpn_mongo and keep the config as a dict (that can be kept in json/toml)
|
||||
|
||||
## TNS MongoDB client connection
|
||||
# no firewall rules asblpnxpdev01 -> rstlcnscmgd01:27017 (TNS MongoDB), use quick tunnel
|
||||
# screen -S nmsmongo
|
||||
# ssh -o "ServerAliveInterval 60" -L 127.0.0.1:27017:rstlcnscmgd01.open.corp.tnsi.com:27017 tseed@airlcinfjmp01.open.corp.tnsi.com
|
||||
lookup_mongohost = '127.0.0.1'
|
||||
lookup_mongoport = 27017
|
||||
lookup_client = pymongo.MongoClient(f'mongodb://{lookup_mongohost}:{lookup_mongoport}/')
|
||||
lookup_device_db = lookup_client['jobs']
|
||||
lookup_device_collection = lookup_device_db['DEVICE_WS']
|
||||
lookup_ip_collection = lookup_device_db['NT_IPAddress_WS']
|
||||
|
||||
## DEV MongoDB client connection
|
||||
# no firewall rules asblpnxpdev01 -> 172.17.213.136:27017 (DEV MongoDB), use quick tunnel
|
||||
# screen -S testmongo
|
||||
# ssh -o "ServerAliveInterval 60" -J airlcinfjmp01.open.corp.tnsi.com -L 127.0.0.1:27018:127.0.0.1:27017 tseed@172.17.213.136
|
||||
write_mongohost = '127.0.0.1'
|
||||
write_mongoport = 27018
|
||||
write_username = urllib.parse.quote_plus('script')
|
||||
write_password = urllib.parse.quote_plus('install1')
|
||||
write_client = pymongo.MongoClient(f'mongodb://{write_username}:{write_password}@{write_mongohost}:{write_mongoport}/')
|
||||
write_vpn_db = write_client['vpn']
|
||||
write_device_collection = write_vpn_db['devices']
|
||||
|
||||
#### Logger
|
||||
logger = logging.getLogger('main')
|
||||
logger.setLevel(logging.INFO)
|
||||
console = logging.StreamHandler()
|
||||
file = logging.FileHandler("main.log")
|
||||
logger.addHandler(console)
|
||||
logger.addHandler(file)
|
||||
formatter = logging.Formatter(
|
||||
fmt="%(asctime)s, %(levelname)-8s | %(filename)-15s:%(lineno)-5s | %(threadName)-1s: %(message)s",
|
||||
datefmt="%Y-%m-%d %H:%M:%S")
|
||||
console.setFormatter(formatter)
|
||||
file.setFormatter(formatter)
|
||||
|
||||
#### Threading, concurrent device audits
|
||||
# device_threads
|
||||
# concurrent devices queries (collect), mostly an IO task with lots of delay
|
||||
# database tasks are suboptimal, retrieving all records for dedupe/merge; consider reducing threads if database performance drops
|
||||
# scrape_threads
|
||||
# screen scrapes being processed concurrently, mostly a CPU task with lots of loop/split/regex of screen scrapes
|
||||
# total threads (nested) - 16 devices * 2 nested scrapes = 32 threads
|
||||
# set to 1 / 1 for debug
|
||||
config.scrape_threads = os.cpu_count()
|
||||
config.device_threads = 32
|
||||
|
||||
#### Argument parser, run main script logic in parser_action()
|
||||
parser = argparse.ArgumentParser(description='Collect VPN tunnel info')
|
||||
audit = argparse.ArgumentParser(add_help=False)
|
||||
audit_args = audit.add_mutually_exclusive_group()
|
||||
report = argparse.ArgumentParser(add_help=False)
|
||||
audit_args.add_argument('-a', '--all_devices', action='store_true', help='All target devices in the VPN device table, WARNING this may take a full day to complete')
|
||||
audit_args.add_argument('-d', '--devices', action='store', help='Comma separated list of target devices')
|
||||
report.add_argument('-e', '--email', action='store', help='Email addresses to send report', required=True)
|
||||
sp = parser.add_subparsers(required=True)
|
||||
sp_inventory = sp.add_parser('inventory', help='Query NMS MongoDB to generate VPN device table')
|
||||
sp_audit = sp.add_parser('audit', parents=[audit], description='Collect tunnel info for target devices, requires argument [-a | -d]', help='Collect tunnel info for target devices')
|
||||
sp_report = sp.add_parser('report', parents=[report], description='Generate VPN XLSX report, requires argument [-e]', help='Generate VPN XLSX report')
|
||||
sp_list = sp.add_parser('list', description='Return all target devices in VPN device table', help='Return all target devices in VPN device table')
|
||||
sp_inventory.set_defaults(func=parser_action, mode='inventory', arg_lookup_device_collection=lookup_device_collection, arg_lookup_ip_collection = lookup_ip_collection, arg_write_device_collection = write_device_collection)
|
||||
# sp_audit.set_defaults(func=parser_action, mode='audit', arg_write_device_collection = write_device_collection)
|
||||
sp_audit.set_defaults(func=parser_action, mode='audit', arg_lookup_device_collection=lookup_device_collection, arg_lookup_ip_collection = lookup_ip_collection, arg_write_device_collection = write_device_collection)
|
||||
sp_report.set_defaults(func=parser_action, mode='report', arg_write_device_collection = write_device_collection)
|
||||
sp_list.set_defaults(func=parser_action, mode='list', arg_write_device_collection = write_device_collection)
|
||||
args = parser.parse_args()
|
||||
args.func(args)
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
|
|
@@ -0,0 +1,92 @@
|
|||
aiofiles==23.1.0
|
||||
aiohttp==3.8.4
|
||||
aiohttp-swagger==1.0.16
|
||||
aiosignal==1.3.1
|
||||
async-lru==2.0.2
|
||||
async-timeout==4.0.2
|
||||
attrs==23.1.0
|
||||
bcrypt==4.0.1
|
||||
certifi==2022.12.7
|
||||
cffi==1.15.1
|
||||
chardet==4.0.0
|
||||
charset-normalizer==3.1.0
|
||||
cryptography==40.0.2
|
||||
dill==0.3.6
|
||||
distro==1.8.0
|
||||
dnspython==2.3.0
|
||||
dotenv==0.0.5
|
||||
fabric==3.0.1
|
||||
frozenlist==1.3.3
|
||||
genie==23.4
|
||||
genie.libs.clean==23.4
|
||||
genie.libs.conf==23.4
|
||||
genie.libs.filetransferutils==23.4
|
||||
genie.libs.health==23.4
|
||||
genie.libs.ops==23.4
|
||||
genie.libs.parser==23.4
|
||||
genie.libs.sdk==23.4
|
||||
gitdb==4.0.10
|
||||
GitPython==3.1.31
|
||||
grpcio==1.54.0
|
||||
idna==3.4
|
||||
invoke==2.0.0
|
||||
Jinja2==3.1.2
|
||||
jsonpickle==3.0.1
|
||||
junit-xml==1.9
|
||||
lxml==4.9.2
|
||||
MarkupSafe==2.1.2
|
||||
multidict==6.0.4
|
||||
ncclient==0.6.13
|
||||
netaddr==0.8.0
|
||||
numpy==1.24.3
|
||||
packaging==23.1
|
||||
pandas==2.0.1
|
||||
paramiko==3.1.0
|
||||
pathspec==0.11.1
|
||||
prettytable==3.7.0
|
||||
protobuf==3.20.3
|
||||
psutil==5.9.5
|
||||
pyats==23.4
|
||||
pyats.aereport==23.4
|
||||
pyats.aetest==23.4
|
||||
pyats.async==23.4
|
||||
pyats.connections==23.4
|
||||
pyats.datastructures==23.4
|
||||
pyats.easypy==23.4
|
||||
pyats.kleenex==23.4
|
||||
pyats.log==23.4
|
||||
pyats.reporter==23.4
|
||||
pyats.results==23.4
|
||||
pyats.tcl==23.4
|
||||
pyats.topology==23.4
|
||||
pyats.utils==23.4
|
||||
pycparser==2.21
|
||||
pyftpdlib==1.5.7
|
||||
pymongo==3.13.0
|
||||
PyNaCl==1.5.0
|
||||
python-dateutil==2.8.2
|
||||
python-dotenv==1.0.0
|
||||
python-engineio==3.14.2
|
||||
python-socketio==4.6.1
|
||||
pytz==2023.3
|
||||
PyYAML==6.0
|
||||
requests==2.29.0
|
||||
ruamel.yaml==0.17.22
|
||||
ruamel.yaml.clib==0.2.7
|
||||
scrapli==2023.1.30
|
||||
six==1.16.0
|
||||
smmap==5.0.0
|
||||
tftpy==0.8.0
|
||||
tinydb==4.7.1
|
||||
tqdm==4.65.0
|
||||
typing_extensions==4.5.0
|
||||
tzdata==2023.3
|
||||
unicon==23.4
|
||||
unicon.plugins==23.4
|
||||
urllib3==1.26.15
|
||||
wcwidth==0.2.6
|
||||
XlsxWriter==3.1.0
|
||||
xmltodict==0.13.0
|
||||
yamllint==1.31.0
|
||||
yang.connector==23.4
|
||||
yarl==1.9.2
|
||||
|
|
@@ -0,0 +1,6 @@
|
|||
from vpn_mongo import *
|
||||
from vpn_inventory import *
|
||||
from vpn_scrapli import *
|
||||
from vpn_cisco import *
|
||||
from vpn_spreadsheet import *
|
||||
from config import *
|
||||
|
|
@@ -0,0 +1,2 @@
|
|||
device_threads = 1
|
||||
scrape_threads = 1
|
||||
|
|
@@ -0,0 +1,894 @@
|
|||
import os
|
||||
from functools import partial
|
||||
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
|
||||
from scrapli import Scrapli
|
||||
# from scrapli.driver import GenericDriver
|
||||
# from scrapli.driver.core import IOSXEDriver
|
||||
from pymongo import InsertOne, DeleteMany, ReplaceOne, UpdateOne, UpdateMany
|
||||
import re
|
||||
import json
|
||||
|
||||
def cisco_version(collection, command_output, device_record, connection):
|
||||
print('run cisco_version')
|
||||
def device_audit():
|
||||
return {'os_flavour': operating_system, 'image': image, 'os_version': version, 'chassis': chassis, 'serial': serial}
|
||||
id = device_record['_id']
|
||||
output = command_output['output']
|
||||
if not output == 'error':
|
||||
if len(output.genie_parse_output()) >0:
|
||||
print(f'genie parser method')
|
||||
parsed = output.genie_parse_output()
|
||||
operating_system = parsed['version']['os']
|
||||
image = parsed['version']['system_image'].split(':')[1].replace('/', '')
|
||||
version = parsed['version']['version']
|
||||
chassis = parsed['version']['chassis']
|
||||
serial = parsed['version']['chassis_sn']
|
||||
record = device_audit()
|
||||
# print(record)
|
||||
filter = {'_id': id}
|
||||
result = collection.update_one(filter, {'$set': record}, upsert=True)
|
||||
return result
|
||||
else:
|
||||
print(f'manual parse method not written for this command')
|
||||
|
||||
#### scrape 1)
|
||||
# 'show crypto isakmp sa detail'
|
||||
# {'c_id': '20681', 'local_ip': '10.227.184.157', 'p1_ivrf': 'none', 'peer_ip': '10.229.4.74', 'p1_dh_group': '14', 'p1_encr_algo': 'aes', 'p1_hash_algo': 'sha', 'p1_auth_type': 'psk', 'p1_status': 'ACTIVE'}
|
||||
def cisco_vpn_phase1(collection, command_output, device_record, connection):
|
||||
print('\ncisco_vpn_phase1')
|
||||
|
||||
def process_p1(p1_dict, idx):
|
||||
global peer_count
|
||||
global p1_records
|
||||
c_id = str(p1_dict['isakmp_stats']['IPv4'][idx]['c_id'])
|
||||
local_ip = p1_dict['isakmp_stats']['IPv4'][idx]['local_ip']
|
||||
peer_ip = p1_dict['isakmp_stats']['IPv4'][idx]['remote_ip']
|
||||
encr_algo = p1_dict['isakmp_stats']['IPv4'][idx]['encr_algo']
|
||||
hash_algo = p1_dict['isakmp_stats']['IPv4'][idx]['hash_algo']
|
||||
auth_type = p1_dict['isakmp_stats']['IPv4'][idx]['auth_type']
|
||||
dh_group = str(p1_dict['isakmp_stats']['IPv4'][idx]['dh_group'])
|
||||
status = p1_dict['isakmp_stats']['IPv4'][idx]['status']
|
||||
ivrf = p1_dict['isakmp_stats']['IPv4'][idx]['ivrf'] if 'ivrf' in p1_dict['isakmp_stats']['IPv4'][idx] else 'none'
|
||||
p1_record = {'c_id': c_id, 'local_ip': local_ip, 'p1_ivrf': ivrf, 'peer_ip': peer_ip, 'p1_dh_group': dh_group, 'p1_encr_algo': encr_algo, 'p1_hash_algo': hash_algo, 'p1_auth_type': auth_type, 'p1_status': status }
|
||||
# print(f'genie p1 {p1_records}')
|
||||
p1_records.append(p1_record)
|
||||
peer_count += 1
|
||||
print('phase1 processed %d\r'%peer_count, end="")
|
||||
|
||||
device_name = device_record['DeviceName']
|
||||
output = command_output['output']
|
||||
device_table = collection['temp'][device_name] # create/use-existing temp subcollection
|
||||
if not output == 'error':
|
||||
p1_dict = output.genie_parse_output()
|
||||
# print(json.dumps(p1_dict, indent=4))
|
||||
if len(p1_dict) >0:
|
||||
global peer_count
|
||||
peer_count = 0
|
||||
global p1_records
|
||||
p1_records = []
|
||||
p1_scrape_idx = [t for t in p1_dict['isakmp_stats']['IPv4']]
|
||||
n_cores = os.cpu_count()
|
||||
partial_function = partial(process_p1, p1_dict)
|
||||
# with ThreadPoolExecutor(max_workers=1) as executor: # debug
|
||||
with ThreadPoolExecutor(max_workers=n_cores) as executor:
|
||||
executor.map(partial_function, p1_scrape_idx)
|
||||
print(f'phase1 processed {peer_count}')
|
||||
# write to db
|
||||
if len(p1_records) >0:
|
||||
requests = []
|
||||
for i in p1_records:
|
||||
record = i
|
||||
# requests.append(InsertOne(record))
|
||||
filter = {'c_id': record['c_id']}
|
||||
requests.append(UpdateMany(filter, {'$set': record}, upsert=True))
|
||||
result = device_table.bulk_write(requests)
|
||||
# print(result.bulk_api_result)
|
||||
return result
|
||||
else:
|
||||
print('phase1 no tunnel records')
|
||||
else:
|
||||
print('error returning command, check network connectivity')
|
||||
|
||||
#### scrape 2)
|
||||
# 'show crypto session remote {ip} detail'
|
||||
# {'local_ip': '10.225.112.42', 'local_port': '500', 'c_id': '11907', 'ipsec_flow': ['permit 47 host 10.225.112.42 host 10.227.36.18'], 'crypto_session_interface': 'Tunnel6', 'session_status': 'UP-ACTIVE', 'peer_ip': '10.227.36.18', 'peer_port': '500', 'p2_fvrf': 'none', 'peer_vpn_id': '10.227.36.18'}
|
||||
# correlate to scrape 1) with key 'c_id'
|
||||
def cisco_crypto_session(collection, command_output, device_record, connection):
|
||||
print('\ncisco_crypto_session')
|
||||
|
||||
def process_session(session):
|
||||
# debug with single thread
|
||||
# print('\n##########')
|
||||
# print(session)
|
||||
if not 'Invalid input detected at' in session: # occurs with no match on peer ip
|
||||
# wipe all lines before first match of 'Interface: ', this match is used to delimit lines of text into subsequent interface scrapes
|
||||
scrape = ""
|
||||
tag_found = False
|
||||
for line in session.split('\n'):
|
||||
if not tag_found:
|
||||
if 'Interface: ' in line:
|
||||
scrape += f'{line}\n'
|
||||
tag_found = True
|
||||
else:
|
||||
scrape += f'{line}\n'
|
||||
|
||||
# split scrapes into multiple interfaces, each interface entry may have many sessions
|
||||
interfaces = []
|
||||
try:
|
||||
if len(scrape) >0:
|
||||
for line in scrape.split('\n'): # this can fail but only on a huge scrape and hard to see, mep-shared-rri-agg09
|
||||
if 'Interface: ' in line:
|
||||
interfaces.append(f'{line}\n')
|
||||
else:
|
||||
interfaces[-1] += f'{line}\n'
|
||||
except Exception as e:
|
||||
print(f'failed to process scrape: {e}')
|
||||
#print(scrape)
|
||||
pass
|
||||
# print(f'retrieved crypto session interface entries {len(interfaces)}')
|
||||
|
||||
# loop interfaces, loop session attributes
|
||||
global task_count
|
||||
global session_records
|
||||
for i in interfaces:
|
||||
#print(i)
|
||||
peer_record_dict = {}
|
||||
session_record_dict = {}
|
||||
#global task_count
|
||||
all_sess = ""
|
||||
all_sess_found = False
|
||||
for line in i.split('\n'):
|
||||
if 'Interface: ' in line:
|
||||
crypto_session_interface = line.split(' ')[1]
|
||||
peer_record_dict.update({'crypto_session_interface': crypto_session_interface})
|
||||
if 'Profile: ' in line:
|
||||
p1_profile = line.split(' ')[1]
|
||||
peer_record_dict.update({'p1_profile': p1_profile})
|
||||
if 'Session status: ' in line:
|
||||
session_status = line.split('Session status: ')[1]
|
||||
peer_record_dict.update({'session_status': session_status})
|
||||
if 'Peer: ' in line:
|
||||
peer_ip = line.split(' ')[1]
|
||||
peer_port = line.split(' ')[3]
|
||||
p2_fvrf = line.split(' ')[5].replace('(', '').replace(')', '')
|
||||
# p1_vrf = line.split(' ')[7]
|
||||
peer_record_dict.update({'peer_ip': peer_ip, 'peer_port': peer_port, 'p2_fvrf': p2_fvrf})
|
||||
if 'Phase1_id: ' in line:
|
||||
peer_vpn_id = line.lstrip().split(' ')[1].replace('(', '').replace(')', '')
|
||||
peer_record_dict.update({'peer_vpn_id': peer_vpn_id})
|
||||
# split all lines from 'IKEv1 SA: ' to end
|
||||
if not all_sess_found:
|
||||
if any(ike in line for ike in ['IKEv1 SA: ', 'IKE SA: ']):
|
||||
all_sess += f'{line}\n'
|
||||
all_sess_found = True
|
||||
elif 'Session ID: ' in line:
|
||||
pass
|
||||
else:
|
||||
all_sess += f'{line}\n'
|
||||
|
||||
# breakout each session, this can be P1 only with 'IKEv1 SA: ' or P1 + P2 with 'IKEv1 SA: ' and 'IPSEC FLOW: '
|
||||
all_sess_pairs = []
|
||||
for line in all_sess.split('\n'):
|
||||
if any(ike in line for ike in ['IKEv1 SA: ', 'IKE SA: ']):
|
||||
all_sess_pairs.append(f'{line}\n')
|
||||
else:
|
||||
all_sess_pairs[-1] += f'{line}\n'
|
||||
for asp in all_sess_pairs:
|
||||
#print(asp)
|
||||
ipsec_flow =[]
|
||||
for line in asp.split('\n'):
|
||||
# if 'IKEv1 SA: ' in line: # fails for older cisco
|
||||
if any(ike in line for ike in ['IKEv1 SA: ', 'IKE SA: ']):
|
||||
local_ip = line.lstrip().split(' ')[3].split('/')[0]
|
||||
local_port = line.lstrip().split(' ')[3].split('/')[1]
|
||||
session_record_dict.update({'local_ip': local_ip, 'local_port': local_port})
|
||||
if 'connid:' in line:
|
||||
c_id = line.lstrip().split(' ')[1].split(':')[1]
|
||||
session_record_dict.update({'c_id': c_id})
|
||||
if 'IPSEC FLOW: ' in line:
|
||||
acl = line.lstrip().split('FLOW: ')[1]
|
||||
ipsec_flow.append(acl)
|
||||
if len(ipsec_flow) >0:
|
||||
session_record_dict.update({'ipsec_flow': ipsec_flow})
|
||||
|
||||
# merge peer record component with each session record
|
||||
session_record_dict.update(peer_record_dict)
|
||||
# print(session_record_dict)
|
||||
|
||||
# check for required fields for complete/valid session record
|
||||
if all(m in session_record_dict for m in ['local_ip', 'peer_ip', 'c_id']):
|
||||
task_count += 1
|
||||
print('session interface entries processed %d\r'%task_count, end="")
|
||||
session_records.append(session_record_dict)
|
||||
|
||||
# init vars, start
|
||||
device_name = device_record['DeviceName']
|
||||
device_table = collection['temp'][device_name] # create/use-existing temp subcollection
|
||||
peer_ips = []
|
||||
sessions = []
|
||||
session_count = 0
|
||||
global session_records
|
||||
session_records = []
|
||||
global task_count
|
||||
task_count = 0
|
||||
|
||||
# get scrapes
|
||||
with Scrapli(**connection) as conn:
|
||||
peers = conn.send_command('show crypto session brief')
|
||||
for line in peers.result.split('\n'):
|
||||
if len(line) >0 and not any(exclude in line for exclude in ['ivrf = ', 'Peer ', 'Status: ', 'No IKE']):
|
||||
# print(line)
|
||||
format_line = ' '.join(line.split())
|
||||
ip = format_line.split(' ')[0]
|
||||
# print(ip)
|
||||
peer_ips.append(format_line.split(' ')[0])
|
||||
if len(peer_ips) >0:
|
||||
print(f'crypto session count {len(peer_ips)}')
|
||||
for ip in peer_ips:
|
||||
session_count += 1
|
||||
print('lookup crypto sessions %d\r'%session_count, end="")
|
||||
session = conn.send_command(f'show crypto session remote {ip} detail')
|
||||
# print(session.result)
|
||||
sessions.append(session.result)
|
||||
|
||||
# process scrapes
|
||||
print(f'lookup crypto sessions {len(sessions)}')
|
||||
if len(sessions) >0:
|
||||
n_cores = os.cpu_count()
|
||||
partial_function = partial(process_session)
|
||||
#with ThreadPoolExecutor(max_workers=1) as executor: # debug
|
||||
with ThreadPoolExecutor(max_workers=n_cores) as executor:
|
||||
executor.map(partial_function, sessions)
|
||||
print(f'session interface entries processed {task_count}')
|
||||
|
||||
# write to db
|
||||
if len(session_records) >0:
|
||||
requests = []
|
||||
for i in session_records:
|
||||
record = i
|
||||
filter = {'c_id': record['c_id']}
|
||||
requests.append(UpdateMany(filter, {'$set': record}, upsert=True))
|
||||
result = device_table.bulk_write(requests)
|
||||
print(result.bulk_api_result)
|
||||
return result
|
||||
|
||||
#### scrape 3)
|
||||
# 'show crypto ipsec sa peer {p}'
|
||||
# {'p2_interface': 'Tunnel6', 'local_ip': '10.225.112.42', 'peer_ip': '10.225.56.110', 'peer_port': '500', 'protected_vrf': 'none', 'pfs': 'N', 'p2_encr_algo': 'esp-256-aes', 'p2_hash_algo': 'esp-sha-hmac', 'p2_status': 'ACTIVE', 'crypto_map': 'Tunnel6-head-0'}
|
||||
# correlate to (scrape 1) + scrape 2)) with keys 'local_ip' 'peer_ip' 'peer_port'
|
||||
def cisco_vpn_phase2(collection, command_output, device_record, connection):
|
||||
print('\ncisco_vpn_phase2')
|
||||
|
||||
def process_p2_scrapes(record):
|
||||
global p2_scrapes
|
||||
global empty_p2_scrapes
|
||||
global p2_records
|
||||
interfaces = []
|
||||
if not len(record) >0:
|
||||
empty_p2_scrapes += 1
|
||||
else:
|
||||
for line in record.split('\n'):
|
||||
if 'interface: ' in line:
|
||||
interfaces.append(f'{line}\n')
|
||||
else:
|
||||
interfaces[-1] += f'{line}\n'
|
||||
# print(f'manual scrape tunnel interface count {len(interfaces)}')
|
||||
for int in interfaces:
|
||||
# reset interface vars
|
||||
for va in ['p2_interface', 'local_ip']:
|
||||
if va in locals():
|
||||
del va
|
||||
# get interface vars
|
||||
for line in int.split('\n'):
|
||||
if 'interface' in line:
|
||||
p2_interface = line.split(' ')[1]
|
||||
if 'local addr' in line:
|
||||
local_ip = line.split('addr ')[1]
|
||||
# strip up to 'protected vrf:' for ivrf loop
|
||||
intf = ""
|
||||
tag_found = False
|
||||
for line in int.split('\n'):
|
||||
if not tag_found:
|
||||
if 'protected vrf:' in line:
|
||||
intf += f'{line}\n'
|
||||
tag_found = True
|
||||
else:
|
||||
intf += f'{line}\n'
|
||||
# loop vrfs
|
||||
vrfs = []
|
||||
for line in intf.split('\n'):
|
||||
if 'protected vrf:' in line:
|
||||
vrfs.append(f'{line}\n')
|
||||
else:
|
||||
vrfs[-1] += f'{line}\n'
|
||||
for v in vrfs:
|
||||
# reset ivrf vars
|
||||
for va in ['peer_ip', 'peer_port', 'vrf', 'pfs', 'transform', 'p2_encr_algo', 'p2_hash_algo', 'status', 'crypto_map']:
|
||||
if va in locals():
|
||||
del va
|
||||
peer_ip_l = []
|
||||
peer_port_l = []
|
||||
vrf_l = []
|
||||
pfs_l = []
|
||||
transform_l = []
|
||||
status_l = []
|
||||
crypto_map_l = []
|
||||
p2_record_dict = {}
|
||||
|
||||
# get vrf vars
|
||||
for line in v.split('\n'):
|
||||
if 'current_peer' in line:
|
||||
peer_ip_l.append(line.lstrip(' ').split(' ')[1])
|
||||
peer_port_l.append(line.lstrip(' ').split(' ')[3])
|
||||
if 'PFS' in line:
|
||||
pfs_l.append(line.lstrip(' ').split(' ')[2].split(',')[0].lower())
|
||||
if 'transform' in line:
|
||||
transform_l.append(line.lstrip(' ').split('transform: ')[1].split(' ,')[0])
|
||||
if 'crypto map: ' in line:
|
||||
crypto_map_l.append(line.lstrip(' ').split('crypto map: ')[1])
|
||||
if 'Status' in line:
|
||||
status_l.append(line.split('Status: ')[1].split('(')[0])
|
||||
if 'protected vrf' in line:
|
||||
vrf_l.append(line.split('protected vrf: ')[1].replace('(', '').replace(')', ''))
|
||||
|
||||
# write vrf vars to record dict
|
||||
p2_record_dict.update({'p2_interface': p2_interface})
|
||||
p2_record_dict.update({'local_ip': local_ip})
|
||||
if len(peer_ip_l) >0:
|
||||
#peer_ip = peer_ip_l[0]
|
||||
p2_record_dict.update({'peer_ip': peer_ip_l[0]})
|
||||
if len(peer_port_l) >0:
|
||||
#peer_port = peer_port_l[0]
|
||||
p2_record_dict.update({'peer_port': peer_port_l[0]})
|
||||
if len(vrf_l) >0:
|
||||
#vrf = vrf_l[0]
|
||||
p2_record_dict.update({'protected_vrf': vrf_l[0]})
|
||||
if len(pfs_l) >0:
|
||||
#pfs = pfs_l[0].upper()
|
||||
p2_record_dict.update({'pfs': pfs_l[0].upper()})
|
||||
# else:
|
||||
# pfs = 'N'
|
||||
if len(transform_l) >0:
|
||||
transform = transform_l[0]
|
||||
#p2_encr_algo = transform.split(' ')[0]
|
||||
p2_record_dict.update({'p2_encr_algo': transform.split(' ')[0]})
|
||||
#p2_hash_algo = transform.split(' ')[1]
|
||||
p2_record_dict.update({'p2_hash_algo': transform.split(' ')[1]})
|
||||
if len(status_l) >0:
|
||||
#status = status_l[0]
|
||||
p2_record_dict.update({'p2_status': status_l[0]})
|
||||
if len(crypto_map_l) >0:
|
||||
#crypto_map = crypto_map_l[0]
|
||||
p2_record_dict.update({'crypto_map': crypto_map_l[0]})
|
||||
# print(p2_record_dict)
|
||||
|
||||
# check for required fields for complete/valid p2 record
|
||||
if all(include in p2_record_dict for include in ['local_ip', 'peer_ip', 'peer_port']):
|
||||
# print(p2_record_dict)
|
||||
p2_records.append(p2_record_dict)
|
||||
p2_scrapes += 1
|
||||
print('phase2 scrape processed %d\r'%p2_scrapes, end="")
|
||||
|
||||
def get_p2_scrapes(peers, connection):
|
||||
global scrape_count
|
||||
peer_commands = [f'show crypto ipsec sa peer {p}' for p in set(peers)]
|
||||
print(f'phase2 peer commands prepared {len(peer_commands)}')
|
||||
p2scrapes = []
|
||||
if len(peers) >0:
|
||||
try:
|
||||
with Scrapli(**connection) as conn:
|
||||
for c in peer_commands:
|
||||
p2scrape = conn.send_command(c)
|
||||
p2scrapes.append(p2scrape)
|
||||
scrape_count += 1
|
||||
print('lookup phase2 peers %d\r'%scrape_count, end="")
|
||||
return p2scrapes
|
||||
except Exception as e:
|
||||
print(f'exception_type: {type(e).__name__}')
|
||||
# pass
|
||||
print('phase2 fallback parser method error collecting scrapes, possible incomplete records for this device')
|
||||
return p2scrapes
|
||||
else:
|
||||
print('phase2 fallback parser method error, no phase1 peers/tunnels')
|
||||
return p2scrapes
|
||||
|
||||
# init vars
|
||||
device_name = device_record['DeviceName']
|
||||
device_table = collection['temp'][device_name] # create/use-existing temp subcollection
|
||||
global scrape_count
|
||||
scrape_count = 0
|
||||
global p2_scrapes
|
||||
p2_scrapes = 0
|
||||
global empty_p2_scrapes
|
||||
empty_p2_scrapes = 0
|
||||
global p2_records
|
||||
p2_records = []
|
||||
|
||||
# lookup all peer_ip
|
||||
peer_ip_query = device_table.find({"peer_ip": { "$exists": True }}, {"peer_ip":1, "_id":0})
|
||||
peers = [r['peer_ip'] for r in peer_ip_query]
|
||||
|
||||
# get p2 scrapes
|
||||
result = get_p2_scrapes(peers, connection)
|
||||
print(f'lookup phase2 peers {scrape_count}')
|
||||
|
||||
# process p2 scrapes
|
||||
if len(result) >0:
|
||||
scrape_results = [s.result for s in result]
|
||||
n_cores = os.cpu_count()
|
||||
partial_function = partial(process_p2_scrapes)
|
||||
# with ThreadPoolExecutor(max_workers=1) as executor: # debug
|
||||
with ThreadPoolExecutor(max_workers=n_cores) as executor:
|
||||
executor.map(partial_function, scrape_results)
|
||||
print(f'phase2 scrape processed {p2_scrapes}')
|
||||
print(f'empty phase2 scrapes {empty_p2_scrapes}')
|
||||
|
||||
# write to db
|
||||
if len(p2_records) >0:
|
||||
requests = []
|
||||
for i in p2_records:
|
||||
record = i
|
||||
filter = {'local_ip': record['local_ip'], 'peer_ip': record['peer_ip'], 'peer_port': record['peer_port']}
|
||||
requests.append(UpdateMany(filter, {'$set': record}, upsert=True))
|
||||
result = device_table.bulk_write(requests)
|
||||
print(result.bulk_api_result)
|
||||
return result
|
||||
|
||||
#### scrape 4)
|
||||
# {'crypto_map': 'Tunnel6-head-0', 'peer_ip': '10.227.112.50', 'pfs': 'N', 'transform_sets': [{'name': 'TS-AES256-SHA', 'p2_encr_algo': 'esp-256-aes', 'p2_hash_algo': 'esp-sha-hmac'}, {'name': 'TS-3DES-SHA', 'p2_encr_algo': 'esp-3des', 'p2_hash_algo': 'esp-sha-hmac'}], 'crypto_map_interface': ['Tunnel6'], 'RRI_enabled': False, 'default_p2_3des': False}
|
||||
# correlate to (scrape 1) + scrape 2) + scrape 3)) with keys 'peer_ip' 'crypto_map'
|
||||
def cisco_crypto_map(collection, command_output, device_record, connection):
|
||||
print('\ncisco_crypto_map')
|
||||
|
||||
def process_cryptomaps(cryptomap):
|
||||
# print(cryptomap)
|
||||
# loop lines in cryptomap entry
|
||||
global crypto_map_count
|
||||
global cryptomap_records
|
||||
tfs_found = False
|
||||
int_found = False
|
||||
tfset = []
|
||||
crypto_map_interface = []
|
||||
cryptomap_record_dict = {}
|
||||
for line in cryptomap.split('\n'):
|
||||
if 'Crypto Map "' in line: # older variant of ios
|
||||
crypto_map = line.split(' ')[2].replace('"', '')
|
||||
cryptomap_record_dict.update({'crypto_map': crypto_map})
|
||||
if 'Crypto Map IPv4 "' in line: # newer variant of ios
|
||||
crypto_map = line.split(' ')[3].replace('"', '')
|
||||
cryptomap_record_dict.update({'crypto_map': crypto_map})
|
||||
if 'ISAKMP Profile: ' in line:
|
||||
p1_profile = line.split(' ')[2]
|
||||
cryptomap_record_dict.update({'p1_profile': p1_profile})
|
||||
if 'Current peer: ' in line:
|
||||
peer_ip = line.split(' ')[2]
|
||||
cryptomap_record_dict.update({'peer_ip': peer_ip})
|
||||
# RRI devices use dynamic crypto map templates; the name of the crypto map may not match the template name (e.g. CM-BML-RRI != CDM-BML-RRI)
|
||||
if 'dynamic (created from dynamic map ' in line:
|
||||
# dynamic (created from dynamic map CDM-BML-RRI/200)
|
||||
crypto_map_template = line.split('dynamic map ')[1].split('/')[0]
|
||||
cryptomap_record_dict.update({'crypto_map_template': crypto_map_template})
|
||||
if 'PFS (Y/N): ' in line:
|
||||
pfs = line.split(' ')[2].upper()
|
||||
cryptomap_record_dict.update({'pfs': pfs})
|
||||
|
||||
if not tfs_found:
|
||||
if 'Transform sets=' in line:
|
||||
tfs_found = True
|
||||
pass
|
||||
elif ' } ,' in line:
|
||||
tfs_name = line.split(' ')[0].split(':')[0]
|
||||
tfs_encr_algo = line.replace('  ', ' ').split(' ')[2]  # collapse double spaces before splitting
|
||||
tfs_hash_algo = line.replace('  ', ' ').split(' ')[3]
|
||||
tfset.append({'name': tfs_name, 'p2_encr_algo': tfs_encr_algo, 'p2_hash_algo': tfs_hash_algo})
|
||||
else:
|
||||
tfs_found = False
|
||||
|
||||
if 'Reverse Route Injection Enabled' in line:
|
||||
cryptomap_record_dict.update({'RRI_enabled': True})
|
||||
|
||||
if not int_found:
|
||||
if 'Interfaces using crypto map ' in line:
|
||||
int_found = True
|
||||
pass
|
||||
else:
|
||||
if len(line) >0:
|
||||
crypto_map_interface.append(line)
|
||||
|
||||
# add possible list items to cryptomap record
|
||||
if len(tfset) >0:
|
||||
cryptomap_record_dict.update({'transform_sets': tfset})
|
||||
if len(crypto_map_interface) >0:
|
||||
cryptomap_record_dict.update({'crypto_map_interface' : crypto_map_interface})
|
||||
|
||||
# catch absence of RRI
|
||||
if 'RRI_enabled' not in cryptomap_record_dict:
|
||||
cryptomap_record_dict.update({'RRI_enabled': False})
|
||||
|
||||
# # DISABLE - transform_sets is dynamic, not the best source of truth
|
||||
# # determine if 1st/default P2 transform set is 3des
|
||||
# if 'transform_sets' in cryptomap_record_dict:
|
||||
# if '3des' in cryptomap_record_dict['transform_sets'][0]['p2_encr_algo'].lower():
|
||||
# cryptomap_record_dict.update({'default_p2_3des': True})
|
||||
# else:
|
||||
# cryptomap_record_dict.update({'default_p2_3des': False})
|
||||
# # print(cryptomap_record_dict)
|
||||
|
||||
# check for required fields for complete/valid cryptomap record (if the cryptomap has no peer_ip it has no use)
|
||||
if all(include in cryptomap_record_dict for include in ['peer_ip', 'crypto_map']):
|
||||
# print(cryptomap_record_dict)
|
||||
cryptomap_records.append(cryptomap_record_dict)
|
||||
crypto_map_count += 1
|
||||
print('cryptomaps processed %d\r'%crypto_map_count, end="")
|
||||
|
||||
# start, init vars
|
||||
global crypto_map_count
|
||||
crypto_map_count = 0
|
||||
global cryptomap_records
|
||||
cryptomap_records = []
|
||||
device_name = device_record['DeviceName']
|
||||
device_table = collection['temp'][device_name] # create/use-existing temp subcollection
|
||||
output = command_output['output']
|
||||
# print(output.result)
|
||||
|
||||
if output == 'error':
|
||||
print('parse failure, output too large for screen scrape, consider a command targeted at a single peer')
|
||||
elif output == 'compound':
|
||||
print('cisco_crypto_map is not a compound command')
|
||||
else:
|
||||
# strip everything up to first 'Crypto Map IPv4 '
|
||||
scrape = ""
|
||||
tag_found = False
|
||||
for line in output.result.split('\n'):
|
||||
if not tag_found:
|
||||
#if 'Crypto Map IPv4 ' in line:
|
||||
if 'Crypto Map ' in line:
|
||||
scrape += f'{line}\n'
|
||||
tag_found = True
|
||||
else:
|
||||
scrape += f'{line}\n'
|
||||
# print(scrape)
|
||||
|
||||
# split document into cryptomap entries
|
||||
cryptomaps = []
|
||||
crypto_map_found_count = 0
|
||||
try:
|
||||
if len(scrape) >0:
|
||||
for line in scrape.split('\n'): # this can fail, but only on a huge scrape that is hard to inspect by eye - mep-shared-rri-agg09
|
||||
# if 'Crypto Map IPv4 ' in line: # will not work on older cisco
|
||||
if 'Crypto Map ' in line:
|
||||
cryptomaps.append(f'{line}\n')
|
||||
crypto_map_found_count += 1
|
||||
print('lookup crypto maps %d\r'%crypto_map_found_count, end="")
|
||||
elif 'Crypto Map: ' in line: # these lines list the isakmp profile for the ipsec cryptomap profile, shorthand output that is not required
|
||||
pass
|
||||
else:
|
||||
cryptomaps[-1] += f'{line.lstrip()}\n'
|
||||
print(f'lookup crypto maps {crypto_map_found_count}')
|
||||
except Exception as e:
|
||||
print(f'failed to process cryptomap scrape: {e}')
|
||||
# print(scrape)
|
||||
pass
|
||||
|
||||
# process cryptomap scrapes
|
||||
n_cores = os.cpu_count()
|
||||
partial_function = partial(process_cryptomaps)
|
||||
# with ThreadPoolExecutor(max_workers=1) as executor: # debug
|
||||
with ThreadPoolExecutor(max_workers=n_cores) as executor:
|
||||
executor.map(partial_function, cryptomaps)
|
||||
print(f'cryptomaps processed {crypto_map_count}')
|
||||
|
||||
# write to db
|
||||
if len(cryptomap_records) >0:
|
||||
requests = []
|
||||
for i in cryptomap_records:
|
||||
record = i
|
||||
filter = {'peer_ip': record['peer_ip'], 'crypto_map': record['crypto_map']}
|
||||
requests.append(UpdateMany(filter, {'$set': record}, upsert=True))
|
||||
result = device_table.bulk_write(requests)
|
||||
print(result.bulk_api_result)
|
||||
return result
|
||||
|
||||
def cisco_isakmp_policy(collection, command_output, device_record, connection):
|
||||
print('\ncisco_isakmp_policy')
|
||||
device_name = device_record['DeviceName']
|
||||
ip = device_record['IPAddress']
|
||||
output = command_output['output']
|
||||
|
||||
if not output == 'error':
|
||||
scrape = ""
|
||||
tag_found = False
|
||||
isakmp_policy = []
|
||||
# split scrape by policies
|
||||
for line in output.result.split('\n'):
|
||||
# print(line)
|
||||
if not tag_found:
|
||||
if 'Global IKE policy' in line:
|
||||
tag_found = True
|
||||
else:
|
||||
scrape += f'{line}\n'
|
||||
# print(scrape)
|
||||
|
||||
# split policies by suite
|
||||
suite = []
|
||||
if len(scrape) >0:
|
||||
for line in scrape.split('\n'):
|
||||
if 'Protection suite of priority ' in line:
|
||||
suite.append(f'{line}\n')
|
||||
else:
|
||||
suite[-1] += f'{line}\n'
|
||||
|
||||
# get suite attributes
|
||||
for s in suite:
|
||||
suite_dict = {}
|
||||
# print(s)
|
||||
for line in s.split('\n'):
|
||||
#print(line)
|
||||
sline = line.lstrip()
|
||||
if 'Protection suite of priority' in sline:
|
||||
priority = sline.split(' ')[4]
|
||||
# print(priority)
|
||||
suite_dict.update({'priority': priority})
|
||||
if 'encryption algorithm:' in sline:
|
||||
if 'Advanced Encryption Standard' in sline:
|
||||
enc_algo = 'aes'
|
||||
elif 'Three key triple DE' in sline:
|
||||
enc_algo = '3des'
|
||||
elif 'Data Encryption Standard' in sline:
|
||||
enc_algo = 'des'
|
||||
else:
|
||||
enc_algo = 'no_match'
|
||||
if enc_algo != 'no_match':
|
||||
enc_kb = [int(x) for x in sline[sline.find("(")+1:sline.find(")")].split() if x.isdigit()]
|
||||
enc_kb = str(enc_kb[0]) if len(enc_kb) >0 else ''
|
||||
if len(enc_kb) >0:
|
||||
enc_algo = enc_algo + '_' + str(enc_kb)
|
||||
# print(enc_algo)
|
||||
suite_dict.update({'enc_algo': enc_algo})
|
||||
if 'hash algorithm:' in sline:
|
||||
if 'Secure Hash Standard 2' in sline:
|
||||
hash_algo = 'sha2'
|
||||
elif 'Secure Hash Standard' in sline:
|
||||
hash_algo = 'sha'
|
||||
elif 'Message Digest 5' in sline:
|
||||
hash_algo = 'md5'
|
||||
else:
|
||||
hash_algo = 'no_match'
|
||||
if hash_algo != 'no_match':
|
||||
hash_kb = [int(x) for x in sline[sline.find("(")+1:sline.find(")")].split() if x.isdigit()]
|
||||
hash_kb = str(hash_kb[0]) if len(hash_kb) >0 else ''
|
||||
if len(hash_kb) >0:
|
||||
hash_algo = hash_algo + '_' + str(hash_kb)
|
||||
# print(hash_algo)
|
||||
suite_dict.update({'hash_algo': hash_algo})
|
||||
if 'authentication method:' in sline:
|
||||
if 'Pre-Shared Key' in sline:
|
||||
auth_type = 'psk'
|
||||
else:
|
||||
auth_type = 'no_match'
|
||||
if 'Diffie-Hellman group:' in sline:
|
||||
dh_group = sline.split('Diffie-Hellman group:')[1].split(' ')[0].lstrip().replace('#', '')
|
||||
dh_group_kb = [int(x) for x in sline[sline.find("(")+1:sline.find(")")].split() if x.isdigit()]
|
||||
dh_group_kb = str(dh_group_kb[0]) if len(dh_group_kb) >0 else ''
|
||||
if len(dh_group_kb) >0:
|
||||
dh_group = dh_group + '_' + str(dh_group_kb)
|
||||
# print(dh_group)
|
||||
suite_dict.update({'dh_group': dh_group})
|
||||
# print(suite_dict)
|
||||
isakmp_policy.append(suite_dict)
|
||||
|
||||
# get isakmp policy precedence
|
||||
if len(isakmp_policy) >0:
|
||||
# print(isakmp_policy)
|
||||
print(f'isakmp policy entry count {len(isakmp_policy)}')
|
||||
update = {'isakmp_policy': isakmp_policy}
|
||||
# isakmp_policy.append({'priority': '0', 'enc_algo': '3des', 'hash_algo': 'sha', 'dh_group': '14_2048'}) # debug
|
||||
# find highest priority isakmp policy enc_algo
|
||||
highest_priority_policy = sorted([int(x['priority']) for x in isakmp_policy])[0]
|
||||
highest_priority_policy_algo = [x['enc_algo'] for x in isakmp_policy if x['priority'] == str(highest_priority_policy)]
|
||||
if 'des' in highest_priority_policy_algo[0]:
|
||||
# print(highest_priority_policy_algo[0])
|
||||
update.update({'isakmp_policy_default_p1_3des': True})
|
||||
else:
|
||||
update.update({'isakmp_policy_default_p1_3des': False})
|
||||
|
||||
# update _default table with device isakmp policy result
|
||||
# for i in update['isakmp_policy']:
|
||||
# print(i)
|
||||
# print(update['isakmp_policy_default_p1_3des'])
|
||||
filter = {'DeviceName': device_name}
|
||||
result = collection.update_one(filter, {'$set': update}, upsert=True)
|
||||
# print(dir(result))
|
||||
# print(result.acknowledged) # need to print the full insert update upsert stuff
|
||||
return result
|
||||
else:
|
||||
print(f'isakmp policy entry count {len(isakmp_policy)}')
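# Illustrative sketch (not part of the original script): the precedence check above selects
# the lowest numeric 'priority' with sorted(...)[0]; an equivalent form uses min() with a
# key function. Assumes the same isakmp_policy list-of-dicts shape built above.
def _example_default_policy_is_3des(isakmp_policy):
    default_policy = min(isakmp_policy, key=lambda p: int(p['priority']))
    # mirrors the substring test used above ('des' also matches plain DES)
    return 'des' in default_policy['enc_algo']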
|
||||
|
||||
def cisco_transform_set(collection, command_output, device_record, connection):
|
||||
print('\ncisco_transform_set')
|
||||
|
||||
def process_transform_set(scrape):
|
||||
# print(scrape)
|
||||
tfs_found = False
|
||||
tfset = []
|
||||
for line in scrape.split('\n'):
|
||||
sline = line.lstrip()
|
||||
if not tfs_found:
|
||||
if 'Transform sets=' in sline:
|
||||
tfs_found = True
|
||||
pass
|
||||
elif ' } ,' in sline:
|
||||
tfs_name = sline.split(' ')[0].split(':')[0]
|
||||
tfs_encr_algo = sline.replace('  ', ' ').split(' ')[2]  # collapse double spaces before splitting
|
||||
tfs_hash_algo = sline.replace('  ', ' ').split(' ')[3]
|
||||
tfset.append({'name': tfs_name, 'p2_encr_algo': tfs_encr_algo, 'p2_hash_algo': tfs_hash_algo})
|
||||
else:
|
||||
tfs_found = False
|
||||
return tfset
|
||||
|
||||
## init vars
|
||||
update_src = []
|
||||
device_name = device_record['DeviceName']
|
||||
device_type = device_record['DeviceType']
|
||||
device_table = collection['temp'][device_name]
|
||||
requests = []
|
||||
|
||||
## dmvpn lookup ordered transform set
|
||||
# print('dmvpn')
|
||||
if device_type in ["IP-VPNHUB", "IP-VCSR-HUB"]:
|
||||
tunnel_interfaces = device_table.distinct('p2_interface')
|
||||
print(f'tunnel interfaces {tunnel_interfaces}')
|
||||
if len(tunnel_interfaces) >0:
|
||||
for t in tunnel_interfaces:
|
||||
interface_name = t
|
||||
with Scrapli(**connection) as conn:
|
||||
interface = conn.send_command(f'show interface {interface_name}')
|
||||
parsed = interface.genie_parse_output()
|
||||
# print(json.dumps(parsed, indent=4))
|
||||
if 'tunnel_profile' in parsed[interface_name]:
|
||||
ipsec_profile_name = parsed[interface_name]['tunnel_profile']
|
||||
elif 'Tunnel protection via IPSec' in interface.result:
|
||||
# some ios genie outputs are not fully parsed, fall back to manual parse
|
||||
for line in interface.result.split('\n'):
|
||||
if 'Tunnel protection via IPSec' in line:
|
||||
ipsec_profile_name = [a for a in line[line.find("(")+1:line.find(")")].split()][1]
|
||||
# print(ipsec_profile_name)
|
||||
if 'ipsec_profile_name' in locals():
|
||||
with Scrapli(**connection) as conn:
|
||||
ipsec_profile = conn.send_command(f'show crypto ipsec profile {ipsec_profile_name}')
|
||||
# print(ipsec_profile.result)
|
||||
transform_set = process_transform_set(ipsec_profile.result)
|
||||
match_field = 'p2_interface'
|
||||
match_field_value = t
|
||||
update_src.append({'match_field': match_field, 'match_field_value': match_field_value, 'transform_set': transform_set})
|
||||
|
||||
## rri lookup ordered transform set
|
||||
# print('rri')
|
||||
if device_type in ["IP-VPNAGG", "IP-P2PAGG"]:
|
||||
crypto_map_templates = device_table.distinct('crypto_map_template')
|
||||
# print(crypto_map_templates)
|
||||
if len(crypto_map_templates) >0:
|
||||
for t in crypto_map_templates:
|
||||
with Scrapli(**connection) as conn:
|
||||
crypto_map = conn.send_command(f'show crypto dynamic-map tag {t}')
|
||||
# print(crypto_map.result)
|
||||
transform_set = process_transform_set(crypto_map.result)
|
||||
match_field = 'crypto_map_template'
|
||||
match_field_value = t
|
||||
update_src.append({'match_field': match_field, 'match_field_value': match_field_value, 'transform_set': transform_set})
|
||||
|
||||
## build db update requests
|
||||
# print(json.dumps(update_src, indent=4))
|
||||
if len(update_src) >0:
|
||||
for r in update_src:
|
||||
query = {r['match_field']: r['match_field_value']}
|
||||
# print(query)
|
||||
object_ids = [d for d in device_table.distinct('_id', query)]
|
||||
# print(object_ids)
|
||||
query = { "_id" : { "$in" : object_ids } }
|
||||
update = {'ordered_transform_set': r['transform_set']}
|
||||
requests.append(UpdateMany(query, {'$set': update}, upsert=True))
|
||||
# print(requests)
|
||||
|
||||
## bulk update collection documents with ordered_transform_sets
|
||||
if len(requests) >0:
|
||||
dst_result = device_table.bulk_write(requests)
|
||||
print(dst_result.bulk_api_result)
|
||||
return dst_result
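# Illustrative sketch (not part of the original script): the interface loop above first tries
# genie's structured output and then falls back to plucking the IPsec profile name out of the
# raw text. The fallback extraction, assuming a line of the approximate form
# 'Tunnel protection via IPSec (profile PROF-NAME)', reduces to:
def _example_profile_from_interface_output(interface_result):
    for line in interface_result.split('\n'):
        if 'Tunnel protection via IPSec' in line:
            tokens = line[line.find("(") + 1:line.find(")")].split()
            if len(tokens) > 1:
                return tokens[1]  # second token inside the parentheses is the profile name
    return None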
|
||||
|
||||
def triple_des_check(collection, command_output, device_record, connection):
|
||||
print('\ntriple_des_check')
|
||||
|
||||
## owing to the age of mongodb 3.0.15 some filters/operators are not available, the following queries could otherwise be merged and done in bulk in the query language with a huge performance uptick
|
||||
# "$arrayElemAt" "$first" "$slice", "$regex" also does not honour read ahead negative match (?!3des)
|
||||
# https://stackoverflow.com/questions/29664097/what-is-the-syntax-for-mongodb-query-for-boolean-values
|
||||
# https://www.tutorialspoint.com/get-the-first-element-in-an-array-and-return-using-mongodb-aggregate
|
||||
|
||||
def p2_encr_algo_check(collection, triple_des_match = True):
|
||||
if triple_des_match:
|
||||
#regex_statement = {'$regex': '.*3des.*', '$options': 'i'}
|
||||
regex_statement = re.compile('(?i).*3DES.*')
|
||||
else:
|
||||
regex_statement = {'$not': re.compile('(?i).*3DES.*')}
|
||||
result = collection.aggregate([
|
||||
{"$match": {"ordered_transform_set": {"$exists": True}}},
|
||||
{"$match": {'p2_encr_algo': regex_statement }},
|
||||
{"$project": {"_id": 1}}
|
||||
])
|
||||
matched_doc_ids = [d['_id'] for d in result]
|
||||
# print(dumps(matched_doc_ids, indent=4))
|
||||
return matched_doc_ids
|
||||
|
||||
def first_ordered_transform_set_check(collection, doc_ids, triple_des_match = True):
|
||||
matched_doc_ids = []
|
||||
if triple_des_match:
|
||||
regex_statement = re.compile('(?i).*3DES.*')
|
||||
else:
|
||||
regex_statement = {'$not': re.compile('(?i).*3DES.*')}
|
||||
for doc_id in doc_ids:
|
||||
result = collection.aggregate([
|
||||
{"$match": { "_id" : doc_id }},
|
||||
{"$unwind": "$ordered_transform_set"},
|
||||
{"$limit": 1 },
|
||||
{"$match": {'ordered_transform_set.p2_encr_algo': regex_statement}},
|
||||
{"$project": {"_id": 1}}
|
||||
])
|
||||
for result_id in [d['_id'] for d in result]:
|
||||
matched_doc_ids.append(result_id)
|
||||
# print(dumps(matched_doc_ids, indent=4))
|
||||
return matched_doc_ids
|
||||
|
||||
def tdes_requests_builder(requests, doc_ids, p2_default_3des, spoke_p2_default_3des, spoke_p2_algo_preference):
|
||||
if len(doc_ids) >0:
|
||||
update = {}
|
||||
update.update({'p2_default_3des': p2_default_3des})
|
||||
if spoke_p2_default_3des != 'unset':
|
||||
update.update({'spoke_p2_default_3des': spoke_p2_default_3des})
|
||||
update.update({'spoke_p2_algo_preference': spoke_p2_algo_preference})
|
||||
# print(json.dumps(update, indent=4))
|
||||
query = { "_id" : { "$in" : doc_ids } }
|
||||
requests.append(UpdateMany(query, {'$set': update}, upsert=True))
|
||||
return requests
|
||||
|
||||
## init vars
|
||||
device_name = device_record['DeviceName']
|
||||
device_table = collection['temp'][device_name]
|
||||
requests = []
|
||||
|
||||
#### p2_encr_algo 3des
|
||||
triple_des_match = True
|
||||
tdes_doc_ids = p2_encr_algo_check(device_table, triple_des_match)
|
||||
|
||||
## 1st ordered_transform_set 3des
|
||||
triple_des_match = True
|
||||
tdes_tdes_ids = first_ordered_transform_set_check(device_table, tdes_doc_ids, triple_des_match)
|
||||
# p2_default_3des = True / spoke_p2_default_3des = unset / spoke_p2_algo_preference = unknown
|
||||
requests = tdes_requests_builder(requests, tdes_tdes_ids, True, 'unset', 'unknown')
|
||||
|
||||
## 1st ordered_transform_set NOT 3des
|
||||
triple_des_match = False
|
||||
tdes_ntdes_ids = first_ordered_transform_set_check(device_table, tdes_doc_ids, triple_des_match)
|
||||
# p2_default_3des False / spoke_p2_default_3des True / spoke_p2_algo_preference = 3des
|
||||
requests = tdes_requests_builder(requests, tdes_ntdes_ids, False, True, '3des')
|
||||
|
||||
#### p2_encr_algo NOT 3des
|
||||
triple_des_match = False
|
||||
ntdes_doc_ids = p2_encr_algo_check(device_table, triple_des_match)
|
||||
|
||||
## 1st ordered_transform_set 3des
|
||||
triple_des_match = True
|
||||
ntdes_tdes_ids = first_ordered_transform_set_check(device_table, ntdes_doc_ids, triple_des_match)
|
||||
# p2_default_3des True / spoke_p2_default_3des False / spoke_p2_algo_preference = not 3des
|
||||
requests = tdes_requests_builder(requests, ntdes_tdes_ids, True, False, 'not 3des')
|
||||
|
||||
## 1st ordered_transform_set NOT 3des
|
||||
triple_des_match = False
|
||||
ntdes_ntdes_ids = first_ordered_transform_set_check(device_table, ntdes_doc_ids, triple_des_match)
|
||||
# p2_default_3des False / spoke_p2_default_3des unset / spoke_p2_algo_preference = unknown
|
||||
requests = tdes_requests_builder(requests, ntdes_ntdes_ids, False, 'unset', 'unknown')
|
||||
|
||||
## bulk update collection documents with ordered_transform_sets
|
||||
if len(requests) >0:
|
||||
result = device_table.bulk_write(requests)
|
||||
print(result.bulk_api_result)
|
||||
return result
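# Illustrative sketch (not part of the original script): the four query/update passes above
# amount to a classification on two booleans - is the negotiated phase2 algorithm 3DES, and
# is 3DES the first (default) entry of the hub's ordered transform set. A compact restatement
# returning the same flag values used above:
def _example_classify_3des(negotiated_is_3des, hub_default_is_3des):
    if negotiated_is_3des and hub_default_is_3des:
        return {'p2_default_3des': True}    # hub default is 3DES, spoke preference unknown
    if negotiated_is_3des and not hub_default_is_3des:
        # hub prefers something else, so the spoke must have pulled the tunnel down to 3DES
        return {'p2_default_3des': False, 'spoke_p2_default_3des': True, 'spoke_p2_algo_preference': '3des'}
    if not negotiated_is_3des and hub_default_is_3des:
        # hub defaults to 3DES but the spoke negotiated something else
        return {'p2_default_3des': True, 'spoke_p2_default_3des': False, 'spoke_p2_algo_preference': 'not 3des'}
    return {'p2_default_3des': False}       # no 3DES involved, spoke preference unknown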
|
||||
File diff suppressed because it is too large
|
|
@@ -0,0 +1,111 @@
|
|||
import re
|
||||
import socket
|
||||
from pymongo import InsertOne, DeleteMany, ReplaceOne, UpdateOne, UpdateMany
|
||||
import logging
|
||||
|
||||
def device_record(collection):
|
||||
# print('\ndevice_record')
|
||||
logger = logging.getLogger('main')
|
||||
logger.info('Lookup mongodb for device records')
|
||||
|
||||
### Query parameters
|
||||
## DeviceType
|
||||
# existing device types ["IP-VPNHUB", "IP-VPNAGG", "IP-P2PAGG"]
|
||||
# new type 'IP-VCSR-HUB' for cloud routers
|
||||
## DeviceStatus
|
||||
# {'Testing', 'Installed', 'Order Cancelled', 'Hold', 'De-Installed', 'Configured', 'Operational', 'Pend De-Install', 'New'}
|
||||
# don't exclude 'Operational' / 'Testing' / 'Configured'; the device is up and may have tunnels
|
||||
# exclude 'New'; the device may be physically installed but not configured
|
||||
# exclude 'Pend De-Install'; the device doesn't seem to be reachable, maybe powered off ready to de-rack
|
||||
## DeviceName
|
||||
# exclude devices suffixed _old or _ol
|
||||
## Environment_Usage
|
||||
# exclude 'QA' / 'UAT', transitory tunnels no need to record
|
||||
query = { "raw.DeviceType": {"$in": ["IP-VPNHUB", "IP-VPNAGG", "IP-P2PAGG", "IP-VCSR-HUB"]},
|
||||
"raw.Environment_Usage": {"$nin": ["QA", "UAT"]},
|
||||
"raw.DeviceStatus": {"$nin": ["Order Cancelled", "De-Installed", "Installed", "New", "Hold", "Pend De-Install"]},
|
||||
"raw.DeviceName": {"$nin": [re.compile('.*_old$'), re.compile('.*_ol$')]}
|
||||
}
|
||||
|
||||
result = collection.find(query)
|
||||
device_dict = {}
|
||||
include_fields = ['DeviceRecNum', 'DeviceType', 'DeviceDescription', 'DeviceStatus', 'Site', 'Country', 'Region', 'Division']
|
||||
for i in result:
|
||||
device_entry = {}
|
||||
device_attributes = {}
|
||||
if 'DeviceName' in i['raw']:
|
||||
for l in include_fields:
|
||||
if l in i['raw']:
|
||||
device_attributes.update({l: i['raw'][l]})
|
||||
device_entry = { i['raw']['DeviceName']: device_attributes }
|
||||
device_dict.update(device_entry)
|
||||
return device_dict
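# Illustrative usage sketch (not part of the original script), assuming a pymongo connection
# to the NMS database; the URI and db/collection names below are placeholders, not the ones
# used by the main script.
def _example_device_record_usage():
    from pymongo import MongoClient
    client = MongoClient('mongodb://nms-host:27017')   # placeholder host
    tickets = client['nms']['device_tickets']          # placeholder db/collection names
    devices = device_record(tickets)
    # devices is keyed by DeviceName, e.g. something like {'lon-vpn22': {'DeviceType': 'IP-VPNHUB', ...}}
    return devices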
|
||||
|
||||
def mgmt_address(collection, device_dict):
|
||||
# print('\nmgmt_address')
|
||||
logger = logging.getLogger('main')
|
||||
logger.info('Lookup mongodb for device management addresses')
|
||||
for device in device_dict.keys():
|
||||
query = { "raw.CHR_DeviceName": device }
|
||||
result = collection.find(query)
|
||||
for r in result:
|
||||
# a populated 'DNS_Update_Timestamp' field seems to be the best indicator of the management IP record
|
||||
if r['normalized']['DNS_Update_Timestamp'] is not None:
|
||||
if r['raw']['CHR_FQDN'] is not None:
|
||||
device_dict[device].update({'FQDN': r['raw']['CHR_FQDN']})
|
||||
if r['normalized']['CHR_IPAddress']['ip'] is not None:
|
||||
device_dict[device].update({'IPAddress': r['normalized']['CHR_IPAddress']['ip']})
|
||||
return(device_dict)
|
||||
|
||||
def dns_lookup(suffix, device_dict):
|
||||
# print('\ndns_lookup')
|
||||
logger = logging.getLogger('main')
|
||||
logger.info('Lookup DNS for absent device management addresses')
|
||||
device_lookup = []
|
||||
for k in device_dict.keys():
|
||||
if 'FQDN' not in device_dict[k]:
|
||||
device_lookup.append(k)
|
||||
if 'FQDN' in device_dict[k] and not len(device_dict[k]["FQDN"]) >0:
|
||||
device_lookup.append(k)
|
||||
# lookup device in DNS
|
||||
if len(device_lookup) >0:
|
||||
# print(f'{len(device_lookup)} devices with no MongoDB field for FQDN, perform DNS lookup for: \n {device_lookup}')
|
||||
logger.info(f'{len(device_lookup)} devices with no MongoDB field for FQDN, perform DNS lookup for:')
|
||||
logger.info(f'{device_lookup}')
|
||||
for name in device_lookup:
|
||||
for s in suffix:
|
||||
fqdn = name + '.' + s
|
||||
try:
|
||||
ip = socket.gethostbyname(fqdn)
|
||||
# print(f'found DNS record {fqdn}')
|
||||
logger.info(f'found DNS record {fqdn}')
|
||||
device_dict[name].update({"FQDN": fqdn, "IPAddress": ip})
|
||||
except socket.gaierror as e:
|
||||
pass
|
||||
# populate 'unknown' fields for devices found in MongoDB without DNS records (this is just a catch-all to avoid further device inspection)
|
||||
for k in device_dict.keys():
|
||||
if 'FQDN' not in device_dict[k] or not len(device_dict[k]['FQDN']) >0:
|
||||
device_dict[k].update({"FQDN": 'unknown'})
|
||||
if 'IPAddress' not in device_dict[k] or not len(device_dict[k]['IPAddress']) >0:
|
||||
device_dict[k].update({"IPAddress": 'unknown'})
|
||||
# print(f'{device_dict["lon-vpn22"]}')
|
||||
return(device_dict)
|
||||
|
||||
def write_devices_collection(collection, device_dict):
|
||||
# print('\nwrite_devices_collection')
|
||||
logger = logging.getLogger('main')
|
||||
logger.info('Write updated device records to collection')
|
||||
# merge 'DeviceName' from key into value dict
|
||||
records = [{**device_dict[i], 'DeviceName': i} for i in device_dict.keys()]
|
||||
requests = []
|
||||
for i in records:
|
||||
record = i
|
||||
filter = {'DeviceName': i['DeviceName']}
|
||||
requests.append(ReplaceOne(filter, record, upsert=True))
|
||||
result = collection.bulk_write(requests)
|
||||
# object_ids = [str(bson.objectid.ObjectId(oid=id)) for id in result.upserted_ids.values()]
|
||||
# print(result.bulk_api_result)
|
||||
# logger.info(result.bulk_api_result)
|
||||
logger.info(f'Database: inserted_count {result.inserted_count} upserted_count {result.upserted_count} matched_count {result.matched_count} modified_count {result.modified_count} deleted_count {result.deleted_count}')
|
||||
object_ids = [id for id in result.upserted_ids.values()]
|
||||
return object_ids
|
||||
|
|
@@ -0,0 +1,694 @@
|
|||
from pymongo import InsertOne, DeleteMany, ReplaceOne, UpdateOne, UpdateMany
|
||||
from bson.json_util import dumps, loads
|
||||
import logging
|
||||
from netaddr import valid_ipv4
|
||||
import re
|
||||
|
||||
# pymongo 4.3.3 has issues with the older version of MongoDB installed on rstlcnscmgd01.open.corp.tnsi.com, use an older library
|
||||
# pymongo.errors.ConfigurationError: Server at rstlcnscmgd01.open.corp.tnsi.com:27017 reports wire version 3, but this version of PyMongo requires at least 6 (MongoDB 3.6).
|
||||
# pip install pymongo==3.13.0
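# Illustrative sketch (not part of the original script): a defensive check that the installed
# pymongo is the 3.x line the old server needs, rather than waiting for the ConfigurationError
# above to surface at connect time.
def _example_check_pymongo_version():
    import pymongo
    if pymongo.version_tuple[0] >= 4:
        raise RuntimeError(f'pymongo {pymongo.version} installed; this script expects the 3.x line (pip install pymongo==3.13.0)')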
|
||||
|
||||
def document_ids(collection, query={}, query_modifier={}):
|
||||
## get all document ids in a collection
|
||||
# device_ids = document_ids(mycollection)
|
||||
|
||||
## get all contactable cisco device ids
|
||||
# device_ids = document_ids(mycollection, { "session_protocol" : "ssh", "vendor": "cisco" })
|
||||
|
||||
## get all contactable cisco device ids where device type is IP-VPNAGG using a query modifier
|
||||
# device_ids = document_ids(mycollection, { "session_protocol" : "ssh", "vendor": "cisco" }, { "DeviceType": "IP-VPNAGG" })
|
||||
if len(query_modifier) >0:
|
||||
query.update(query_modifier)
|
||||
result = [d for d in collection.distinct('_id', query)]
|
||||
return result
|
||||
|
||||
def device_names(collection, query_modifier = {}, device_name_list = []):
|
||||
if len(device_name_list) >0:
|
||||
query = { "DeviceName" : { "$in" : device_name_list } }
|
||||
else:
|
||||
query = {}
|
||||
if len(query_modifier) >0:
|
||||
query.update(query_modifier)
|
||||
projection = {"_id": 1, "DeviceName": 1}
|
||||
result = {d['DeviceName']:d['_id'] for d in collection.find(query, projection)}
|
||||
return result
|
||||
|
||||
#def deduplicate_collection(collection, mode='list', ignore_schema_keys=['_id'], required_schema_keys=[]):
|
||||
def deduplicate_collection(**kwargs):
|
||||
## check params
|
||||
required_args = ['collection']
|
||||
missing_args = [arg for arg in required_args if arg not in kwargs.keys()]
|
||||
if len(missing_args) >0:
|
||||
print(f'{deduplicate_collection.__name__} missing arguments {missing_args}')
|
||||
quit()
|
||||
collection = kwargs['collection']
|
||||
mode = kwargs['mode'] if 'mode' in kwargs.keys() else 'list'
|
||||
ignore_schema_keys = kwargs['ignore_schema_keys'] if 'ignore_schema_keys' in kwargs.keys() else ['_id']
|
||||
required_schema_keys = kwargs['required_schema_keys'] if 'required_schema_keys' in kwargs.keys() else []
|
||||
logger_name = kwargs['logger_name'] if 'logger_name' in kwargs.keys() else 'main'
|
||||
logger = logging.getLogger(logger_name)
|
||||
|
||||
## what does this func do?
|
||||
#
|
||||
## dedupe all documents with exactly matching schemas and keys, required_schema_keys=[] must be an empty list
|
||||
#
|
||||
# # this example will find all exactly matching documents with the exact same keys and key values excluding the unique attribute '_id', then remove duplicates
|
||||
# mode = 'list' / 'show' / 'delete'
|
||||
# ignore_schema_keys = ['_id']
|
||||
# deduplicate_collection(collection = collection, mode = mode, ignore_schema_keys = ignore_schema_keys)
|
||||
#
|
||||
# Or
|
||||
#
|
||||
## dedupe documents where smaller documents are a subset of larger documents with matching key values, described by the schema in required_schema_keys=[]
|
||||
# keeps the largest document(s), discards the smaller document(s)
|
||||
# small document: ["local_ip", "p1_ivrf", "peer_ip", "p1_dh_group", "p1_encr_algo", "p1_hash_algo", "p1_auth_type", "p1_status", "local_port", "crypto_session_interface", "peer_port", "p2_fvrf", "peer_vpn_id"]
|
||||
# large document: ['local_ip', 'p1_ivrf', 'peer_ip', 'p1_dh_group', 'p1_encr_algo', 'p1_hash_algo', 'p1_auth_type', 'p1_status', 'local_port', 'ipsec_flow', 'crypto_session_interface', 'peer_port', 'p2_fvrf', 'peer_vpn_id', 'p2_interface', 'protected_vrf', 'pfs', 'p2_encr_algo', 'p2_hash_algo', 'p2_status', 'crypto_map', 'RRI_enabled', 'transform_sets', 'default_p2_3des', 'last_modified']
|
||||
#
|
||||
# # this example shows the state of an idle tunnel, when it is up the phase2 and crypto session information is present, when the scrape is rerun it may capture the tunnel in either state, potentially creating two distinct documents
|
||||
# mode = 'list' / 'show' / 'delete'
|
||||
# ignore_schema_keys = ['_id', 'last_modified', 'session_status']
|
||||
# idle_connection = ["local_ip", "p1_ivrf", "peer_ip", "p1_dh_group", "p1_encr_algo", "p1_hash_algo", "p1_auth_type", "p1_status", "local_port", "crypto_session_interface", "peer_port", "p2_fvrf", "peer_vpn_id"]
|
||||
# deduplicate_collection(collection = collection, mode = mode, ignore_schema_keys = ignore_schema_keys, required_schema_keys = idle_connection)
|
||||
#
|
||||
## the ignore_schema_keys=[] list masks keys from the search query that would otherwise identify documents as unique; at a minimum the '_id' key must be excluded
|
||||
|
||||
## init
|
||||
delete_object_ids = []
|
||||
document_schemas = []
|
||||
partial_dedupe_document_key_count = 0
|
||||
|
||||
## select dynamic dedupe matching exact document keys or partial dedupe matching some document keys
|
||||
if len(required_schema_keys) >0:
|
||||
partial_dedupe = True
|
||||
operation = 'operation = deduplication of all documents as subsets of larger documents containing matching schemas and key values'
|
||||
partial_dedupe_document_key_count = len(ignore_schema_keys) + len(required_schema_keys)
|
||||
else:
|
||||
partial_dedupe = False
|
||||
operation = 'operation = deduplication of all documents with matching schemas and key values'
|
||||
|
||||
## return all documents in collection to find all keys in collection and all distinct document schemas
|
||||
# inefficient, with later versions of mongo this can be achieved within the query language
|
||||
# https://stackoverflow.com/questions/2298870/get-names-of-all-keys-in-the-collection
|
||||
result = collection.find()
|
||||
for i in result:
|
||||
keys = [k for k in i.keys() if k not in ignore_schema_keys]
|
||||
if keys not in document_schemas:
|
||||
document_schemas.append(keys)
|
||||
# print(f'available schemas\n{document_schemas}')
|
||||
|
||||
## get all the schema keys in collection, this is used to mask keys with exact document key matching
|
||||
all_schema_keys = list(set(sum(document_schemas, [])))
|
||||
# print(f'all schema keys\n{all_schema_keys}')
|
||||
|
||||
# override document_schemas with a single document schema from required_schema_keys
|
||||
if partial_dedupe:
|
||||
document_schemas = [required_schema_keys]
|
||||
|
||||
## find duplicate documents per schema
|
||||
for schema in document_schemas:
|
||||
## get all _id schema keys used in aggregate query to match duplicate documents
|
||||
id_keys = {k:f'${k}' for k in schema}
|
||||
include_keys = {k:{ "$exists": True } for k in schema}
|
||||
|
||||
## merge include keys {'$exists': True} + exclude keys{'$exists': False} for the first aggregate $match filter to ensure only records with the exact same keys as schema are matched
|
||||
# find all keys in all_schema_keys not in this document schema
|
||||
exclude_keys_list = list(set(all_schema_keys) - set(schema))
|
||||
# exclude_keys_list.append('test_key')
|
||||
exclude_keys = {k:{ "$exists": False} for k in exclude_keys_list}
|
||||
mask_keys = include_keys.copy()
|
||||
mask_keys.update(exclude_keys)
|
||||
# print(f'document match query for this schema\n{mask_keys}')
|
||||
|
||||
# ## debug
|
||||
# print('\n')
|
||||
# print(f'document schema\n{schema}')
|
||||
# print(f'mask documents with required keys only\n{include_keys}')
|
||||
# print(f'mask documents with exact keys\n{mask_keys}')
|
||||
# print(f'search documents with these keys by matching value (should match schema)\n{mask_keys}')
|
||||
|
||||
## find duplicate documents with matching values for id_keys with schema mask_keys, or find one/more document(s) matching schema mask_keys
|
||||
if not partial_dedupe:
|
||||
match_count = 2
|
||||
query_keys = mask_keys
|
||||
else:
|
||||
match_count = 1
|
||||
query_keys = include_keys
|
||||
|
||||
## return the content of duplicate documents
|
||||
duplicates = collection.aggregate([
|
||||
{
|
||||
"$match": mask_keys
|
||||
},
|
||||
{ "$group": {
|
||||
"_id": id_keys,
|
||||
"count": {"$sum": 1}
|
||||
}
|
||||
},
|
||||
{ "$match": {
|
||||
"count": {"$gte": match_count}
|
||||
}
|
||||
},
|
||||
{ "$sort": {
|
||||
"count": -1
|
||||
}
|
||||
}
|
||||
])
|
||||
# print(dumps(duplicates, indent=4))
|
||||
|
||||
## loop duplicate document content, aggregate search using schema keys mask and get document object_ids for deletion
|
||||
for duplicate_document in duplicates:
|
||||
query = {k:v for k, v in duplicate_document['_id'].items()}
|
||||
# print(query)
|
||||
filtered_result = collection.aggregate([
|
||||
{
|
||||
"$match": query_keys
|
||||
},
|
||||
{
|
||||
"$match": query
|
||||
},
|
||||
])
|
||||
#print(dumps(filtered_result, indent=4))
|
||||
#print(len(dumps(filtered_result, indent=4)))
|
||||
|
||||
if not partial_dedupe:
|
||||
## get document ids of exactly matching documents
|
||||
object_ids = [r['_id'] for r in filtered_result]
|
||||
## remove the first duplicate document_id, this will be the only remaining document
|
||||
object_ids.pop(0)
|
||||
# print(object_ids)
|
||||
delete_object_ids.extend(object_ids)
|
||||
else:
|
||||
preserve_document_ids = []
|
||||
query_result = [r for r in filtered_result]
|
||||
query_result_ids = [r['_id'] for r in query_result]
|
||||
if len(query_result) >1:
|
||||
for d in query_result:
|
||||
if len(d.keys()) > partial_dedupe_document_key_count:
|
||||
preserve_document_ids.append(d['_id'])
|
||||
|
||||
## this is too much logic; it can be done by calling the function again for full deduplication. if date evaluation is needed, another function should be written, targeted more towards the use case
|
||||
## keep this logic for the 'previous/current_configuration' key generation and the date range selection for report generation
|
||||
#
|
||||
# query_result.sort(key=len, reverse=True)
|
||||
# largest_document_len = len(query_result[0].keys())
|
||||
# largest_documents = [d for d in query_result if len(d.keys()) == largest_document_len]
|
||||
# # print(dumps(largest_documents, indent=4))
|
||||
# latest_date = datetime.datetime.min
|
||||
# latest_document = {}
|
||||
# for l in largest_documents:
|
||||
# if 'last_modified' in l:
|
||||
# if l['last_modified'] > latest_date:
|
||||
# latest_date = l['last_modified']
|
||||
# latest_document.update(l)
|
||||
# if len(latest_document.keys()) >0:
|
||||
# preserve_document_ids.append(latest_document['_id'])
|
||||
# else:
|
||||
# preserve_document_ids.append(largest_documents[0]['_id'])
|
||||
|
||||
elif len(query_result) == 1:
|
||||
preserve_document_ids.append(query_result[0]['_id'])
|
||||
## find document ids to remove
|
||||
remove_document_ids = [i for i in query_result_ids if i not in preserve_document_ids]
|
||||
delete_object_ids.extend(remove_document_ids)
|
||||
|
||||
# ## debug
|
||||
# print('\n')
|
||||
# print(f'all documents {query_result_ids}')
|
||||
# print(f'remove documents {remove_document_ids}')
|
||||
# print(f'keep documents {preserve_document_ids}')
|
||||
|
||||
## get unique document_ids
|
||||
delete_object_ids = list(set(delete_object_ids))
|
||||
|
||||
## list object_ids of duplicate records
|
||||
if mode == 'list':
|
||||
# print(operation)
|
||||
# print(f'mode = {mode}\n')
|
||||
# print(f'object_ids to delete\n{delete_object_ids}')
|
||||
logger.info(f'{operation}')
|
||||
logger.info(f'mode = {mode}')
|
||||
logger.info(f'object_ids to delete:')
|
||||
logger.info(f'{delete_object_ids}')
|
||||
|
||||
## show duplicate records
|
||||
if mode == 'show':
|
||||
# print(operation)
|
||||
# print(f'mode = {mode}\n')
|
||||
# query = { "_id" : { "$in" : delete_object_ids } }
|
||||
# result = collection.find(query)
|
||||
# print('documents to delete')
|
||||
# for r in result:
|
||||
# print(r)
|
||||
# print(f'\ndocument ids to delete\n{delete_object_ids}')
|
||||
logger.info(f'{operation}')
|
||||
logger.info(f'mode = {mode}')
|
||||
query = { "_id" : { "$in" : delete_object_ids } }
|
||||
result = collection.find(query)
|
||||
logger.info('documents to delete:')
|
||||
for r in result:
logger.info(f'{r}')
|
||||
logger.info(f'document ids to delete:')
|
||||
logger.info(f'{delete_object_ids}')
|
||||
|
||||
## remove duplicate documents
|
||||
if mode == 'delete':
|
||||
if len(delete_object_ids) >0:
|
||||
requests = [DeleteMany({ "_id": { "$in": delete_object_ids } })]
|
||||
result = collection.bulk_write(requests)
|
||||
# print(result.bulk_api_result)
|
||||
# logger.info(f'{result.bulk_api_result}')
|
||||
logger.info(f'Database: inserted_count {result.inserted_count} upserted_count {result.upserted_count} matched_count {result.matched_count} modified_count {result.modified_count} deleted_count {result.deleted_count}')
|
||||
return result
|
||||
else:
|
||||
logger.info('Database: no operations processed - no duplicates matched')
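# Illustrative sketch (not part of the original script): the core of the duplicate search above
# is a $group on the schema's key/value pairs followed by a count filter. Stripped of the schema
# masking, the pipeline for a known set of key fields reduces to:
def _example_duplicate_groups(collection, key_fields):
    pipeline = [
        {"$group": {"_id": {k: f'${k}' for k in key_fields},  # group documents sharing these field values
                    "count": {"$sum": 1}}},
        {"$match": {"count": {"$gte": 2}}},                   # keep only groups that contain duplicates
    ]
    return list(collection.aggregate(pipeline))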
|
||||
|
||||
# def merge_to_collection(src_collection, dst_collection, ignore_src_schema_keys = ['_id'], exclude_dst_schema_keys = ['_id'], additonal_dst_schema_keypairs = [], match_src_schema_keys = []):
|
||||
def merge_to_collection(**kwargs):
|
||||
## check params
|
||||
required_args = ['src_collection', 'dst_collection']
|
||||
missing_args = [arg for arg in required_args if arg not in kwargs.keys()]
|
||||
if len(missing_args) >0:
|
||||
print(f'{merge_to_collection.__name__} missing arguments {missing_args}')
|
||||
quit()
|
||||
src_collection = kwargs['src_collection']
|
||||
dst_collection = kwargs['dst_collection']
|
||||
ignore_src_schema_keys = kwargs['ignore_src_schema_keys'] if 'ignore_src_schema_keys' in kwargs.keys() else ['_id']
|
||||
exclude_dst_schema_keys = kwargs['exclude_dst_schema_keys'] if 'exclude_dst_schema_keys' in kwargs.keys() else ['_id']
|
||||
additonal_dst_schema_keypairs = kwargs['additonal_dst_schema_keypairs'] if 'additonal_dst_schema_keypairs' in kwargs.keys() else []
|
||||
match_src_schema_keys = kwargs['match_src_schema_keys'] if 'match_src_schema_keys' in kwargs.keys() else []
|
||||
logger_name = kwargs['logger_name'] if 'logger_name' in kwargs.keys() else 'main'
|
||||
logger = logging.getLogger(logger_name)
|
||||
|
||||
## what does this func do?
|
||||
#
|
||||
# merge new or update existing documents from a source collection to a destination collection
|
||||
# can omit document keys from the destination document search
|
||||
# can add/remove key pair values to the new/existing document destined for the destination collection
|
||||
# caution
|
||||
# the dst collection match can be too broad, returning many documents, however the update is targeted to a single document (UpdateOne vs UpdateMany)
|
||||
# there is not much logic here and the function relies on prior deduplication
|
||||
#
|
||||
# find documents in src_collection matching ANY or match_src_schema_keys if present
|
||||
# for each src document remove keys in ignore_src_schema_keys, this will be the match query for the dst_collection (at a minimum this must contain '_id' but may contain more, such as the dynamic 'crypto_map_interface' key)
|
||||
# match documents in the dst_collection (this should return 0 or 1 matching documents as the 'device' and 'device['temp']' collections have been deduplicated)
|
||||
# merge the dst document with the src document, effectively updating the dst document - if no dst document is present in the dst collection a new dst document is created
|
||||
# remove keys from the new dst document listed in exclude_dst_schema_keys (at a minimum this should contain key '_id' to allow the dst collection to write its own index for the document)
|
||||
# add additional keys to the new dst document (this is optional, in this case the addition of 'last_modified' date stamp is used to identify current tunnel records)
|
||||
# update/upsert the new dst document
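# Illustrative sketch (not part of the original script): the merge step described above is plain
# dict merging - the src document's keys overwrite the dst document's keys, then the excluded
# keys (at a minimum '_id') are dropped before the upsert. On bare dicts:
def _example_merge_documents(dst_document, src_document, exclude_keys=('_id',)):
    merged = {**dst_document, **src_document}  # right-hand side wins on key collisions
    for key in exclude_keys:
        merged.pop(key, None)                  # let the destination collection assign its own _id
    return merged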
|
||||
|
||||
requests = []
|
||||
if len(match_src_schema_keys) >0:
|
||||
query = {k:{ "$exists": True } for k in match_src_schema_keys}
|
||||
src_result = src_collection.find(query)
|
||||
else:
|
||||
src_result = src_collection.find()
|
||||
for r in src_result:
|
||||
src_document = r
|
||||
filter = {k:v for k, v in r.items() if k not in ignore_src_schema_keys} # example: we don't want crypto_map_interface when matching documents to merge into
|
||||
dst_match_count = dst_collection.count_documents(filter)
|
||||
if dst_match_count >0:
|
||||
dst_result = dst_collection.find(filter)
|
||||
for dst_match in dst_result:
|
||||
dst_id = dst_match['_id']
|
||||
# merge src document fields with dst document, overwrite dst key/value pairs
|
||||
dst_document = {**dst_match, **src_document} # z = {**x, **y} - merge dicts, y replaces x
|
||||
for exclude in exclude_dst_schema_keys:
|
||||
if exclude in dst_document:
|
||||
dst_document.pop(exclude)
|
||||
if len(additonal_dst_schema_keypairs) >0:
|
||||
for kvp in additonal_dst_schema_keypairs:
|
||||
dst_document.update(kvp)
|
||||
requests.append(UpdateOne({'_id': dst_id}, {'$set': dst_document}, upsert=True))
|
||||
else:
|
||||
dst_document = src_document
|
||||
for exclude in exclude_dst_schema_keys:
|
||||
if exclude in dst_document:
|
||||
dst_document.pop(exclude)
|
||||
if len(additonal_dst_schema_keypairs) >0:
|
||||
for kvp in additonal_dst_schema_keypairs:
|
||||
dst_document.update(kvp)
|
||||
requests.append(InsertOne(dst_document))
|
||||
|
||||
if len(requests) >0:
|
||||
result = dst_collection.bulk_write(requests)
|
||||
# print(result.bulk_api_result)
|
||||
# logger.info(f'{result.bulk_api_result}')
|
||||
logger.info(f'Database: inserted_count {result.inserted_count} upserted_count {result.upserted_count} matched_count {result.matched_count} modified_count {result.modified_count} deleted_count {result.deleted_count}')
|
||||
return result
|
||||
else:
|
||||
logger.info('Database: no operations processed - no upsert or insert requests')
|
||||
|
||||
## debug function
|
||||
def diff_collection(src_collection, dst_collection, mode = 'stat', ignore_src_schema_keys = ['_id'], match_src_schema_keys = []):
|
||||
print(f'{src_collection.full_name} documents merged into {dst_collection.full_name}')
|
||||
# init
|
||||
src_doc_count = src_collection.count_documents({})
|
||||
dst_doc_match_count = 0
|
||||
dst_doc_unmatch_count = 0
|
||||
unmatched_documents = []
|
||||
# get all documents in the src_collection
|
||||
src_result = src_collection.find()
|
||||
find_keys = src_result.clone()
|
||||
# get all keys in a collection to build query mask
|
||||
src_collection_keys = []
|
||||
for d in find_keys:
|
||||
for key in d.keys():
|
||||
src_collection_keys.append(key)
|
||||
src_collection_keys = [k for k in list(set(src_collection_keys)) if k not in ignore_src_schema_keys]
|
||||
for r in src_result:
|
||||
# mangle src_document for use as an exact match query on the dst_collection
|
||||
query = {k:v for k, v in r.items() if k not in ignore_src_schema_keys}
|
||||
mask = {k:{ "$exists": False } for k in src_collection_keys if k not in query.keys()}
|
||||
query.update(mask)
|
||||
# search dst_collection for the src_document
|
||||
dst_match_count = dst_collection.count_documents(query)
|
||||
dst_doc_match_count += dst_match_count # this isn't accurate owing to subset records in the src_collection
|
||||
if dst_match_count == 0:
|
||||
dst_doc_unmatch_count += 1
|
||||
if mode == 'show':
|
||||
unmatched_documents.append(r)
|
||||
if len(unmatched_documents) >0:
|
||||
print('documents did not make it to the dst_collection')
|
||||
for d in unmatched_documents:
|
||||
missing_keys = [k for k in match_src_schema_keys if k not in d.keys()]
|
||||
if len(missing_keys) == 0:
|
||||
print('error detected, document contains required keys')
|
||||
else:
|
||||
print(f'\ndocument missing required keys {missing_keys}')
|
||||
print(f'{dumps(d)}')
|
||||
print()
|
||||
print(f'src_doc_count {src_doc_count}')
|
||||
print(f'dst_doc_match_count {dst_doc_match_count}')
|
||||
print(f'dst_doc_unmatch_count {dst_doc_unmatch_count}')
|
||||
if (dst_doc_match_count + dst_doc_unmatch_count) != src_doc_count:
|
||||
print("error detected, set mode = 'show' to highlight rogue documents")
|
||||
print('\n')
|
||||
|
||||
def spoke_lookup(**kwargs):
|
||||
## check params
|
||||
required_args = ['read_device_collection', 'read_ip_collection', 'write_collection']
|
||||
missing_args = [arg for arg in required_args if arg not in kwargs.keys()]
|
||||
if len(missing_args) >0:
|
||||
print(f'{spoke_lookup.__name__} missing arguments {missing_args}')
|
||||
quit()
|
||||
read_device_collection = kwargs['read_device_collection']
|
||||
read_ip_collection = kwargs['read_ip_collection']
|
||||
write_collection = kwargs['write_collection']
|
||||
logger_name = kwargs['logger_name'] if 'logger_name' in kwargs.keys() else 'main'
|
||||
logger = logging.getLogger(logger_name)
|
||||
|
||||
## init
|
||||
# +'peer_vpn_id', +'nhrp_nexthop' = interface lookup using 'nhrp_nexthop' ip - get +'DeviceRecNum', +'DeviceName' (high confidence, nhrp map has an immutable 1:1 mapping)
|
||||
# +'peer_vpn_id', -'nhrp_nexthop' = interface lookup using 'peer_vpn_id' ip - get +'DeviceRecNum', +'DeviceName' (fairly high confidence, engineer built tunnel and elected to match the handshake attribute 'peer_vpn_id' with the spoke device address)
|
||||
# +'peer_vpn_id', -'nhrp_nexthop' = device lookup using 'peer_vpn_id' name - get +'DeviceRecNum' (fairly high confidence, engineer built tunnel and elected to match the handshake attribute 'peer_vpn_id' with the spoke device name)
|
||||
# +'peer_ip', -ips_not_in_other_lists = interface lookup using 'peer_ip' ip - get +'DeviceRecNum', +'DeviceName' (less confidence (catch all), finds half negotiated tunnels and tunnels with public ips thus not likely in mongo(remedy))
|
||||
# +'peer_ip', -'DeviceName' = interface lookup using 'peer_ip' - get +'DeviceRecNum', +'DeviceName' (any spoke not matched by previous queries, there is not high confidence in this method thus it must be a dependent(-'DeviceName') and final query)
|
||||
|
||||
## get spoke attributes by ('nhrp_nexthop' ip) / ('peer_vpn_id' ip) / ('peer_vpn_id' name) for lookup in NMS mongo(remedy) device tickets
|
||||
|
||||
# search nhrp ips
|
||||
nhrp_query = {'peer_vpn_id':{ "$exists": True }, 'nhrp_nexthop':{ "$exists": True }}
|
||||
nhrp_projection = {"_id": 0, "nhrp_nexthop": 1}
|
||||
nhrp_result = write_collection.find(nhrp_query, nhrp_projection)
|
||||
nhrp_ips = [ip['nhrp_nexthop'] for ip in nhrp_result]
|
||||
|
||||
# search vpnid ips and names
|
||||
vpnid_query = {'peer_vpn_id':{ "$exists": True }, 'nhrp_nexthop':{ "$exists": False }}
|
||||
vpnid_projection = {"_id": 0, "peer_vpn_id": 1}
|
||||
vpnid_result = write_collection.find(vpnid_query, vpnid_projection)
|
||||
vpnid_items = [itm['peer_vpn_id'] for itm in vpnid_result]
|
||||
vpnid_ips = [ip for ip in vpnid_items if valid_ipv4(ip)]
|
||||
# remove suffix '-ps' from 'peer_vpn_id' for mongo 'DeviceName' lookup (bml0990-ps -> bml0990)
|
||||
vpnid_names = [re.sub('(?i)-ps$', '', name) for name in vpnid_items if not valid_ipv4(name) and name not in ['none']]
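# Illustrative sketch (not part of the original script): the two comprehensions above split the
# collected 'peer_vpn_id' values into IP addresses and device names (with any '-ps' suffix
# stripped). The sample values in the usage comment are hypothetical.
def _example_split_vpn_ids(vpnid_items):
    import re
    from netaddr import valid_ipv4
    ips = [v for v in vpnid_items if valid_ipv4(v)]
    names = [re.sub('(?i)-ps$', '', v) for v in vpnid_items if not valid_ipv4(v) and v not in ['none']]
    return ips, names
# e.g. _example_split_vpn_ids(['10.227.112.50', 'bml0990-ps', 'none']) -> (['10.227.112.50'], ['bml0990'])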
|
||||
|
||||
# search remaining peer ips (likely public and not in mongo(remedy))
|
||||
all_peerip_ips = [d for d in write_collection.distinct('peer_ip')]
|
||||
negate_peerip_ips = list(set(nhrp_ips + vpnid_ips))
|
||||
peer_ips = [ip for ip in all_peerip_ips if ip not in negate_peerip_ips]
|
||||
|
||||
## search NMS mongo(remedy) for interface/device tickets using the spoke attributes
|
||||
|
||||
# search interface collection, match interface ip to 'nhrp_nexthop' ip
|
||||
nhrp_ips_query = {"raw.CHR_IPAddress": {"$in": nhrp_ips}}
|
||||
nhrp_ips_projection = {"_id": 0, "raw.DeviceRecNum": 1, "raw.CHR_DeviceName": 1, "raw.CHR_IPAddress": 1}
|
||||
nhrp_ips_result = read_ip_collection.find(nhrp_ips_query, nhrp_ips_projection)
|
||||
|
||||
# search interface collection, match interface ip to 'peer_vpn_id' ip
|
||||
vpnid_ips_query = {"raw.CHR_IPAddress": {"$in": vpnid_ips}}
|
||||
vpnid_ips_projection = {"_id": 0, "raw.DeviceRecNum": 1, "raw.CHR_DeviceName": 1, "raw.CHR_IPAddress": 1}
|
||||
vpnid_ips_result = read_ip_collection.find(vpnid_ips_query, vpnid_ips_projection)
|
||||
|
||||
# search device collection, match device name to 'peer_vpn_id' name
|
||||
vpnid_names_query = {"raw.DeviceName": {"$in": vpnid_names}}
|
||||
vpnid_names_projection = {"_id": 0, "raw.DeviceRecNum": 1, "raw.DeviceName": 1}
|
||||
vpnid_names_result = read_device_collection.find(vpnid_names_query, vpnid_names_projection)
|
||||
|
||||
# search interface collection, match interface ip to 'peer_ip' ip
|
||||
peer_ips_query = {"raw.CHR_IPAddress": {"$in": peer_ips}}
|
||||
peer_ips_projection = {"_id": 0, "raw.DeviceRecNum": 1, "raw.CHR_DeviceName": 1, "raw.CHR_IPAddress": 1}
|
||||
peer_ips_result = read_ip_collection.find(peer_ips_query, peer_ips_projection)
|
||||
|
||||
## db requests builder
|
||||
def db_requests(list, mode):
|
||||
requests = []
|
||||
for rec in list:
|
||||
if mode == 'nhrp_ip':
|
||||
filter = {'nhrp_nexthop': rec['raw']['CHR_IPAddress']}
|
||||
record = {'DeviceName': rec['raw']['CHR_DeviceName'], 'DeviceRecNum': rec['raw']['DeviceRecNum']}
|
||||
elif mode == 'vpn_ip':
|
||||
filter = {'peer_vpn_id': rec['raw']['CHR_IPAddress']}
|
||||
record = {'DeviceName': rec['raw']['CHR_DeviceName'], 'DeviceRecNum': rec['raw']['DeviceRecNum']}
|
||||
elif mode == 'vpn_id':
|
||||
filter = {'peer_vpn_id': re.compile(f'(?i).*({rec["raw"]["DeviceName"]}).*')}
|
||||
record = {'DeviceName': rec["raw"]["DeviceName"], 'DeviceRecNum': rec['raw']['DeviceRecNum']}
|
||||
elif mode == 'peer_ip':
|
||||
filter = {'peer_ip': rec['raw']['CHR_IPAddress']}
|
||||
record = {'DeviceName': rec['raw']['CHR_DeviceName'], 'DeviceRecNum': rec['raw']['DeviceRecNum']}
|
||||
if not record['DeviceRecNum']:
|
||||
devname_query = {"raw.DeviceName": record['DeviceName']}
|
||||
devname_projection = {"_id": 0, "raw.DeviceRecNum": 1}
|
||||
devname_result = read_device_collection.find_one(devname_query, devname_projection)
|
||||
try:
|
||||
record['DeviceRecNum'] = devname_result['raw']['DeviceRecNum']
|
||||
except:
|
||||
pass
|
||||
remove_keys = []
|
||||
for k, v in record.items():
|
||||
if not v:
|
||||
remove_keys.append(k)
|
||||
for i in remove_keys:
|
||||
record.pop(i)
|
||||
if len(record) >0:
|
||||
requests.append(UpdateMany(filter, {'$set': record}, upsert=False))
|
||||
return requests
|
||||
|
||||
nhrp_ips_requests = db_requests(nhrp_ips_result, 'nhrp_ip')
|
||||
vpnid_ips_requests = db_requests(vpnid_ips_result, 'vpn_ip')
|
||||
vpnid_names_requests = db_requests(vpnid_names_result, 'vpn_id')
|
||||
peer_ips_requests = db_requests(peer_ips_result, 'peer_ip')
|
||||
requests = nhrp_ips_requests + vpnid_ips_requests + vpnid_names_requests + peer_ips_requests
|
||||
|
||||
# for i in requests:
|
||||
# logger.info(f'Request: {i}')
|
||||
|
||||
## write nhrp / vpn-id requests
|
||||
if len(requests) >0:
|
||||
result = write_collection.bulk_write(requests)
|
||||
logger.info(f'Database: inserted_count {result.inserted_count} upserted_count {result.upserted_count} matched_count {result.matched_count} modified_count {result.modified_count} deleted_count {result.deleted_count}')
|
||||
|
||||
## catch all, seldom seen in a run, may catch devices by 'peer_ip' not matched by the previous queries, likely many public ips matched not in mongo(remedy)
|
||||
peer_query = {'peer_ip':{ "$exists": True }, 'DeviceName': { "$exists": False}}
|
||||
peer_projection = {"_id": 0, "peer_ip": 1}
|
||||
peer_result = write_collection.find(peer_query, peer_projection)
|
||||
peer_ips = [ip['peer_ip'] for ip in peer_result]
|
||||
if len(peer_ips) >0:
|
||||
## search NMS mongo(remedy) for interface collection using the spoke attribute 'peer_ip'
|
||||
peer_ips_query = {"raw.CHR_IPAddress": {"$in": peer_ips}}
|
||||
peer_ips_projection = {"_id": 0, "raw.DeviceRecNum": 1, "raw.CHR_DeviceName": 1, "raw.CHR_IPAddress": 1}
|
||||
peer_ips_result = read_ip_collection.find(peer_ips_query, peer_ips_projection)
|
||||
if peer_ips_result.count() >0:
|
||||
peer_ips_requests = db_requests(peer_ips_result, 'peer_ip')
|
||||
if len(peer_ips_requests) >0:
|
||||
result = write_collection.bulk_write(peer_ips_requests)  # write the catch-all requests (not the earlier batch)
|
||||
logger.info("CatchAll: device ip not matched by 'nhrp_nexthop' / 'peer_vpn_id' and missed by 'peer_ip'")
|
||||
logger.info(f'Database: inserted_count {result.inserted_count} upserted_count {result.upserted_count} matched_count {result.matched_count} modified_count {result.modified_count} deleted_count {result.deleted_count}')
|
||||
|
||||
def device_ticket_lookup(**kwargs):
|
||||
## check params
|
||||
required_args = ['read_device_collection', 'write_collection']
|
||||
missing_args = [arg for arg in required_args if arg not in kwargs.keys()]
|
||||
if len(missing_args) >0:
|
||||
print(f'{device_ticket_lookup.__name__} missing arguments {missing_args}')
|
||||
quit()
|
||||
read_device_collection = kwargs['read_device_collection']
|
||||
write_collection = kwargs['write_collection']
|
||||
logger_name = kwargs['logger_name'] if 'logger_name' in kwargs.keys() else 'main'
|
||||
logger = logging.getLogger(logger_name)
|
||||
|
||||
## init
|
||||
# this function depends on the 'DeviceRecNum' field populated by the 'spoke_lookup' function
|
||||
|
||||
## get all 'DeviceRecNum' in device 'temp' collection, lookup the 'DeviceRecNum' in NMS mongo(remedy) to return device attributes
|
||||
device_recnums = [d for d in write_collection.distinct('DeviceRecNum')]
|
||||
device_hardware_query = {'raw.DeviceRecNum': {'$in': device_recnums}}
|
||||
device_hardware_projection = {"_id": 0, 'raw.DeviceRecNum': 1, 'raw.Manufacturer': 1, 'raw.Model': 1}
|
||||
device_hardware = read_device_collection.find(device_hardware_query, device_hardware_projection)
|
||||
|
||||
## update device records in device 'temp' collection
|
||||
requests = []
|
||||
for rec in device_hardware:
|
||||
filter = {'DeviceRecNum': rec['raw']['DeviceRecNum']}
|
||||
record = {'Manufacturer': rec['raw']['Manufacturer'], 'Model': rec['raw']['Model']}
|
||||
remove_keys = []
|
||||
for k, v in record.items():
|
||||
if not v:
|
||||
remove_keys.append(k)
|
||||
for i in remove_keys:
|
||||
record.pop(i)
|
||||
if len(record) >0:
|
||||
requests.append(UpdateMany(filter, {'$set': record}, upsert=False))
|
||||
|
||||
# for i in requests:
|
||||
# logger.info(f'Request: {i}')
|
||||
|
||||
## write requests
|
||||
if len(requests) >0:
|
||||
result = write_collection.bulk_write(requests)
|
||||
logger.info(f'Database: inserted_count {result.inserted_count} upserted_count {result.upserted_count} matched_count {result.matched_count} modified_count {result.modified_count} deleted_count {result.deleted_count}')
|
||||
|
||||
# ## original deduplicate_collection, kept for reference; finds all distinct document schemas and searches for duplicates on exact schema keys and values. the newer version also removes documents with a specific subset of keys where a larger document exists (idle tunnels)
|
||||
# def deduplicate_collection(collection, mode='list', ignore_schema_keys=['_id']):
|
||||
# #### dedupe documents with exactly matching schemas
|
||||
# ## get all unique document schemas in collection
|
||||
# # this pulls all documents in collection, inefficient, with later versions of mongo this can be achieved within the query language
|
||||
# # https://stackoverflow.com/questions/2298870/get-names-of-all-keys-in-the-collection
|
||||
# delete_object_ids = []
|
||||
# document_schemas = []
|
||||
# result = collection.find()
|
||||
# for i in result:
|
||||
# keys = [k for k in i.keys() if k not in ignore_schema_keys]
|
||||
# if keys not in document_schemas:
|
||||
# document_schemas.append(keys)
|
||||
# # print(f'available schemas\n{document_schemas}')
|
||||
|
||||
# ## get all the schema keys in collection, this will be used to mask keys
|
||||
# all_schema_keys = list(set(sum(document_schemas, [])))
|
||||
# # print(f'all schema keys\n{all_schema_keys}')
|
||||
|
||||
# ## find duplicate documents per schema
|
||||
# for schema in document_schemas:
|
||||
# ## get all _id schema keys used in aggregate query to match duplicate documents
|
||||
# id_keys = {k:f'${k}' for k in schema}
|
||||
# include_keys = {k:{ "$exists": True } for k in schema}
|
||||
|
||||
# ## find all keys in all_schema_keys not in this document schema
|
||||
# exclude_keys_list = list(set(all_schema_keys) - set(schema))
|
||||
# # exclude_keys_list.append('test_key')
|
||||
# exclude_keys = {k:{ "$exists": False} for k in exclude_keys_list}
|
||||
|
||||
# ## merge include keys {'$exists': True} + exclude keys{'$exists': False} for the first $match filter to ensure only records with the exact same keys as schema are matched
|
||||
# include_keys.update(exclude_keys)
|
||||
# # print(f'document match query for this schema\n{include_keys}')
|
||||
|
||||
# ## return the content of duplicate documents
|
||||
# # debug
|
||||
# # print('\n')
|
||||
# # print(f'document schema\n{schema}')
|
||||
# # print(f'mask documents with exact keys\n{include_keys}')
|
||||
# # print(f'search documents with these keys by matching value (should match schema)\n{id_keys}')
|
||||
# duplicates = collection.aggregate([
|
||||
# {
|
||||
# "$match": include_keys
|
||||
# },
|
||||
# { "$group": {
|
||||
# "_id": id_keys,
|
||||
# "count": {"$sum": 1}
|
||||
# }
|
||||
# },
|
||||
# { "$match": {
|
||||
# "count": {"$gte": 2}
|
||||
# }
|
||||
# },
|
||||
# { "$sort": {
|
||||
# "count": -1
|
||||
# }
|
||||
# }
|
||||
# ])
|
||||
# # print(dumps(duplicates, indent=4))
|
||||
|
||||
# ## loop duplicate document content, aggregate search using schema keys mask and get document object_ids for deletion
|
||||
# for duplicate_document in duplicates:
|
||||
# query = {k:v for k, v in duplicate_document['_id'].items()}
|
||||
# # print(query)
|
||||
# filtered_result = collection.aggregate([
|
||||
# {
|
||||
# "$match": include_keys
|
||||
# },
|
||||
# {
|
||||
# "$match": query
|
||||
# },
|
||||
# ])
|
||||
# object_ids = [r['_id'] for r in filtered_result]
|
||||
|
||||
# ## remove the first duplicate document_id, this will be the only remaining document
|
||||
# object_ids.pop(0)
|
||||
# # print(object_ids)
|
||||
# delete_object_ids.extend(object_ids)
|
||||
|
||||
# ## get unique document_ids
|
||||
# delete_object_ids = list(set(delete_object_ids))
|
||||
|
||||
# ## list object_ids of duplicate records
|
||||
# if mode == 'list':
|
||||
# print("mode = 'list'\n")
|
||||
# print(f'object_ids to delete\n{delete_object_ids}')
|
||||
|
||||
# ## show duplicate records
|
||||
# if mode == 'show':
|
||||
# print("mode = 'show'\n")
|
||||
# query = { "_id" : { "$in" : delete_object_ids } }
|
||||
# result = collection.find(query)
|
||||
# print('documents to delete')
|
||||
# for r in result:
|
||||
# print(r)
|
||||
# print(f'\ndocument ids to delete\n{delete_object_ids}')
|
||||
|
||||
# ## remove duplicate documents
|
||||
# if mode == 'delete':
|
||||
# if len(delete_object_ids) >0:
|
||||
# requests = [DeleteMany({ "_id": { "$in": delete_object_ids } })]
|
||||
# result = collection.bulk_write(requests)
|
||||
# return result
|
||||
|
||||
|
||||
# ## original merge_to_collection, kept for reference: no schema key match to qualify documents to merge
|
||||
# def merge_to_collection(src_collection, dst_collection, ignore_schema_keys = ['_id'], exclude_schema_keys = ['_id']):
|
||||
# last_modified = datetime.datetime.now(tz=datetime.timezone.utc)
|
||||
# requests = []
|
||||
# src_result = src_collection.find()
|
||||
# for r in src_result:
|
||||
# src_document = r
|
||||
# filter = {k:v for k, v in r.items() if k not in ignore_schema_keys} # don't want crypto_map_interface
|
||||
# dst_match_count = dst_collection.count_documents(filter)
|
||||
# if dst_match_count >0:
|
||||
# dst_result = dst_collection.find(filter)
|
||||
# for dst_match in dst_result:
|
||||
# dst_id = dst_match['_id']
|
||||
# # merge src document fields with dst document, overwrite dst key/value pairs
|
||||
# dst_document = {**dst_match, **src_document} # z = {**x, **y} y replaces x
|
||||
# for exclude in exclude_schema_keys:
|
||||
# if exclude in dst_document:
|
||||
# dst_document.pop(exclude)
|
||||
# dst_document.update({'last_modified': last_modified})
|
||||
# requests.append(UpdateOne({'_id': dst_id}, {'$set': dst_document}, upsert=True))
|
||||
# else:
|
||||
# dst_document = src_document
|
||||
# for exclude in exclude_schema_keys:
|
||||
# if exclude in dst_document:
|
||||
# dst_document.pop(exclude)
|
||||
# dst_document.update({'last_modified': last_modified})
|
||||
# requests.append(InsertOne(dst_document))
|
||||
|
||||
# if len(requests) >0:
|
||||
# dst_result = dst_collection.bulk_write(requests)
|
||||
# print(dst_result.bulk_api_result)
|
||||
# return dst_result
|
||||
|
|
@ -0,0 +1,223 @@
|
|||
from dotenv import load_dotenv
|
||||
import os
|
||||
from scrapli import Scrapli
|
||||
from scrapli.driver import GenericDriver
|
||||
# from scrapli.driver.core import IOSXEDriver
|
||||
import logging
|
||||
from logging import handlers
|
||||
from bson.json_util import dumps, loads
|
||||
|
||||
# accept a dict of commands and associated functions to parse command output; either run each command directly and pass the output to its parsing function, or pass the connection dict ('compound') to the parsing function so it can run multiple commands itself
|
||||
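# hedged sketch of the expected 'commands' shape (the key names and parser functions below are illustrative assumptions, not the real module names):
# commands = {
#     'cisco_isakmp_sa':  {'command': 'show crypto isakmp sa detail', 'func_ref': parse_isakmp_sa},
#     'cisco_ipsec_sa':   {'command': 'show crypto ipsec sa',         'func_ref': parse_ipsec_sa},
#     'cisco_spoke_info': {'command': 'compound',                     'func_ref': scrape_spokes},  # 'compound' entries receive the connection dict and run their own follow-up commands
# }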
def device_commands(collection, object_id, commands):
|
||||
## init
|
||||
load_dotenv()
|
||||
sshuser = os.environ.get('SSH_USER')
|
||||
sshpass = os.environ.get('SSH_PASSWORD')
|
||||
query = {"_id": object_id}
|
||||
result = collection.find(query)
|
||||
device_record = result[0]
|
||||
ip = device_record['IPAddress']
|
||||
device_name = device_record['DeviceName']
|
||||
scrapli_platform = device_record['scrapli_platform']
|
||||
|
||||
## log
|
||||
logger = logging.getLogger(device_name)
|
||||
logger.info(f'Send commands to device - {device_name} - {ip} - {scrapli_platform}')
|
||||
|
||||
## send commands
|
||||
# timeout = 60 # default
|
||||
timeout = 120 # australia
|
||||
connection = {
|
||||
"host": ip,
|
||||
"auth_username": sshuser,
|
||||
"auth_password": sshpass,
|
||||
"auth_secondary": sshpass,
|
||||
"auth_strict_key": False,
|
||||
"ssh_config_file": "/etc/ssh/ssh_config",
|
||||
"platform": scrapli_platform,
|
||||
"timeout_socket": timeout,
|
||||
"timeout_transport": timeout,
|
||||
"timeout_ops": timeout,
|
||||
}
|
||||
device_commands = commands.copy()
|
||||
scrapli_commands_keys = [c for c in device_commands.keys() if not device_commands[c]['command'] == 'compound']
|
||||
command = None  # defined up front so the except handler below can still reference it if the connection itself fails before any command is sent
try:
|
||||
with Scrapli(**connection) as conn:
|
||||
# send commands over a single socket to avoid session-limits/IDS
|
||||
for k in scrapli_commands_keys:
|
||||
command = device_commands[k]['command']
|
||||
# print(f"sending command '{command}' for {k}")
|
||||
logger.info(f"Sending command '{command}' for {k}")
|
||||
output = conn.send_command(command)
|
||||
device_commands[k].update({'output': output})
|
||||
except Exception as e:
|
||||
# print(f'exception_type: {type(e).__name__}')
|
||||
logger.error(f"Exception occurred: {type(e).__name__}", exc_info=True)
|
||||
return f'{device_name} error: scrapli failure {command}'
|
||||
|
||||
## run scrape processors
|
||||
for c in device_commands.keys():
|
||||
func = commands[c]['func_ref']
|
||||
command_output = commands[c]
|
||||
func(collection, command_output, device_record, connection)
|
||||
|
||||
## success end
|
||||
return 'processed'
|
||||
|
||||
# accept a dict of commands and associated functions to parse command output; either run each command directly and pass the output to its parsing function, or pass the connection dict ('compound') to the parsing function so it can run multiple commands itself
|
||||
def device_commandsAA(collection, object_ids, commands):
|
||||
# logger = logging.getLogger('main')
|
||||
load_dotenv()
|
||||
# error_encountered = False
|
||||
sshuser = os.environ.get('SSH_USER')
|
||||
sshpass = os.environ.get('SSH_PASSWORD')
|
||||
query = { "_id" : { "$in" : object_ids } }
|
||||
result = collection.find(query)
|
||||
for device_record in result:
|
||||
ip = device_record['IPAddress']
|
||||
device_name = device_record['DeviceName']
|
||||
scrapli_platform = device_record['scrapli_platform']
|
||||
|
||||
# print(f"\nquery device - {device_name} - {ip} - {scrapli_platform}")
|
||||
local_logger = logging.getLogger(device_name)
|
||||
local_logger.info(f'Send commands to device - {device_name} - {ip} - {scrapli_platform}')
|
||||
|
||||
# timeout = 60 # default
|
||||
# timeout = 120 # australia
|
||||
timeout = 180 # RRI in australia/malaysia
|
||||
connection = {
|
||||
"host": ip,
|
||||
"auth_username": sshuser,
|
||||
"auth_password": sshpass,
|
||||
"auth_secondary": sshpass,
|
||||
"auth_strict_key": False,
|
||||
"ssh_config_file": "/etc/ssh/ssh_config",
|
||||
"platform": scrapli_platform,
|
||||
"timeout_socket": timeout,
|
||||
"timeout_transport": timeout,
|
||||
"timeout_ops": timeout,
|
||||
}
|
||||
device_commands = commands.copy()
|
||||
scrapli_commands_keys = [c for c in device_commands.keys() if not device_commands[c]['command'] == 'compound']
|
||||
command = None  # defined up front so the except handler below can still reference it if the connection itself fails before any command is sent
try:
|
||||
with Scrapli(**connection) as conn:
|
||||
# send commands over a single socket to avoid session-limits/IDS
|
||||
for k in scrapli_commands_keys:
|
||||
command = device_commands[k]['command']
|
||||
# print(f"sending command '{command}' for {k}")
|
||||
local_logger.info(f"Sending command '{command}' for {k}")
|
||||
output = conn.send_command(command)
|
||||
device_commands[k].update({'output': output})
|
||||
except Exception as e:
|
||||
# print(f'exception_type: {type(e).__name__}')
|
||||
# return f'{device_name} error: scrapli failure'
|
||||
local_logger.error(f"Exception occurred: {type(e).__name__}", exc_info=True)
|
||||
return f'{device_name} error: scrapli failure {command}'
|
||||
# update all commands that didn't run with an error status (should not get to this point)
|
||||
# for k in scrapli_commands_keys:
|
||||
# if 'output' not in device_commands[k]:
|
||||
# device_commands[k]['output'] = 'error'
|
||||
# error_encountered = True
|
||||
# if error_encountered:
|
||||
# return f'{device_name} error: scrapli failure'
|
||||
# run scrape processors
|
||||
for c in device_commands.keys():
|
||||
func = commands[c]['func_ref']
|
||||
command_output = commands[c]
|
||||
func(collection, command_output, device_record, connection)
|
||||
return 'processed'
|
||||
|
||||
def identify_scrapli_platform(osinfo):
|
||||
# parse output of 'generic' device type command 'show version' for cisco/junos, will use this to find netmiko ConnectHandler 'device_type' parameter / scrapli_platform
|
||||
# Cisco IOS Software, C3900 Software (C3900-UNIVERSALK9-M), Version 15.4(3)M2, RELEASE SOFTWARE (fc2) # first line ios, scrapli platform cisco_iosxe
|
||||
# Cisco IOS XE Software, Version 03.16.06.S - Extended Support Release # first line iosxe, scrapli platform cisco_iosxe
|
||||
# JUNOS Software Release [12.1X44-D30.4] # 2nd line junos, scrapli platform juniper_junos
|
||||
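# hedged usage example (the banner below is a truncated sample based on the comment above, not captured output):
# sample = 'Cisco IOS XE Software, Version 03.16.06.S - Extended Support Release\n...'
# identify_scrapli_platform(sample)  # -> {'scrapli_platform': 'cisco_iosxe', 'vendor': 'cisco'}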
vendors = ['cisco', 'junos']
|
||||
cisco = [{'os': 'Cisco IOS Software', 'scrapli_platform': 'cisco_iosxe', 'line': 0},
|
||||
{'os': 'Cisco IOS XE Software', 'scrapli_platform': 'cisco_iosxe', 'line': 0}]
|
||||
junos = [{'os': 'JUNOS Software Release', 'scrapli_platform': 'juniper_junos', 'line': 2}]
|
||||
for v in vendors:
|
||||
if v in osinfo.lower():
|
||||
vendor = v
|
||||
if not 'vendor' in locals():
|
||||
vendor = 'unknown'
|
||||
match vendor:
|
||||
case "cisco":
|
||||
# known cisco os
|
||||
for i in cisco:
|
||||
scrape_line = osinfo.partition('\n')[i['line']]
|
||||
if i['os'] in scrape_line:
|
||||
scrapli_platform = i['scrapli_platform']
|
||||
record = {'scrapli_platform': scrapli_platform, 'vendor': vendor }
|
||||
# print(record)
|
||||
# unknown cisco os
|
||||
if not 'scrapli_platform' in locals():
|
||||
scrapli_platform = 'generic'
|
||||
record = {'scrapli_platform': scrapli_platform, 'vendor': vendor }
|
||||
# print(record)
|
||||
case "junos":
|
||||
# known junos os
|
||||
for i in junos:
|
||||
scrape_line = osinfo.partition('\n')[i['line']]
|
||||
if i['os'] in scrape_line:
|
||||
scrapli_platform = i['scrapli_platform']
|
||||
record = {'scrapli_platform': scrapli_platform, 'vendor': vendor }
|
||||
# print(record)
|
||||
# catch all
|
||||
case _:
|
||||
scrapli_platform = 'generic'
|
||||
record = {'scrapli_platform': scrapli_platform, 'vendor': vendor }
|
||||
# print(record)
|
||||
return record
|
||||
|
||||
def get_os(collection, object_ids):
|
||||
# print('\nget_os')
|
||||
logger = logging.getLogger('main')
|
||||
logger.info('Get device OS type, update device records with Scrapli driver')
|
||||
load_dotenv()
|
||||
sshuser = os.environ.get('SSH_USER')
|
||||
sshpass = os.environ.get('SSH_PASSWORD')
|
||||
query = { "_id" : { "$in" : object_ids } }
|
||||
result = collection.find(query)
|
||||
for i in result:
|
||||
name = i['DeviceName']
|
||||
ip = i['IPAddress']
|
||||
id = i['_id']
|
||||
if not ip == 'unknown':
|
||||
device = {
|
||||
"host": ip,
|
||||
"auth_username": sshuser,
|
||||
"auth_password": sshpass,
|
||||
"auth_strict_key": False,
|
||||
"ssh_config_file": "/etc/ssh/ssh_config",
|
||||
"timeout_socket": 15,
|
||||
"timeout_transport": 15,
|
||||
"timeout_ops": 15,
|
||||
}
|
||||
# use generic driver to help identify *any* OS to then select vendor specific drivers
|
||||
# may require multiple try statements for commands that identify different OS vendors, show version may not be present on a VA/Sarian?
|
||||
try:
|
||||
with GenericDriver(**device) as conn:
|
||||
# print(conn.ssh_config_file) # modern OSes disable legacy KEX/ciphers/etc - the system-wide ssh_config has been weakened to allow legacy ciphers, in future provide a local ssh_config
|
||||
conn.send_command("terminal length 0")
|
||||
response = conn.send_command("show version")
|
||||
sshresult = True
|
||||
osinfo = identify_scrapli_platform(response.result)
|
||||
except Exception as e:
|
||||
# print(f'{name} connection error, exception_type {type(e).__name__}')
|
||||
logger.error(f'{name} connection error, exception_type {type(e).__name__}')
|
||||
sshresult = False
|
||||
session_message = 'unknown'
|
||||
if type(e).__name__ == 'ScrapliAuthenticationFailed':
|
||||
session_message = 'ssh failed auth'
|
||||
if type(e).__name__ == 'ScrapliTimeout':
|
||||
session_message = 'ssh failed connection'
|
||||
if sshresult:
|
||||
osinfo.update({'session_protocol': 'ssh'})
|
||||
filter = {'_id': id}
|
||||
record = osinfo
|
||||
update = collection.update_one(filter, {'$set': record}, upsert=True)
|
||||
else:
|
||||
filter = {'_id': id}
|
||||
record = {'session_protocol': session_message}
|
||||
update = collection.update_one(filter, {'$set': record}, upsert=True)
|
||||
|
|
@ -0,0 +1,692 @@
|
|||
import pandas
|
||||
import xlsxwriter
|
||||
from xlsxwriter.utility import xl_col_to_name # xlsxwriter.utility.xl_rowcol_to_cell is in use
|
||||
import pymongo
|
||||
from bson.json_util import dumps, loads
|
||||
import logging
|
||||
import re
|
||||
import json
|
||||
|
||||
def build_spreadsheet(collection, devices_dict, outfile):
|
||||
|
||||
def workbook_sheets_order(writer, collection, devices_dict, static_sheets):
|
||||
## pre-create workbook sheets in the desired order; reordering sheets after the fact can break formula/hyperlink index relationships
|
||||
devices = [n for n in devices_dict.keys()]
|
||||
# find populated collections, write populated collection sheets in alphabetical order
|
||||
devices_with_collections = [n for n in devices if not collection[n].find_one({},{"$item": 1, '_id': 1}) == None]
|
||||
sheets = static_sheets + sorted(devices_with_collections)
|
||||
dummy_df = pandas.DataFrame()
|
||||
for s in sheets:
|
||||
dummy_df.to_excel(writer, index=False, sheet_name=s)
|
||||
del dummy_df
|
||||
return [s for s in sheets if s not in static_sheets]
|
||||
|
||||
def populate_device_sheets(collection, writer, sheets):
|
||||
# init
|
||||
logger = logging.getLogger('main')
|
||||
devices_dataframes = {}
|
||||
spokes_df = pandas.DataFrame() # 'VPN Spokes'
|
||||
|
||||
#
|
||||
## PROBLEM - phase2 profile (dmvpn not rri) not present in device collections, yet it is collected in 'cisco_transform_set' but not written - rather than a full rescrape, merge incrementally with ignore_src_schema_keys and 'p2_profile' - transform sets are already captured, so this may not be necessary
|
||||
#
|
||||
|
||||
## define which columns and the order of columns to present in sheet, remove mongodb document '_id', not all db keys are useful
|
||||
all_column_order = ["last_modified", "crypto_map", "crypto_map_template", "crypto_map_interface", "crypto_session_interface", "p1_profile", "RRI_enabled", "p1_status", "p2_status", "session_status", "local_ip", "local_port", "peer_ip", "peer_port", "peer_vpn_id", "p1_auth_type", "p1_dh_group", "p1_encr_algo", "p1_hash_algo", "p2_encr_algo", "p2_hash_algo", "pfs", "transform_sets", "ordered_transform_set", "p2_interface", "ipsec_flow", "p1_ivrf", "p2_fvrf", "protected_vrf", "p2_default_3des", "spoke_p2_default_3des", "spoke_p2_algo_preference", "DeviceName", "DeviceRecNum", "nhrp_nexthop", "Manufacturer", "Model"]
|
||||
filtered_column_order = ["last_modified", "DeviceName", "Manufacturer", "Model", "DeviceRecNum", "crypto_map", "crypto_map_template", "crypto_session_interface", "p1_profile", "RRI_enabled", "session_status", "local_ip", "local_port", "peer_ip", "peer_port", "peer_vpn_id", "nhrp_nexthop", "p1_auth_type", "p1_dh_group", "p1_encr_algo", "p1_hash_algo", "p2_encr_algo", "p2_hash_algo", "pfs", "protected_vrf", "ordered_transform_set", "ipsec_flow", "p2_default_3des", "spoke_p2_default_3des", "spoke_p2_algo_preference"]
|
||||
projection = {'_id': 0}
|
||||
for c in filtered_column_order:
|
||||
projection.update({c: 1})
|
||||
|
||||
## populate device sheets, update devices sheet
|
||||
for sheet in sheets:
|
||||
# print(f'building excel sheet {sheet}')
|
||||
logger.info(f'building excel sheet {sheet}')
|
||||
|
||||
## load new device dataframe
|
||||
device_table_df = pandas.DataFrame.from_dict(collection[sheet].find({}, projection)) # would like to order by vpn_id to make spotting config changes simple
|
||||
|
||||
## copy device dataframe spoke records to 'VPN Spokes' dataframe
|
||||
cols_to_copy = ['last_modified', 'DeviceName', 'Manufacturer', 'Model', 'DeviceRecNum', 'peer_ip', 'peer_vpn_id', 'nhrp_nexthop']
|
||||
cols_to_copy_exist = []
|
||||
for col in cols_to_copy:
|
||||
if col in device_table_df.columns:
|
||||
cols_to_copy_exist.append(col)
|
||||
temp_df = pandas.DataFrame(device_table_df[cols_to_copy_exist])
|
||||
temp_df.insert(0, 'Hub', sheet)
|
||||
spokes_df = pandas.concat([spokes_df, temp_df])
|
||||
|
||||
## create missing columns containing NaN values - see gotcha below: when searching for strings in a 'was missing, now NaN-only' column, the column has no string type, so use .astype(str) to help
|
||||
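# tiny standalone illustration of the gotcha (not part of the flow, names are throwaway):
# df = pandas.DataFrame({'a': [1]}).reindex(columns=['a', 'b'])   # 'b' exists but is all-NaN (float dtype)
# df['b'].str.contains('3des')                                    # raises - no .str accessor on a non-string column
# df['b'].astype(str).str.contains('3des', na=False)              # works, returns False as intended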
column_order = filtered_column_order
|
||||
missing_columns = [c for c in column_order if c not in list(device_table_df.columns)]
|
||||
if len(missing_columns) >0:
|
||||
# print(missing_columns)
|
||||
device_table_df = device_table_df.reindex(columns = device_table_df.columns.tolist() + missing_columns)
|
||||
|
||||
## reorder columns in device sheet
|
||||
column_count = 0
|
||||
for c in column_order:
|
||||
if c in list(device_table_df.columns):
|
||||
device_table_df.insert(column_count, c, device_table_df.pop(c))
|
||||
column_count += 1
|
||||
# print(device_table_df.columns.values.tolist())
|
||||
|
||||
# ## debug - check columns / NaN values exist
|
||||
# if sheet == 'lon-vpn03':
|
||||
# pandas.set_option('display.max_rows', None)
|
||||
# pandas.set_option('display.max_columns', None)
|
||||
# pandas.set_option('display.width', None)
|
||||
# logger.error(missing_columns)
|
||||
# logger.error(device_table_df)
|
||||
# logger.error(device_table_df.columns.values.tolist())
|
||||
# pandas.reset_option('all')
|
||||
|
||||
## check for p1/p2 3des
|
||||
# p1_3des = True
|
||||
device_table_df.loc[
|
||||
~(device_table_df['p1_encr_algo'].isna()) &
|
||||
~(device_table_df['p2_encr_algo'].isna()) &
|
||||
(device_table_df['p1_encr_algo'].astype(str).str.contains("3des", na=False, case=False))
|
||||
, 'p1_3des'] = True
|
||||
# p1_3des = False
|
||||
device_table_df.loc[
|
||||
~(device_table_df['p1_encr_algo'].isna()) &
|
||||
~(device_table_df['p2_encr_algo'].isna()) &
|
||||
~(device_table_df['p1_encr_algo'].astype(str).str.contains("3des", na=False, case=False))
|
||||
, 'p1_3des'] = False
|
||||
# p2_3des = True
|
||||
# .astype(str) required when all records are UP-IDLE and the p2_encr_algo column is filled with NaN values; the NaN-populated column has no string type (just one entry in this column would give it string type)
|
||||
device_table_df.loc[
|
||||
~(device_table_df['p1_encr_algo'].isna()) &
|
||||
~(device_table_df['p2_encr_algo'].isna()) &
|
||||
(device_table_df['p2_encr_algo'].astype(str).str.contains("3des", na=False, case=False))
|
||||
, 'p2_3des'] = True
|
||||
# p2_3des = False
|
||||
device_table_df.loc[
|
||||
~(device_table_df['p1_encr_algo'].isna()) &
|
||||
~(device_table_df['p2_encr_algo'].isna()) &
|
||||
~(device_table_df['p2_encr_algo'].astype(str).str.contains("3des", na=False, case=False))
|
||||
, 'p2_3des'] = False
|
||||
|
||||
## write new sheet to workbook
|
||||
device_table_df.to_excel(writer, index=False, sheet_name=sheet)
|
||||
|
||||
## add device dataframe to dict, 'VPN Hubs' will analyse each dataframe for stats
|
||||
devices_dataframes.update({sheet: device_table_df})
|
||||
|
||||
## copy each spoke to 'VPN Spokes'
|
||||
spokes_df.to_excel(writer, index=False, sheet_name='VPN Spokes')
|
||||
|
||||
## return dict of device/sheets dataframes for stats collection
|
||||
return devices_dataframes, spokes_df
|
||||
|
||||
def transform_devices_sheet(collection, writer, devices_df):
|
||||
logger = logging.getLogger('main')
|
||||
logger.info('building excel sheet VPN Hubs')
|
||||
|
||||
## vars
|
||||
manufacturer_models = {}
|
||||
devices_3des_stats = {'p1_3des_p2_3des_count': [], 'p1_3des_p2_ok_count': [], 'p1_ok_p2_3des_count': [], 'p1_ok_p2_ok_count': []}
|
||||
|
||||
## load 'VPN Hubs' collection into devices dataframe in alphabetical order
|
||||
projection = {'_id': 0, 'scrapli_platform': 0}
|
||||
vpn_devices_table_df = pandas.DataFrame.from_dict(collection.find({}, projection).sort('DeviceName', 1))
|
||||
|
||||
## use xlsxwriter directly where pandas xlsxwriter wrapper is limited
|
||||
workbook = writer.book
|
||||
worksheet = workbook.get_worksheet_by_name('VPN Hubs')
|
||||
link_format = workbook.add_format({'bold': False, 'font_color': 'blue', 'underline': True})
|
||||
|
||||
## create missing columns containing NaN values (may occur with 'ssh failed auth' / 'ssh failed connection' devices)
|
||||
# column_order = ["DeviceName", "FQDN", "IPAddress", "session_protocol", "DeviceRecNum", "DeviceType", "DeviceDescription", "DeviceStatus", "Site", "Region", "Country", "Division", "vendor", "os_flavour", "chassis", "serial", "os_version", "image", "p1_ok_p2_3des", "p1_3des_p2_ok", "p1_3des_p2_3des", "p1_ok_p2_ok", "tunnel_count", "transform_default_3des", "transform_default_3des_name", "spoke_aes_known_support", "spoke_default_p2_3des", "spoke_default_p2_not_3des", "spoke_default_p2_algo_unknown", "isakmp_policy_default_p1_3des", "isakmp_policy", "compliant"]
|
||||
column_order = ["DeviceName", "FQDN", "IPAddress", "session_protocol", "DeviceRecNum", "DeviceType", "DeviceDescription", "DeviceStatus", "Site", "Region", "Country", "Division", "vendor", "os_flavour", "chassis", "serial", "os_version", "image", "p1_ok_p2_3des", "p1_3des_p2_ok", "p1_3des_p2_3des", "p1_ok_p2_ok", "tunnel_count", "transform_default_3des", "transform_default_3des_name", "spoke_aes_known_support", "spoke_default_p2_3des", "spoke_default_p2_not_3des", "spoke_default_p2_algo_unknown", "isakmp_policy_default_p1_3des", "isakmp_policy"]
|
||||
missing_columns = [c for c in column_order if c not in list(vpn_devices_table_df.columns)]
|
||||
if len(missing_columns) >0:
|
||||
vpn_devices_table_df = vpn_devices_table_df.reindex(columns = vpn_devices_table_df.columns.tolist() + missing_columns)
|
||||
|
||||
## reorder columns in 'VPN Hubs' (_default) dataframe
|
||||
column_count = 0
|
||||
for c in column_order:
|
||||
if c in list(vpn_devices_table_df.columns):
|
||||
vpn_devices_table_df.insert(column_count, c, vpn_devices_table_df.pop(c))
|
||||
column_count += 1
|
||||
|
||||
## loop each 'device dataframe', update 'VPN Hubs' with stats
|
||||
for k, v in devices_df.items():
|
||||
sheet = k
|
||||
device_table_df = v
|
||||
|
||||
## update vpn_devices_table_df dataframe with device_table_df tunnel compliance state
|
||||
vpn_devices_table_df.loc[vpn_devices_table_df['DeviceName'] == sheet, 'p1_ok_p2_3des'] = len(device_table_df[
|
||||
~(device_table_df['p1_encr_algo'].isna()) &
|
||||
~(device_table_df['p2_encr_algo'].isna()) &
|
||||
(device_table_df['p1_3des'].dropna() == False) &
|
||||
(device_table_df['p2_3des'].dropna() == True)
|
||||
])
|
||||
vpn_devices_table_df.loc[vpn_devices_table_df['DeviceName'] == sheet, 'p1_3des_p2_ok'] = len(device_table_df[
|
||||
~(device_table_df['p1_encr_algo'].isna()) &
|
||||
~(device_table_df['p2_encr_algo'].isna()) &
|
||||
(device_table_df['p1_3des'].dropna() == True) &
|
||||
(device_table_df['p2_3des'].dropna() == False)
|
||||
])
|
||||
vpn_devices_table_df.loc[vpn_devices_table_df['DeviceName'] == sheet, 'p1_3des_p2_3des'] = len(device_table_df[
|
||||
~(device_table_df['p1_encr_algo'].isna()) &
|
||||
~(device_table_df['p2_encr_algo'].isna()) &
|
||||
(device_table_df['p1_3des'].dropna() == True) &
|
||||
(device_table_df['p2_3des'].dropna() == True)
|
||||
])
|
||||
vpn_devices_table_df.loc[vpn_devices_table_df['DeviceName'] == sheet, 'p1_ok_p2_ok'] = len(device_table_df[
|
||||
~(device_table_df['p1_encr_algo'].isna()) &
|
||||
~(device_table_df['p2_encr_algo'].isna()) &
|
||||
(device_table_df['p1_3des'].dropna() == False) &
|
||||
(device_table_df['p2_3des'].dropna() == False)
|
||||
])
|
||||
vpn_devices_table_df.loc[vpn_devices_table_df['DeviceName'] == sheet, 'tunnel_count'] = len(device_table_df[
|
||||
~(device_table_df['p1_encr_algo'].isna()) &
|
||||
~(device_table_df['p2_encr_algo'].isna())
|
||||
])
|
||||
|
||||
# not evaluating transform sets by the dataframe, this information is now in the device collection
|
||||
# tfsa = device_table_df.loc[~(device_table_df['transform_sets'].isna())]['transform_sets'].values
|
||||
# for i in tfsa:
|
||||
# print('##############')
|
||||
# print(i)
|
||||
# print(i[0])
|
||||
# print(i[0]['DeviceName'])
|
||||
# print('##############')
|
||||
|
||||
## indicate if device has transform set(s) where default policy is 3des
|
||||
# default_p2_3des_count = len(device_table_df[(device_table_df['tunnel_complete'].dropna() == True) & (device_table_df['p2_default_3des'].dropna() == True)])
|
||||
default_p2_3des_count = len(device_table_df[
|
||||
~(device_table_df['p1_encr_algo'].isna()) &
|
||||
~(device_table_df['p2_encr_algo'].isna()) &
|
||||
(device_table_df['p2_default_3des'].dropna() == True)
|
||||
])
|
||||
# print(default_p2_3des_count)
|
||||
if default_p2_3des_count >0:
|
||||
vpn_devices_table_df.loc[vpn_devices_table_df['DeviceName'] == sheet, 'transform_default_3des'] = default_p2_3des_count
|
||||
## find transform sets that list 3des as the first entry
|
||||
# tfs = device_table_df.loc[device_table_df['p2_default_3des'] == True]['transform_sets'].values
|
||||
tfs = device_table_df.loc[device_table_df['p2_default_3des'] == True]['ordered_transform_set'].values
|
||||
tfs_l = []
|
||||
for i in tfs:
|
||||
tfs_l.append(i[0]['name']) # list inception, pandas returns list of fields that contain list of dicts, take 'name' of first (ordered) transform set entry
|
||||
# print(list(set(tfs_l)))
|
||||
vpn_devices_table_df.loc[vpn_devices_table_df['DeviceName'] == sheet, 'transform_default_3des_name'] = list(set(tfs_l))
|
||||
|
||||
## indicate whether we can determine that the spoke is configured only for 3des, using the logic from the device collection keys
|
||||
# this is known where the transform set lists aes as the primary option yet the spoke negotiates 3des (or vice versa)
|
||||
# the flag is 'unknown' when the transform set is only/primary 3des and the spoke uses 3des (or transform set is only aes and the spoke uses aes - the spoke may still prefer 3des)
|
||||
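# compact restatement of the rule described above:
#   hub transform prefers aes,  spoke negotiates 3des -> spoke 3des preference is known (3des)
#   hub transform prefers 3des, spoke negotiates aes  -> spoke 3des preference is known (not 3des)
#   hub and spoke agree (both 3des, or both aes)      -> preference flagged 'unknown'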
spoke_default_p2_3des_count = len(device_table_df[
|
||||
~(device_table_df['p1_encr_algo'].isna()) &
|
||||
~(device_table_df['p2_encr_algo'].isna()) &
|
||||
~(device_table_df['spoke_p2_default_3des'].isna()) &
|
||||
(device_table_df['spoke_p2_default_3des'].dropna() == True)
|
||||
])
|
||||
spoke_default_p2_ok_count = len(device_table_df[
|
||||
~(device_table_df['p1_encr_algo'].isna()) &
|
||||
~(device_table_df['p2_encr_algo'].isna()) &
|
||||
~(device_table_df['spoke_p2_default_3des'].isna()) &
|
||||
(device_table_df['spoke_p2_default_3des'].dropna() == False)
|
||||
])
|
||||
spoke_unknown_p2_preference_count = len(device_table_df[
|
||||
~(device_table_df['p1_encr_algo'].isna()) &
|
||||
~(device_table_df['p2_encr_algo'].isna()) &
|
||||
~(device_table_df['spoke_p2_algo_preference'].isna()) &
|
||||
(device_table_df['spoke_p2_algo_preference'].dropna() == 'unknown')
|
||||
])
|
||||
if spoke_default_p2_3des_count >0:
|
||||
vpn_devices_table_df.loc[vpn_devices_table_df['DeviceName'] == sheet, 'spoke_default_p2_3des'] = spoke_default_p2_3des_count
|
||||
if spoke_default_p2_ok_count >0:
|
||||
vpn_devices_table_df.loc[vpn_devices_table_df['DeviceName'] == sheet, 'spoke_default_p2_not_3des'] = spoke_default_p2_ok_count
|
||||
if spoke_unknown_p2_preference_count >0:
|
||||
vpn_devices_table_df.loc[vpn_devices_table_df['DeviceName'] == sheet, 'spoke_default_p2_algo_unknown'] = spoke_unknown_p2_preference_count
|
||||
|
||||
## spokes capable of aes
|
||||
spoke_aes_supported_count = len(device_table_df[
|
||||
~(device_table_df['p1_encr_algo'].isna()) &
|
||||
~(device_table_df['p2_encr_algo'].isna()) &
|
||||
((device_table_df['p1_encr_algo'].astype(str).str.contains("aes", na=False, case=False)) |
|
||||
(device_table_df['p2_encr_algo'].astype(str).str.contains("aes", na=False, case=False)))
|
||||
])
|
||||
if spoke_aes_supported_count >0:
|
||||
vpn_devices_table_df.loc[vpn_devices_table_df['DeviceName'] == sheet, 'spoke_aes_known_support'] = spoke_aes_supported_count
|
||||
|
||||
## query mongo to get contextual spoke stats for summary
|
||||
# (this is an easy place to grab stats in the vpn_spreadsheet module, mongo queries are more flexible than pandas queries for this task, but this is a much more expensive operation)
|
||||
|
||||
# find unique combos of manufacturer/model
|
||||
spoke_models = collection[sheet].aggregate([
|
||||
{ "$group": {
|
||||
"_id" : { "manufacturer": "$Manufacturer", "model": "$Model" },
|
||||
"count": {"$sum": 1}
|
||||
}
|
||||
}
|
||||
])
|
||||
spoke_manufacturer_models = []
|
||||
for unique_device in spoke_models:
|
||||
manufacturer = unique_device['_id']['manufacturer'] if 'manufacturer' in unique_device['_id'] else 'unknown'
|
||||
model = unique_device['_id']['model'] if 'model' in unique_device['_id'] else 'unknown'
|
||||
count = unique_device['count'] if 'count' in unique_device else 0
|
||||
entry = {'manufacturer': manufacturer, 'model': model, 'count': count}
|
||||
# entry = {'manufacturer': manufacturer, 'model': model}
|
||||
spoke_manufacturer_models.append(entry)
|
||||
# add to top level manufacturer/model dict
|
||||
if manufacturer in manufacturer_models:
|
||||
if model in manufacturer_models[manufacturer]:
|
||||
rolling_count = manufacturer_models[manufacturer][model]['count'] + count
|
||||
manufacturer_models[manufacturer][model].update({'count': rolling_count})
|
||||
else:
|
||||
manufacturer_models[manufacturer].update({model: {'count': count}})
|
||||
else:
|
||||
manufacturer_models.update({manufacturer: {model: {'count': count}}})
|
||||
|
||||
# add spoke hardware info to device sheet
|
||||
vpn_devices_table_df.loc[vpn_devices_table_df['DeviceName'] == sheet, 'spoke_hardware'] = str(spoke_manufacturer_models)
|
||||
|
||||
# TODO - could this be done in pandas faster? come back to this, there are 4 queries per host, info already in dataframes but the syntax is not as flexible
|
||||
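# hedged pandas equivalent of one of the four mongo counts below, as a sketch against the columns already in device_table_df:
# mask = (
#     device_table_df['Manufacturer'].eq(manufacturer)
#     & device_table_df['Model'].eq(model)
#     & device_table_df['p1_encr_algo'].astype(str).str.contains('3des', case=False, na=False)
#     & device_table_df['p2_encr_algo'].astype(str).str.contains('3des', case=False, na=False)
# )
# p1_3des_p2_3des_count = int(mask.sum())
# ('unknown' manufacturer/model would additionally need an .isna() branch, as the mongo queries do with $exists)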
|
||||
# find 3des stats by manufacturer/model
|
||||
for unique_device in spoke_manufacturer_models:
|
||||
manufacturer = unique_device['manufacturer']
|
||||
model = unique_device['model']
|
||||
triple_des_regex = re.compile('(?i).*3DES.*')
|
||||
not_triple_des_regex = {'$not': re.compile('(?i).*3DES.*')}
|
||||
# ensure stats for devices with no manufacturer/model are matched
|
||||
if manufacturer == 'unknown':
|
||||
manufacturer_match = {'$exists': False}
|
||||
else:
|
||||
manufacturer_match = manufacturer
|
||||
if model == 'unknown':
|
||||
model_match = {'$exists': False}
|
||||
else:
|
||||
model_match = model
|
||||
p1_3des_p2_3des_query = {'Manufacturer': manufacturer_match, 'Model': model_match, 'p1_encr_algo': {'$exists': True}, 'p2_encr_algo': {'$exists': True}, 'p1_encr_algo': triple_des_regex, 'p2_encr_algo': triple_des_regex} # if current_config implemented it needs to be in these queries
|
||||
p1_3des_p2_ok_query = {'Manufacturer': manufacturer_match, 'Model': model_match, 'p1_encr_algo': {'$exists': True}, 'p2_encr_algo': {'$exists': True}, 'p1_encr_algo': triple_des_regex, 'p2_encr_algo': not_triple_des_regex}
|
||||
p1_ok_p2_3des_query = {'Manufacturer': manufacturer_match, 'Model': model_match, 'p1_encr_algo': {'$exists': True}, 'p2_encr_algo': {'$exists': True}, 'p1_encr_algo': not_triple_des_regex, 'p2_encr_algo': triple_des_regex}
|
||||
p1_ok_p2_ok_query = {'Manufacturer': manufacturer_match, 'Model': model_match, 'p1_encr_algo': {'$exists': True}, 'p2_encr_algo': {'$exists': True}, 'p1_encr_algo': not_triple_des_regex, 'p2_encr_algo': not_triple_des_regex}
|
||||
p1_3des_p2_3des_count = collection[sheet].count_documents(p1_3des_p2_3des_query)
|
||||
p1_3des_p2_ok_count = collection[sheet].count_documents(p1_3des_p2_ok_query)
|
||||
p1_ok_p2_3des_count = collection[sheet].count_documents(p1_ok_p2_3des_query)
|
||||
p1_ok_p2_ok_count = collection[sheet].count_documents(p1_ok_p2_ok_query)
|
||||
# add 3des stats to top level manufacturer/model dict
|
||||
count_3des_conditions = {'p1_3des_p2_3des_count': p1_3des_p2_3des_count, 'p1_3des_p2_ok_count': p1_3des_p2_ok_count, 'p1_ok_p2_3des_count': p1_ok_p2_3des_count, 'p1_ok_p2_ok_count': p1_ok_p2_ok_count}
|
||||
for k,v in count_3des_conditions.items():
|
||||
if k in manufacturer_models[manufacturer][model]:
|
||||
rolling_count = manufacturer_models[manufacturer][model][k] + v
|
||||
manufacturer_models[manufacturer][model].update({k: rolling_count})
|
||||
else:
|
||||
manufacturer_models[manufacturer][model].update({k: v})
|
||||
# add device name to lists keyed by 3des status (could be done in pandas but the info is available here at hand)
|
||||
if v >0:
|
||||
if sheet not in devices_3des_stats[k]:
|
||||
devices_3des_stats[k].append(sheet)
|
||||
|
||||
## add hyperlinks in 'VPN Hubs' to each device sheet, are written into the excel sheets directly
|
||||
name_col_idx = vpn_devices_table_df.columns.get_loc("DeviceName")
|
||||
device_row_idx = vpn_devices_table_df.loc[vpn_devices_table_df['DeviceName'] == sheet].index.to_list()[0] + 1
|
||||
cell = xlsxwriter.utility.xl_rowcol_to_cell(device_row_idx, name_col_idx)
|
||||
worksheet.write_url(cell, f"internal:'{sheet}'!A1", link_format, string=sheet, tip="Device page")
|
||||
worksheet.conditional_format(cell, {'type': 'no_errors','format': link_format}) # cell format doesn't apply over an existing field with the write_url method, so reapply link formatting
|
||||
## add hyperlink from device sheet back to 'VPN Hubs'
|
||||
devicesheet = workbook.get_worksheet_by_name(sheet)
|
||||
devicesheet.write_url('A1', f"internal:'VPN Hubs'!{cell}", link_format, tip="Back")
|
||||
devicesheet.conditional_format('A1', {'type': 'no_errors','format': link_format})
|
||||
|
||||
## debug manufacturer/model and per device 3des stats
|
||||
# logger.info(json.dumps(manufacturer_models, indent=4))
|
||||
# logger.info(json.dumps(devices_3des_stats, indent=4))
|
||||
|
||||
## create temporary devices stats data object
|
||||
devices_with_tunnels = sorted(list(set(devices_3des_stats['p1_3des_p2_3des_count'] + devices_3des_stats['p1_3des_p2_ok_count'] + devices_3des_stats['p1_ok_p2_3des_count'] + devices_3des_stats['p1_ok_p2_ok_count'])))
|
||||
devices_with_3des_tunnels = sorted(list(set(devices_3des_stats['p1_3des_p2_3des_count'] + devices_3des_stats['p1_3des_p2_ok_count'] + devices_3des_stats['p1_ok_p2_3des_count'])))
|
||||
devices_with_no_3des_tunnels = sorted([device for device in devices_3des_stats['p1_ok_p2_ok_count'] if device not in list(set(devices_3des_stats['p1_3des_p2_3des_count'] + devices_3des_stats['p1_3des_p2_ok_count'] + devices_3des_stats['p1_ok_p2_3des_count']))])
|
||||
devices_with_only_p1_3des_p2_3des_tunnels = sorted([device for device in devices_3des_stats['p1_3des_p2_3des_count'] if device not in list(set(devices_3des_stats['p1_3des_p2_ok_count'] + devices_3des_stats['p1_ok_p2_3des_count'] + devices_3des_stats['p1_ok_p2_ok_count']))])
|
||||
devices_with_only_p1_3des_p2_ok = sorted([device for device in devices_3des_stats['p1_3des_p2_ok_count'] if device not in list(set(devices_3des_stats['p1_3des_p2_3des_count'] + devices_3des_stats['p1_ok_p2_3des_count'] + devices_3des_stats['p1_ok_p2_ok_count']))])
|
||||
devices_with_only_p1_ok_p2_3des = sorted([device for device in devices_3des_stats['p1_ok_p2_3des_count'] if device not in list(set(devices_3des_stats['p1_3des_p2_3des_count'] + devices_3des_stats['p1_3des_p2_ok_count'] + devices_3des_stats['p1_ok_p2_ok_count']))])
|
||||
devices_with_p1_3des_p2_3des = sorted(devices_3des_stats['p1_3des_p2_3des_count'])
|
||||
devices_with_p1_3des_p2_ok = sorted(devices_3des_stats['p1_3des_p2_ok_count'])
|
||||
devices_with_p1_ok_p2_3des = sorted(devices_3des_stats['p1_ok_p2_3des_count'])
|
||||
devices_with_p1_ok_p2_ok = sorted(devices_3des_stats['p1_ok_p2_ok_count'])
|
||||
hub_3des_stats = {'devices_with_tunnels': devices_with_tunnels,
|
||||
'devices_with_no_3des_tunnels': devices_with_no_3des_tunnels,
|
||||
'devices_with_3des_tunnels': devices_with_3des_tunnels,
|
||||
'devices_with_p1_3des_p2_3des': devices_with_p1_3des_p2_3des,
|
||||
'devices_with_p1_3des_p2_ok': devices_with_p1_3des_p2_ok,
|
||||
'devices_with_p1_ok_p2_3des': devices_with_p1_ok_p2_3des,
|
||||
'devices_with_p1_ok_p2_ok': devices_with_p1_ok_p2_ok,
|
||||
'devices_with_only_p1_3des_p2_3des_tunnels': devices_with_only_p1_3des_p2_3des_tunnels,
|
||||
'devices_with_only_p1_3des_p2_ok': devices_with_only_p1_3des_p2_ok,
|
||||
'devices_with_only_p1_ok_p2_3des': devices_with_only_p1_ok_p2_3des
|
||||
}
|
||||
device_3des_stats_dict = {'hub': hub_3des_stats, 'spoke': manufacturer_models}
|
||||
|
||||
# TODO - can do this same logic with 'manufacturer_models' - find spoke device types that only have p1_ok_p2_ok_count or p1_3des_p2_3des_count - might be easier shoved into a dataframe, sorted and then 'deduced'
|
||||
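# hedged sketch of the TODO above (flattens manufacturer_models first; variable names are illustrative):
# rows = [{'manufacturer': man, 'model': mod, **stats}
#         for man, models in manufacturer_models.items() for mod, stats in models.items()]
# spoke_df = pandas.DataFrame(rows).fillna(0)
# only_clean = spoke_df[(spoke_df['p1_ok_p2_ok_count'] > 0) &
#                       (spoke_df[['p1_3des_p2_3des_count', 'p1_3des_p2_ok_count', 'p1_ok_p2_3des_count']].sum(axis=1) == 0)]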
|
||||
## tidy up 'VPN Hubs' dataframe with contextual information
|
||||
# devices not in DNS or SSH failed auth/connection or junos
|
||||
vpn_devices_table_df.loc[
|
||||
(vpn_devices_table_df['FQDN'] == 'unknown') &
|
||||
(vpn_devices_table_df['IPAddress'] == 'unknown'),
|
||||
['os_flavour', 'os_version', 'image', 'vendor', 'chassis', 'serial']
|
||||
] = 'unknown'
|
||||
vpn_devices_table_df.loc[
|
||||
(vpn_devices_table_df['session_protocol'] == 'ssh failed auth') |
|
||||
(vpn_devices_table_df['session_protocol'] == 'ssh failed connection'),
|
||||
['os_flavour', 'os_version', 'image', 'vendor', 'chassis', 'serial']
|
||||
] = 'unknown'
|
||||
vpn_devices_table_df.loc[
|
||||
(vpn_devices_table_df['vendor'] == 'junos'),
|
||||
['os_flavour', 'os_version', 'image', 'chassis', 'serial']
|
||||
] = 'unknown'
|
||||
|
||||
# populate 0 counts for empty stats where the 'cisco' device is contactable (readability)
|
||||
possible_empty = ['tunnel_count', 'p1_3des_p2_ok', 'p1_ok_p2_3des', 'p1_3des_p2_3des', 'p1_ok_p2_ok', 'transform_default_3des', 'spoke_default_p2_3des', 'spoke_default_p2_not_3des', 'spoke_default_p2_algo_unknown', 'spoke_aes_known_support']
|
||||
for f in possible_empty:
|
||||
vpn_devices_table_df.loc[
|
||||
(vpn_devices_table_df['session_protocol'] == 'ssh') &
|
||||
(vpn_devices_table_df['vendor'] == 'cisco') &
|
||||
(vpn_devices_table_df[f].isna()),
|
||||
[f]
|
||||
] = 0
|
||||
|
||||
# ## compliance bool tally
|
||||
# # the device may be compliant where all spokes are !3des but there is still an unused 3des transform set - technically uncompliant but functionally not
|
||||
# # the device may be compliant where the CPE is not TNS owned but there are 3des tunnels and a 3des transform set - technically uncompliant but outside of scope / remediation
|
||||
# vpn_devices_table_df.loc[~(vpn_devices_table_df['tunnel_count'].isna()), 'compliant'] = ~(vpn_devices_table_df.fillna(0)['p1_3des_p2_ok'] + vpn_devices_table_df.fillna(0)['p1_ok_p2_3des'] + vpn_devices_table_df.fillna(0)['p1_3des_p2_3des']).astype(bool)
|
||||
|
||||
## write sheet to workbook
|
||||
vpn_devices_table_df.to_excel(writer, index=False, sheet_name='VPN Hubs')
|
||||
return vpn_devices_table_df, device_3des_stats_dict
|
||||
|
||||
def generate_summary_sheet(collection, writer, vpn_devices_table_df, device_3des_stats_dict):
|
||||
# print('building excel sheet VPN Hub Summary')
|
||||
logger = logging.getLogger('main')
|
||||
logger.info('building excel sheet VPN Hub Summary')
|
||||
|
||||
### collect stats from 'VPN Hubs' sheet, populate a hub device dict of stats
|
||||
|
||||
# vars
|
||||
hub_summary_dict = {}
|
||||
|
||||
# device list sources
|
||||
hub_summary_dict.update({"Hub Devices": ""})
|
||||
hub_summary_dict.update({"break1": "break"})
|
||||
hub_summary_dict.update({"DMVPN device list": "http://ipdesk.corp.tnsi.com/newipdesk/dmvpnhubscan.php"})
|
||||
hub_summary_dict.update({"IP-P2PAGG device list": "http://ipdesk.corp.tnsi.com/newipdesk/report_p2p_aggs.php"})
|
||||
hub_summary_dict.update({"break2": "break"})
|
||||
|
||||
## tunnel stats
|
||||
|
||||
# total tunnels
|
||||
tunnel_count = vpn_devices_table_df['tunnel_count'].sum()
|
||||
hub_summary_dict.update({"Total tunnel count": tunnel_count})
|
||||
# compliant p1_ok_p2_ok
|
||||
compliant_tunnel_count = vpn_devices_table_df['p1_ok_p2_ok'].sum()
|
||||
hub_summary_dict.update({"Total compliant tunnel count": compliant_tunnel_count})
|
||||
# uncompliant tunnel count
|
||||
uncompliant_tunnel_count = vpn_devices_table_df['p1_3des_p2_ok'].sum() + vpn_devices_table_df['p1_ok_p2_3des'].sum() + vpn_devices_table_df['p1_3des_p2_3des'].sum()
|
||||
hub_summary_dict.update({"Total uncompliant tunnel count": uncompliant_tunnel_count})
|
||||
# uncompliant p1_3des_p2_ok
|
||||
uncompliant_p1_3des_p2_ok = vpn_devices_table_df['p1_3des_p2_ok'].sum()
|
||||
hub_summary_dict.update({"Total uncompliant p1_3des_p2_ok": uncompliant_p1_3des_p2_ok})
|
||||
# uncompliant p1_ok_p2_3des
|
||||
uncompliant_p1_ok_p2_3des = vpn_devices_table_df['p1_ok_p2_3des'].sum()
|
||||
hub_summary_dict.update({"Total uncompliant p1_ok_p2_3des": uncompliant_p1_ok_p2_3des})
|
||||
# uncompliant p1_3des_p2_3des
|
||||
uncompliant_p1_3des_p2_3des = vpn_devices_table_df['p1_3des_p2_3des'].sum()
|
||||
hub_summary_dict.update({"Total uncompliant p1_3des_p2_3des": uncompliant_p1_3des_p2_3des})
|
||||
hub_summary_dict.update({"break3": "break"})
|
||||
|
||||
## hub stats
|
||||
|
||||
# total devices
|
||||
total_devices = len(vpn_devices_table_df)
|
||||
hub_summary_dict.update({"Total devices": total_devices})
|
||||
# total contactable devices
|
||||
total_contactable_devices = len(vpn_devices_table_df[~vpn_devices_table_df['session_protocol'].isin(['unknown', 'ssh failed auth', 'ssh failed connection'])])
|
||||
hub_summary_dict.update({"Total contactable devices": total_contactable_devices})
|
||||
# total uncontactable devices
|
||||
total_uncontactable_devices = len(vpn_devices_table_df[vpn_devices_table_df['session_protocol'].isin(['unknown', 'ssh failed connection'])])
|
||||
hub_summary_dict.update({"Total uncontactable devices": total_uncontactable_devices})
|
||||
# total auth fail
|
||||
total_auth_fail = len(vpn_devices_table_df[vpn_devices_table_df['session_protocol'] == 'ssh failed auth'])
|
||||
hub_summary_dict.update({"Total auth fail": total_auth_fail})
|
||||
hub_summary_dict.update({"break4": "break"})
|
||||
|
||||
# devices with tunnels
|
||||
devices_with_tunnel = len(vpn_devices_table_df[(vpn_devices_table_df['tunnel_count'] > 0).dropna()])
|
||||
# hub_summary_dict.update({"Total devices with tunnels": devices_with_tunnel})
|
||||
hub_summary_dict.update({"Total devices with tunnels": len(device_3des_stats_dict['hub']['devices_with_tunnels']) }) # values differ by 1 - locate source of issue
|
||||
# devices without tunnels
|
||||
devices_without_tunnel = len(vpn_devices_table_df[(vpn_devices_table_df['tunnel_count'] == 0).dropna()])
|
||||
hub_summary_dict.update({"Total devices without tunnels": devices_without_tunnel})
|
||||
|
||||
# devices where 3DES not seen in p1 or p2
|
||||
hub_summary_dict.update({"Total devices with tunnels matching: 3DES not seen in p1 or p2": len(device_3des_stats_dict['hub']['devices_with_no_3des_tunnels'])})
|
||||
# hub_summary_dict.update({"Device list where ALL tunnels matching: 3DES not seen in p1 or p2": device_3des_stats_dict['hub']['devices_with_no_3des_tunnels']})
|
||||
# devices where 3DES seen in p1 or p2
|
||||
hub_summary_dict.update({"Total devices with tunnels matching: 3DES seen in p1 or p2": len(device_3des_stats_dict['hub']['devices_with_3des_tunnels'])})
|
||||
# devices where p1 = 3DES, p2 = 3DES
|
||||
hub_summary_dict.update({"Total devices with tunnels matching: 3DES seen in p1 and p2": len(device_3des_stats_dict['hub']['devices_with_p1_3des_p2_3des'])})
|
||||
# devices where p1 = 3DES, p2 = OK
|
||||
hub_summary_dict.update({"Total devices with tunnels matching: 3DES seen in p1 and not p2": len(device_3des_stats_dict['hub']['devices_with_p1_3des_p2_ok'])})
|
||||
# devices where p1 = OK, p2 = 3DES
|
||||
hub_summary_dict.update({"Total devices with tunnels matching: 3DES not seen in p1 and in p2": len(device_3des_stats_dict['hub']['devices_with_p1_ok_p2_3des'])})
|
||||
# devices where p1 = OK, p2 = OK
|
||||
hub_summary_dict.update({"Total devices with tunnels matching: 3DES not seen in p1 or p2": len(device_3des_stats_dict['hub']['devices_with_p1_ok_p2_ok'])})
|
||||
# devices where only p1 = 3DES, p2 = 3DES
|
||||
hub_summary_dict.update({"Total devices where ALL tunnels matching: 3DES seen in p1 and p2": len(device_3des_stats_dict['hub']['devices_with_only_p1_3des_p2_3des_tunnels'])})
|
||||
hub_summary_dict.update({"Device list where ALL tunnels matching: 3DES not seen in p1 or p2": device_3des_stats_dict['hub']['devices_with_only_p1_3des_p2_3des_tunnels']})
|
||||
# devices where only p1 = 3DES, p2 = ok
|
||||
hub_summary_dict.update({"Total devices where ALL tunnels matching: 3DES seen in p1 and not p2": len(device_3des_stats_dict['hub']['devices_with_only_p1_3des_p2_ok'])})
|
||||
hub_summary_dict.update({"Device list where ALL tunnels matching: 3DES not seen in p1 and not p2": device_3des_stats_dict['hub']['devices_with_only_p1_3des_p2_ok']})
|
||||
# devices where only p1 = ok, p2 = 3DES
|
||||
hub_summary_dict.update({"Total devices where ALL tunnels matching: 3DES not seen in p1 and in p2": len(device_3des_stats_dict['hub']['devices_with_only_p1_ok_p2_3des'])})
|
||||
hub_summary_dict.update({"Device list where ALL tunnels matching: 3DES not seen in p1 and in p2": device_3des_stats_dict['hub']['devices_with_only_p1_ok_p2_3des']})
|
||||
hub_summary_dict.update({"break5": "break"})
|
||||
|
||||
# devices where transform set has primary definition with 3des
|
||||
devices_default_tf_triple_des = len(vpn_devices_table_df[(vpn_devices_table_df['transform_default_3des'] > 0).dropna()])
|
||||
hub_summary_dict.update({"Total devices with 3DES in primary transform set": devices_default_tf_triple_des})
|
||||
# devices where isakmp policy has primary definition with 3des
|
||||
devices_default_isakmp_triple_des = len(vpn_devices_table_df[(vpn_devices_table_df['isakmp_policy_default_p1_3des'] == True).dropna()])
|
||||
hub_summary_dict.update({"Total devices with 3DES in primary ISAKMP policy": devices_default_isakmp_triple_des})
|
||||
|
||||
# ## devices compliant - disabled as the compliance check is fuzzy
|
||||
# devices_compliant = len(vpn_devices_table_df[(vpn_devices_table_df['compliant'] == True).dropna()])
|
||||
# hub_summary_dict.update({"Total devices compliant": devices_compliant})
|
||||
# # devices uncompliant
|
||||
# devices_uncompliant = len(vpn_devices_table_df[(vpn_devices_table_df['compliant'] == False).dropna()])
|
||||
# hub_summary_dict.update({"Total devices uncompliant": devices_uncompliant})
|
||||
# hub_summary_dict.update({"break7": "break"})
|
||||
|
||||
## device attributes, bit of a cheat using mongo to sidestep sorting lists from the dataframe
|
||||
device_attributes = {
|
||||
'DeviceType': 'Total devices type',
|
||||
'vendor': 'Total devices vendor',
|
||||
'os_flavour': 'Total devices OS',
|
||||
'chassis': 'Total devices model',
|
||||
'image': 'Total devices firmware',
|
||||
'Region': 'Total devices region',
|
||||
'Site': 'Total devices site',
|
||||
'Division': 'Total devices division'
|
||||
}
|
||||
for attribute,message in device_attributes.items():
|
||||
uniq_attribute = collection.distinct(attribute)
|
||||
for a in uniq_attribute:
|
||||
if a not in ['unknown', '']:
|
||||
attribute_count = collection.count_documents({attribute: a})
|
||||
uniq_message = f'{message} {a}'
|
||||
hub_summary_dict.update({uniq_message: attribute_count})
|
||||
|
||||
### collect stats, populate a spoke device dict of stats
|
||||
|
||||
# vars
|
||||
spoke_summary_dict = {}
|
||||
|
||||
# spoke column
|
||||
spoke_summary_dict.update({"Spoke Devices": ""})
|
||||
spoke_summary_dict.update({"break1": "break"})
|
||||
# spokes that support aes, seen in p1 or p2
|
||||
spoke_aes_supported = vpn_devices_table_df['spoke_aes_known_support'].sum()
|
||||
spoke_summary_dict.update({"Total spokes that support AES (AES seen in either p1/p2)": spoke_aes_supported})
|
||||
# spokes where transform set has primary definition with 3des
|
||||
spokes_default_tf_triple_des = vpn_devices_table_df['spoke_default_p2_3des'].sum()
|
||||
spoke_summary_dict.update({"Total spokes with 3DES in phase2 config (offered AES chose 3DES)": spokes_default_tf_triple_des})
|
||||
# spokes where transform set cannot have primary definition with 3des
|
||||
spokes_default_tf_not_triple_des = vpn_devices_table_df['spoke_default_p2_not_3des'].sum()
|
||||
spoke_summary_dict.update({"Total spokes with 3DES not in phase2 config (offered 3DES chose AES)": spokes_default_tf_not_triple_des})
|
||||
# spokes where transform set is unknown, they negotiate with the device preference
|
||||
spokes_unknown_tf_triple_des = vpn_devices_table_df['spoke_default_p2_algo_unknown'].sum()
|
||||
spoke_summary_dict.update({"Total spokes where phase2 algo preference unknown (negotiate same as device transform)": spokes_unknown_tf_triple_des})
|
||||
spoke_summary_dict.update({"break2": "break"})
|
||||
|
||||
# ## debug
|
||||
# logger.info(spoke_summary_dict)
|
||||
|
||||
### write hub_summary_dict directly to the summary sheet (no pandas)
|
||||
workbook = writer.book
|
||||
colA_format_summary = workbook.add_format({'bold': True, 'text_wrap': False, 'align': 'left', 'valign': 'top', 'fg_color': '#77B0D1', 'border': 1})
|
||||
colB_format_summary = workbook.add_format({'bold': False, 'text_wrap': False, 'align': 'right', 'valign': 'top', 'border': 1})
|
||||
colB_hyperlink_format_summary = workbook.add_format({'bold': False, 'text_wrap': False, 'align': 'left', 'valign': 'top', 'border': 1, 'font_color': 'blue', 'underline': True})
|
||||
colD_format_summary = workbook.add_format({'bold': True, 'text_wrap': False, 'align': 'left', 'valign': 'top', 'fg_color': '#79C9A5', 'border': 1})
|
||||
# worksheet = writer.sheets['VPN Hub Summary']
|
||||
worksheet = writer.sheets['VPN Device Summary']
|
||||
|
||||
## write hub dict to sheet
|
||||
row = 0
|
||||
for k, v in hub_summary_dict.items():
|
||||
stat = k
|
||||
if isinstance(v, list):
|
||||
count = str(v)
|
||||
else:
|
||||
count = v
|
||||
row += 1
|
||||
if 'DMVPN device list' in stat or 'IP-P2PAGG device list' in stat:
|
||||
worksheet.write(f'A{row}', stat, colA_format_summary)
|
||||
worksheet.write_url(f'B{row}', count, colB_hyperlink_format_summary)
|
||||
elif count == 'break':
|
||||
worksheet.write(f'A{row}', '', colA_format_summary)
|
||||
worksheet.write(f'B{row}', '', colB_format_summary)
|
||||
else:
|
||||
worksheet.write(f'A{row}', stat, colA_format_summary)
|
||||
worksheet.write(f'B{row}', count, colB_format_summary)
|
||||
|
||||
## write spoke dict to sheet
|
||||
row = 0
|
||||
for k,v in spoke_summary_dict.items():
|
||||
stat = k
|
||||
count = v
|
||||
row +=1
|
||||
if count == 'break':
|
||||
worksheet.write(f'D{row}', '', colD_format_summary)
|
||||
worksheet.write(f'E{row}', '', colB_format_summary)
|
||||
else:
|
||||
worksheet.write(f'D{row}', stat, colD_format_summary)
|
||||
worksheet.write(f'E{row}', count, colB_format_summary)
|
||||
|
||||
## write spoke 3des summary stats to sheet
|
||||
row +=1
|
||||
spoke_filter_start = f'D{row}'
|
||||
worksheet.write(f'D{row}', 'Model', colD_format_summary)
|
||||
worksheet.write(f'E{row}', 'Count', colD_format_summary)
|
||||
worksheet.write(f'F{row}', 'p1_ok_p2_ok_count', colD_format_summary)
|
||||
worksheet.write(f'G{row}', 'p1_3des_p2_3des_count', colD_format_summary)
|
||||
worksheet.write(f'H{row}', 'p1_3des_p2_ok_count', colD_format_summary)
|
||||
worksheet.write(f'I{row}', 'p1_ok_p2_3des_count', colD_format_summary)
|
||||
for m in device_3des_stats_dict['spoke'].keys():
|
||||
manufacturer = m
|
||||
for k,v in device_3des_stats_dict['spoke'][m].items():
|
||||
row +=1
|
||||
model = f'{manufacturer} {k}'
|
||||
count = v['count']
|
||||
worksheet.write(f'D{row}', model, colD_format_summary)
|
||||
worksheet.write(f'E{row}', count, colB_format_summary)
|
||||
worksheet.write(f'F{row}', v['p1_ok_p2_ok_count'], colB_format_summary)
|
||||
worksheet.write(f'G{row}', v['p1_3des_p2_3des_count'], colB_format_summary)
|
||||
worksheet.write(f'H{row}', v['p1_3des_p2_ok_count'], colB_format_summary)
|
||||
worksheet.write(f'I{row}', v['p1_ok_p2_3des_count'], colB_format_summary)
|
||||
spoke_filter_end = f'I{row}'
|
||||
|
||||
## autofit, set column widths, autofilter
|
||||
worksheet.autofit()
|
||||
worksheet.set_column('B:B', 55)
|
||||
worksheet.set_column('C:C', 5)
|
||||
worksheet.set_column('E:E', 8)
|
||||
worksheet.set_column('F:I', 24)
|
||||
worksheet.autofilter(f'{spoke_filter_start}:{spoke_filter_end}')
|
||||
|
||||
def pretty_sheet(device_df, writer, header_format, sheet):
|
||||
worksheet = writer.sheets[sheet]
|
||||
# add sheet header from dataframe column names
|
||||
for col_num, value in enumerate(device_df.columns.values):
|
||||
worksheet.write(0, col_num, value, header_format)
|
||||
# set scope of autofilter
|
||||
(max_row, max_col) = device_df.shape
|
||||
worksheet.autofilter(0, 0, max_row, max_col - 1)
|
||||
# autofit columns
|
||||
worksheet.autofit()
|
||||
# set column width for specific columns (device + devices sheets), overwrite autofit for specific fields for readability
|
||||
column_width = {'FQDN': 10,
|
||||
'image': 10,
|
||||
'chassis': 12,
|
||||
'DeviceDescription': 17,
|
||||
'ipsec_flow': 30,
|
||||
'ordered_transform_set': 50,
|
||||
'crypto_map_interface': 25,
|
||||
'last_modified': 20,
|
||||
'session_protocol': 15,
|
||||
"p1_ok_p2_3des": 16,
|
||||
"p1_3des_p2_ok": 16,
|
||||
"p1_3des_p2_3des": 16,
|
||||
"p1_ok_p2_ok": 16,
|
||||
"tunnel_count": 12,
|
||||
"transform_default_3des": 16,
|
||||
"transform_default_3des_name": 16,
|
||||
"spoke_aes_known_support": 16,
|
||||
"spoke_default_p2_3des": 16,
|
||||
"spoke_default_p2_not_3des": 16,
|
||||
"spoke_default_p2_algo_unknown": 16,
|
||||
'isakmp_policy': 35,
|
||||
"isakmp_policy_default_p1_3des": 16
|
||||
}
|
||||
for col_num, value in enumerate(device_df.columns.values):
|
||||
if value in list(column_width.keys()):
|
||||
width = column_width[value]
|
||||
worksheet.set_column(col_num, col_num, width)
|
||||
# scrolling header
|
||||
worksheet.freeze_panes(1, 0)
|
||||
# header_format_devices uses 'text_wrap': True, set row to double depth for long field names
|
||||
if sheet == 'VPN Hubs':
|
||||
worksheet.set_row(0, 45)
|
||||
|
||||
# init excel workbook
|
||||
with pandas.ExcelWriter(outfile, engine='xlsxwriter') as writer:
|
||||
workbook = writer.book
|
||||
|
||||
# define header formats
|
||||
header_format_devices = workbook.add_format({'bold': True, 'text_wrap': True, 'valign': 'top', 'fg_color': '#77B0D1', 'border': 1})
|
||||
header_format_device = workbook.add_format({'bold': True, 'text_wrap': False, 'valign': 'top', 'fg_color': '#79C9A5', 'border': 1})
|
||||
|
||||
# create sheets in workbook order
|
||||
#static_sheets = ['VPN Hub Summary', 'VPN Devices'] # remove
|
||||
static_sheets = ['VPN Device Summary', 'VPN Spokes', 'VPN Hubs']
|
||||
sheets = workbook_sheets_order(writer, collection, devices_dict, static_sheets)
|
||||
|
||||
# populate devices sheets and 'VPN Spokes' sheet
|
||||
devices_df_dict, spokes_df = populate_device_sheets(collection, writer, sheets)
|
||||
|
||||
# transform and populate 'VPN Hubs' sheet
|
||||
vpn_devices_table_df, device_3des_stats_dict = transform_devices_sheet(collection, writer, devices_df_dict)
|
||||
|
||||
# generate 'VPN Device Summary' sheet
|
||||
generate_summary_sheet(collection, writer, vpn_devices_table_df, device_3des_stats_dict)
|
||||
|
||||
# pretty device sheets
|
||||
for k, v in devices_df_dict.items():
|
||||
sheet = k
|
||||
device_df = v
|
||||
pretty_sheet(device_df, writer, header_format_device, sheet)
|
||||
pretty_sheet(vpn_devices_table_df, writer, header_format_devices, 'VPN Hubs')
|
||||
pretty_sheet(spokes_df, writer, header_format_device, 'VPN Spokes')
|
||||
|
|
@ -0,0 +1,13 @@
|
|||
{
|
||||
"folders": [
|
||||
{
|
||||
"path": "."
|
||||
}
|
||||
],
|
||||
"settings": {
|
||||
"python.defaultInterpreterPath": "/home/tseed/WORK/python/vpn_venv/bin/python",
|
||||
"python.terminal.activateEnvironment": true,
|
||||
"python.terminal.activateEnvInCurrentTerminal": true,
|
||||
"files.defaultLanguage": "python"
|
||||
}
|
||||
}
|
||||