All URLs referenced in this documentation share the base shown in the examples below (for example, http://www.octapi.tech).
The REST API is currently served over HTTP; HTTPS is not supported.
The API currently requires no authentication.
To return a cleaned up version of the CSV file.
csv | The CSV to clean. | String |
version | A version number for the API | String |
originalHeaders | A list of original headers extracted from the CSV file | Array |
removedHeaders | A list of the headers this API has removed from the CSV | Array |
reCSV | A version of the CSV with extraneous columns removed | Text |
Example usage in python
from pathlib import Path
import requests
import json

# Read the CSV file to be cleaned
csv_file_path = "./myCsvFile.csv"
csv = Path(csv_file_path).read_text()

# POST the CSV to the reCSV endpoint
api_endpoint = "http://www.octapi.tech/reCSV"
data = {'csv': csv}
r = requests.post(url=api_endpoint, data=data)

y = json.loads(r.text)
version = y["version"]
originalHeaders = y["originalHeaders"]
removedHeaders = y["removedHeaders"]
reCSV = y["reCSV"]
To convert a ZIP archive containing JSON data into CSV format.
zip | The base64 encoded zip file to convert. | Base64 Encoded Data |
status | "OK" on success, "NG" on failure | String |
csv | Data in CSV format | String |
message | Error message, only in the case of an error | String |
Example usage in python
from pathlib import Path
import requests
import json
import base64

# Read and base64 encode the ZIP file
fileName = "./myZipFile.zip"
with open(fileName, "rb") as f:
    bytes = f.read()
encoded = base64.b64encode(bytes)

# POST the encoded ZIP to the zip2csv endpoint
api_endpoint = "http://www.octapi.tech/zip2csv"
data = {'zip': encoded}
theResponse = requests.post(url=api_endpoint, data=data)
theJson = json.loads(theResponse.text)
theCsv = theJson["csv"]

# If you want to write out the CSV
directory_name = "./"  # output directory for the CSV (adjust as needed)
f = open(directory_name + 'csvFile.csv', 'w')
f.write(theCsv)
f.close()
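Since the response carries a status of "OK" or "NG" and, on failure, a message, it can be worth checking the status before using the CSV. A minimal sketch, continuing from the example above:

# Check the status before trusting the csv field
if theJson["status"] != "OK":
    # The conversion failed; the "message" field explains why
    print("zip2csv failed: {}".format(theJson.get("message", "no message")))
else:
    theCsv = theJson["csv"]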
To convert a CSV file into JSON format.
csv | The CSV to convert. | String |
version | A version number for the API | String |
headers | A list of headers extracted from the CSV file | Array |
dataSet | The data set from the CSV file, structured as a JSON object | Object |
topLinest | The top lines in the CSV file, before the data, as a list | Array |
summaryData | The summary data from the bottom of the CSV file | Array of objects |
Example usage in python
from pathlib import Path
import requests
import json

# Read the CSV file to be converted
csv_file_path = "./myCsvFile.csv"
csv = Path(csv_file_path).read_text()

# POST the CSV to the objectify endpoint
api_endpoint = "http://www.octapi.tech/objectify"
data = {'csv': csv}
r = requests.post(url=api_endpoint, data=data)

y = json.loads(r.text)
version = y["version"]
headers = y["headers"]
dataSet = y["dataSet"]
To clean up a CSV file and return it, still in CSV format.
csv | The CSV to convert. | String |
version | A version number for the API | String |
csv | A cleaned up version of the CSV file provided as input | Text |
Example usage in python
from pathlib import Path
import requests
import json

# Read the CSV file to be cleaned
csv_file_path = "./myCsvFile.csv"
csv = Path(csv_file_path).read_text()

# POST the CSV to the csv2csv endpoint
api_endpoint = "http://www.octapi.tech/csv2csv"
data = {'csv': csv}
r = requests.post(url=api_endpoint, data=data)

y = json.loads(r.text)
version = y["version"]
newCSV = y["csv"]
Analyze the time series data and turn it into statistics. Creates statistical values (e.g. mean, max, min) for the various measured parameters (e.g. throughput, streams, MCS) for each value of attenuation.
headers | The set of headers (see objectify) | Array |
dataSet | The data set (see objectify) | Object or String |
run | The run number to analyze | Number |
version | A version number for the API | String |
crunchedData | The statistical data for the data set, structured as a JSON object | Object |
Notes | Alerts, Warnings, Commentary, and Summary based on analysis of the data | Object |
Example usage in python
api_endpoint = "http://www.octapi.tech/crunch" run=1; data = {'headers':headers, 'dataSet':json.dumps(dataSet), 'run':run} r = requests.post(url=api_endpoint, data=data) y = json.loads(r.text) version = y["version"] analyzedData = y["crunchedData"] print("The number of sets of analyzed data is {}".format(len(analyzedData))) for a in range(0,len(analyzedData)): print("The number of attenuation steps in set {} is {}".format(a+1,len(analyzedData[a])))
Analyze the statistical data. For example, provide feedback on the "quality" of a rate vs. range curve. In addition, compare the results of some statistical tests to pass/fail criteria, if those criteria are supplied.
headers | The set of headers (see objectify) | Array |
dataSet | The data set (see objectify) | Object or String |
run | The run number to analyze | Number |
crunchedData | The statistical data (see crunch) | Object or String |
pe | The Pal Endpoint to analyze | Number |
LimitsCSV | The CSV containing the limits information, if limits checking is desired | String (Example File) |
aName | The name of the analysis being called | String |
version | A version number for the API | String |
analyzedData | The statistical data for the data set, structured as a JSON object (same as input) | Object |
Notes | Notes based on analysis of the data. Includes Alerts, Warnings, and Notes | Array |
limitTestResults | Results of limit test analysis, if limits file was provided | Array |
Example usage in python
# headers, dataSet, run, and analyzedData come from the /objectify and /crunch examples above
limits_file_path = "./Limits.csv"
limits = Path(limits_file_path).read_text()

# POST everything to the analyze endpoint
api_endpoint = "http://www.octapi.tech/analyze"
pe = 1
data = {'headers': headers,
        'dataSet': json.dumps(dataSet),
        'run': run,
        'crunchedData': json.dumps(analyzedData),
        'pe': pe,
        'LimitsCSV': limits,
        'aName': 'analyze_ThroughputVsAttenuation'}
r = requests.post(url=api_endpoint, data=data)
y = json.loads(r.text)

print("Version : {}".format(y["version"]))
print("Analyzed data length : {}".format(len(y["analyzedData"])))
print("Alerts length : {}".format(len(y["Notes"]["Alerts"])))
print("Warnings length : {}".format(len(y["Notes"]["Warnings"])))
print("Limits test length : {}".format(len(y["limitTestResults"])))
limitTest = y["limitTestResults"]
print("Limit test results: {}".format(json.dumps(limitTest, indent=4)))
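The individual Alerts and Warnings inside Notes can also be listed. A minimal sketch, continuing from the example above; the entries are simply printed, whatever their exact structure:

# Print each alert and warning from the Notes object
for alert in y["Notes"]["Alerts"]:
    print("ALERT   : {}".format(alert))
for warning in y["Notes"]["Warnings"]:
    print("WARNING : {}".format(warning))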
Analyze the statistical data. For example, provide feedback on the "quality" of a rate vs. range curve. In addition, compare the results of some statistical tests to pass/fail criteria, if those criteria are supplied.
This call can also be supplied with the original CSV file instead of the intermediate statistical files.
DataCSV | The CSV data to use | String |
LimitsCSV | The CSV containing the limits information, if limits checking is desired | String |
aName | The name of the analysis being called | String |
version | A version number for the API | String |
analyzedData | The statistical data for the data set, structured as a JSON object (same as input) | Object |
Notes | Notes based on analysis of the data. Includes Alerts, Warnings, and Notes | Array |
limitTestResults | Results of limit test analysis, if limits file was provided | Array |
Example usage in python
api_endpoint = "http://octapi.tech/analyze" data = {'DataCSV':csv, 'LimitsCSV':limits,'aName':'analyze_ThroughputVsAttenuation'} r = requests.post(url=api_endpoint, data=data) y = json.loads(r.text) print("Version : {}".format(y["version"])) print("Analyzed data length : {}".format(len(y["analyzedData"]))) print("Alerts length : {}".format(len(y["Notes"]["Alerts"]))) print("Warnings length : {}".format(len(y["Notes"]["Warnings"]))) print("Limits test length : {}".format(len(y["limitTestResults"]))) limitTest = y["limitTestResults"] print("Limit test results: {}".format(json.dumps(limitTest,indent=4)))
Get MCS based on the 802.11 standard.
standard | 802.11 standard: 11n, 11ac, 11ax | String |
RSSI | Received signal strength (dBm) | String |
BW | Bandwidth (MHz): 20, 40, 80, 160 | String |
streams | Number of streams: 1-8 | String |
status | Status of the return | String |
MCS | MCS value | Number |
Example usage in python
api_endpoint = "http://www.octapi.tech/getMCS" standard = "11ax" RSSI = "-50" BW = "80" streams = "2" api_endpoint += "?" + "standard=" + standard api_endpoint += "&" + "RSSI=" + RSSI api_endpoint += "&" + "BW=" + BW api_endpoint += "&" + "streams=" + streams r = requests.get(url=api_endpoint) y = eval(r.text) status = y["status"] MCS = y["MCS"] print("Returned with status {}".format(status)) print("MCS found is {}".format(MCS))
Get data rate based on the 802.11 standard.
standard | 802.11 standard: 11n, 11ac, 11ax | String |
MCS | MCS (see /getMCS) | String |
BW | Bandwidth (MHz): 20, 40, 80, 160 | String |
streams | Number of streams: 1-8 | String |
GI | Guard interval (ns). Depends on standard, but usually 400, 800, 1600, 3200 | String |
tones | Number of tones for OFDMA. Optional, and only valid for 11ax. Should be an allowed number of tones. | String |
status | Status of the return | String |
rate | Data rate | Number |
Example usage in python
api_endpoint = "http://www.octapi.tech/getDRate" standard = "11ax" MCS = "7" BW = "80" streams = "2" GI = "800" api_endpoint += "?" + "standard=" + standard api_endpoint += "&" + "MCS=" + MCS api_endpoint += "&" + "BW=" + BW api_endpoint += "&" + "streams=" + streams api_endpoint += "&" + "GI=" + GI print("The api endpoint is " + api_endpoint) r = requests.get(url=api_endpoint) y = eval(r.text) status = y["status"] rate = y["rate"] print("Returned with status {}".format(status)) print("Rate found is {}".format(rate))
Parse a PCAP file. The parsing (dissection) engine can be either the Expert Analysis "native" engine or TSHARK. Over time we expect TSHARK to become the preferred method.
fileName | Name of the PCAP file. Note that for EA native dissection this must be a .pcap file. For TSHARK, any format (pcap, pcapng, gz) can be used. | String |
pcapDir | Directory location for the file. If the file is in the default (pcapDirectory), this can be ignored. Otherwise, use a full path. | String |
preferTshark | Indicate that tshark should be used for packet dissection, if possible. The current default is FALSE, but this is expected to change. | String, "true" or "false" |
filters | Packet Types on which to filter. Exactly the same as on the UI. | String |
categories | Packet Categories on which to filter. Exactly the same as on the UI. | String |
actions | Packet Action Categories on which to filter. Exactly the same as on the UI. | String |
ToMacs | A comma-separated list of MAC addresses that should be in the Transmitter, or Source, address of the packet. | String |
FromMacs | A comma-separated list of MAC addresses that should be in the Receiver, or Destination, address of the packet. | String |
Logic | "AND" or "OR". Used with the To/From addresses: a packet must match both the To and From addresses ("AND"), or either of them ("OR"). | String |
startAt | The first packet number to use in the analysis. | String |
endAt | The last packet number to use in the analysis. | String |
uploadFiles | The list of files in the default PCAP directory. | Array of strings |
errored | Indicates return error condition | Boolean |
errorMessage | Message in error case | String |
maxPacketProcessed | The number of the last packet analyzed | Integer |
packetsProcessed | The total number of packets analyzed | Integer |
more | Indicates if there is more analysis to be done. This API handles a maximum of 20K packets per call. If more are requested, the analysis will analyze the first 20K, and set more to True | Boolean |
timeMsec | Total analysis time, and time per packet. | Object, {total, perPacket} |
fields | The list of returned packets. This is an array of objects, and the object differs for each packet type (a Trigger, for example, returns its own set of fields). | Array of Objects |
firstTimestamp | The first timestamp in the analyzed data. | Integer |
firstFrameTimestamp | The first frame timestamp in the analyzed data. | Integer |
firstArrivalTime | The arrival time of the first frame in the analyzed data. | Float |
channelSwitches | Channel switch announcement information from beacons | Array |
bssTransitions | DEPRECATED | Empty Array |
roamThisMAC | MAC address for the roaming station in a roaming analysis. | String |
Example usage in python
api_endpoint = "http://localhost:8080/" + "parsePCAP?fileName="+pcapFile+"&pcapDir="+pcapDir apiResponse = requests.get(api_endpoint, timeout=REST_TIMEOUT) returnObject = json.loads(apiResponse.text) totalPackets = returnObject["totalPackets"] thePackets = returnObject["fields"]
These API calls make use of the PCAP parser above, but they go a step further: they provide some built-in analyses of the PCAP files that can be useful in subsequent analysis.
fileName | Name of the PCAP file. Note that for EA native dissection this must be a .pcap file. For TSHARK, any format (pcap, pcapng, gz) can be used. | String (can be a comma-separated list for MLO) |
pcapDir | Directory location for the file. If the file is in the default (pcapDirectory), this can be ignored. Otherwise, use a full path. | String (can be a comma-separated list for MLO) |
preferTshark | Indicate that tshark should be used for packet dissection, if possible. The current default is FALSE, but this is expected to change. | String, "true" or "false" |
noDuplicates | Indicate that the parser should attempt to remove all duplicate packets in the capture. By default this is false. This should only be used if there are multiple sniffers that are expected to be capturing the same packets, and you do not want duplicates to be analyzed. | String, "true" or "false" |
filters | Packet Types on which to filter. Exactly the same as on the UI. (see above) | String |
categories | Packet Categories on which to filter. Exactly the same as on the UI. (see above) | String |
actions | Packet Action Categories on which to filter. Exactly the same as on the UI. (see above) | String |
to | A comma-separated list of MAC addresses that should be in the Transmitter, or Source, address of the packet. | String |
from | A comma-separated list of MAC addresses that should be in the Receiver, or Destination, address of the packet. | String |
logic | "AND" or "OR". Used with the to/from addresses: a packet must match both the to and from addresses ("AND"), or either of them ("OR"). | String |
firstPacket | The first packet number to use in the analysis. | String |
lastPacket | The last packet number to use in the analysis. | String |
direction | The direction of the traffic flow, if that information is needed for the analysis. | String, either "dl" or "ul". |
MLObands | A set of bands that match the input files | Comma separated list |
action | Which of the available "tools" to select. The list follows. | String |
roamAnalyses | Which of the roam time analysis methods to use, when action=getRoaming is selected. The list follows. | String |
The return value varies, depending on the analysis performed. See the specific examples above.
Example usage in python
baseURL = "http://localhost:8080/pcapAPI" api_endpoint = baseURL + "?fileName="+pcapFile+"&pcapDir="+pcapDir+"&firstPacket=1&lastPacket=1000&action=getAIDlist" print(">>>Running the captured file through a PCAP analysis. Get the AID list.") apiResponse = requests.get(api_endpoint, timeout=REST_TIMEOUT) returnObject = json.loads(apiResponse.text) apiListResult = returnObject["result"] print('>> aidsPerBssid: ', apiListResult["aidsPerBssid"])