I need to pare down the number of records returned in our reports. I have already eliminated as many categories as possible, and I have restricted the filtering to HTTP and HTTPS results. The chart that is too long is a table called Detailed Web Browsing Log. It truncates at 10,000 records, at about 400 pages. One possibility would be to design a query that makes sure reporting only grabs one record every 30-40 seconds; as it stands, it grabs every record, up to 10 per second. Pie in the sky: in addition to that, I'd like to get a list of each website's hostname, timestamp, and browsing time. Is there a log besides the traffic log that includes all the right fields? It looks like Traffic contains the browsing time, while the hostname is in Web. Thanks!
Hi there. As I understand it, a browse time for each individual timestamp is not meaningful. That said, per your request, we can truncate the time to minute granularity with the dataset below:
logtype: traffic
select
  date_trunc('minute', from_itime(itime)) as timestamp,
  coalesce(nullifna(hostname), ipstr(`dstip`)) as domain,
  sum($browse_time) as browsetime
from $log
where $filter
group by timestamp, domain
having sum($browse_time) > 0
order by timestamp
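To be clear about what the query does: date_trunc('minute', ...) floors each timestamp to the start of its minute, so a stream of up to 10 records per second collapses to at most one row per domain per minute. A quick sketch of the equivalent truncation in Python (the sample timestamps are illustrative only):

```python
from datetime import datetime

def trunc_minute(ts: datetime) -> datetime:
    # Equivalent of SQL date_trunc('minute', ts): zero out seconds/microseconds.
    return ts.replace(second=0, microsecond=0)

# Three hits, two of them inside the same minute.
hits = [
    datetime(2016, 6, 28, 13, 23, 5),
    datetime(2016, 6, 28, 13, 23, 42),
    datetime(2016, 6, 28, 13, 24, 1),
]
minutes = sorted({trunc_minute(t) for t in hits})
print(minutes)
```

Grouping on the truncated value is what shrinks the row count in the report.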
Then create a Table chart, select this dataset, and set the table type to Drilldown:
1st column: timestamp; 2nd column: domain; 3rd column: browsetime
Check "Bundle rest into 'Others'"
"Order by": timestamp
"Show Top": 0
"Drilldown Top": 5, or specify your own number
If 10,000 records are not enough for you, you can raise the limit to as many as 100,000 via the CLI:
config system report setting
set max-table-rows <num>
regards,
hz
Thanks so much for your help!
I have been working with this to understand how best to present the data our managers need. The only issue with the drilldown report is that it totals every minute, which creates a cluttered report. As a variation on this report, I tried eliminating the Browsing Time column, and I am close to what I am looking for. The next step would be to take a list from the table, for example:
9861 2016-06-28 13:23:00 hangouts.google.com
9862 2016-06-28 13:24:00 hangouts.google.com
9863 2016-06-28 13:25:00 hangouts.google.com
9864 2016-06-28 13:26:00 hangouts.google.com
9865 2016-06-28 13:27:00 hangouts.google.com
9866 2016-06-28 13:28:00 hangouts.google.com
9867 2016-06-28 13:29:00 hangouts.google.com
9868 2016-06-28 13:30:00 hangouts.google.com
9869 2016-06-28 13:31:00 hangouts.google.com
9870 2016-06-28 13:32:00 mail.google.com
9871 2016-06-28 13:32:00 hangouts.google.com
9872 2016-06-28 13:33:00 hangouts.google.com
9873 2016-06-28 13:34:00 hangouts.google.com
9874 2016-06-28 13:35:00 hangouts.google.com
9875 2016-06-28 13:36:00 hangouts.google.com
9876 2016-06-28 13:37:00 hangouts.google.com
9877 2016-06-28 13:37:00 play.google.com
9878 2016-06-28 13:37:00 plus.google.com
and have it list this as one entry until it reaches the next domain name, mail.google.com. The idea would be to avoid the repeats. I would consider extending the table restriction, but then we would be looking at an unmanageable report; it could be thousands of pages long. I am not a PostgreSQL guy (yet); would we use a statement like SELECT DISTINCT in there somehow?
Thanks again for your help!
Dan
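SELECT DISTINCT alone won't quite do it, because it would also merge repeats that are not adjacent (a second hangouts.google.com session later in the day would vanish into the first one). The standard SQL trick for collapsing only consecutive repeats is the "gaps and islands" pattern: number the rows twice, once globally and once per domain, and note that the difference between the two numbers is constant within each consecutive run, so grouping on it collapses each run to one line. Since the FortiAnalyzer report engine uses PostgreSQL-style SQL, the same row_number() window functions should carry over to a dataset, with the per-minute query above as the inner select. Here is a self-contained sketch of the pattern using Python's sqlite3 and a few rows from the sample table (the table name log and column names are illustrative, not FortiAnalyzer schema):

```python
import sqlite3

# Sample rows mirroring the table excerpt above (minute, domain).
rows = [
    ("2016-06-28 13:23:00", "hangouts.google.com"),
    ("2016-06-28 13:24:00", "hangouts.google.com"),
    ("2016-06-28 13:32:00", "mail.google.com"),
    ("2016-06-28 13:32:00", "hangouts.google.com"),
    ("2016-06-28 13:33:00", "hangouts.google.com"),
    ("2016-06-28 13:37:00", "play.google.com"),
]

con = sqlite3.connect(":memory:")
con.execute("create table log (ts text, domain text)")
con.executemany("insert into log values (?, ?)", rows)

# Gaps and islands: the difference between a global row number and a
# per-domain row number is constant within each consecutive run of the
# same domain, so grouping by (domain, grp) collapses each run to one row.
query = """
select min(ts) as first_seen, max(ts) as last_seen, domain
from (
    select ts, domain,
           row_number() over (order by ts, rowid)
         - row_number() over (partition by domain order by ts, rowid) as grp
    from log
) runs
group by domain, grp
order by first_seen, domain
"""
result = list(con.execute(query))
for first_seen, last_seen, domain in result:
    print(first_seen, "->", last_seen, domain)
```

A nice side effect: min(ts) and max(ts) give you the first and last minute of each run, which recovers an approximate per-visit duration without the cluttered per-minute rows.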