Recent Content
Cato Connect Event: AMA with Professional Services
Ever wish you could get direct time with the experts? On June 3rd, 2025 at 11:00 AM EDT, you’ll get just that — a live AMA with two of our Principal Consultants from the Cato Professional Services team.

We’ll cover topics like:
- Designing and implementing a CMA deployment
- Best practices we’ve seen across real-world environments
- Your questions — seriously, bring them

Here’s how to get the most out of it:
- Click here to register, get the calendar invite, and join us live
- Post your questions below in the comments — we’ll answer pre-submitted ones first, before tackling live chat during the session
- See a question you like? Give it a “like” to help it rise to the top

Note: We won’t be able to look at specific CMA instances — demos will be done using internal environments.

That’s it — register, post your questions, and we’ll see you there!

Presenters:
- Principal Consultant, Professional Services, Italy
- Principal Consultant, Professional Services, USA

If you run into any issues, @mention me or email us at [email protected]
Reporting on Max amount of licenses reached

It's rather embarrassing to run out of SDP licenses, as it makes for a negative new-joiner experience when their Cato connection won't come up as expected. Since Cato in their wisdom has decided there is no need to alert admins when the license count is reached (they'd probably rather we waste money purchasing a sufficient surplus of said licenses), is there a way to use the API to query for this status?

Yes, I submitted an RFE for this last year that went nowhere. And yes, I know we can probably hack something together on our end that statically compares the number of licenses to the membership count in our provisioning groups. But this feels like it should be a basic feature of a SaaS service, especially as there is a hard stop when the license count is reached.
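For what it's worth, here is a rough sketch of the "compare licenses to user count" workaround mentioned above, assuming the SDP license total is maintained by hand (I'm not aware of a documented query that returns it directly) and counting provisioned SDP users via the entityLookup query's total field:

import requests

API_URL = "https://api.catonetworks.com/api/v1/graphql2"
HEADERS = {"Content-Type": "application/json", "x-api-key": "<your API key>"}
ACCOUNT_ID = 12345            # your Cato account ID
SDP_LICENSE_COUNT = 500       # assumption: maintained by hand from your contract
ALERT_THRESHOLD = 0.9         # warn at 90% utilisation

# Count provisioned SDP users via entityLookup (type: vpnUser); the query's
# 'total' field is used rather than paging through every item.
query = f"""
query {{
  entityLookup(accountID: {ACCOUNT_ID}, type: vpnUser) {{
    total
  }}
}}
"""
resp = requests.post(API_URL, headers=HEADERS, json={"query": query})
resp.raise_for_status()
total_users = resp.json()["data"]["entityLookup"]["total"]

usage = total_users / SDP_LICENSE_COUNT
print(f"{total_users}/{SDP_LICENSE_COUNT} SDP licenses in use ({usage:.0%})")
if usage >= ALERT_THRESHOLD:
    # Wire this into email, Slack, or your monitoring system of choice.
    print("WARNING: approaching the SDP license limit")

It isn't the native alerting the post asks for, but run on a schedule it at least gives some warning before the hard stop.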
Bypassing Cato via WAN Bypass and Split Tunnel

We need to add around 200 subnets to bypass Cato. My understanding is that they need to be added to all sites under Site Configuration/Router/Bypass/Destination, and for all SDP users via the Access/Client Access Control/Split Tunnel policy. We have nearly 90 sites, and manually adding 200 subnets to 90 sites doesn't seem like a good time. Is this possible via the API? If so, can you point me toward the correct commands?
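Not an answer, but a sketch of how the looping half could look if a suitable mutation exists. It assumes entityLookup accepts type: site the same way it accepts type: vpnUser, and the update_bypass_subnets function is purely a hypothetical placeholder, since I don't know which mutation (if any) exposes the Router/Bypass destination list:

import requests

API_URL = "https://api.catonetworks.com/api/v1/graphql2"
HEADERS = {"Content-Type": "application/json", "x-api-key": "<your API key>"}
ACCOUNT_ID = 12345
SUBNETS = ["10.10.0.0/24", "10.10.1.0/24"]  # the ~200 subnets to bypass

def list_site_ids():
    # Assumption: entityLookup accepts type: site just like type: vpnUser.
    query = f"""
    query {{
      entityLookup(accountID: {ACCOUNT_ID}, type: site) {{
        items {{ entity {{ id name }} }}
        total
      }}
    }}
    """
    resp = requests.post(API_URL, headers=HEADERS, json={"query": query})
    resp.raise_for_status()
    return [item["entity"]["id"]
            for item in resp.json()["data"]["entityLookup"]["items"]]

def update_bypass_subnets(site_id, subnets):
    # HYPOTHETICAL placeholder: confirm the real mutation (if one exists) for
    # Site Configuration / Router / Bypass destinations via the API reference
    # or GraphQL introspection, and put it here.
    print(f"would push {len(subnets)} bypass subnets to site {site_id}")

for site_id in list_site_ids():
    update_bypass_subnets(site_id, SUBNETS)

If no such mutation is exposed, this may still need to be raised as an RFE.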
Setting up SSO with IdPs other than the default nine?

I would like to ask about the possibilities of setting up SSO integration with Identity Providers (IdPs) that are not among the nine default options provided.

- What methods are available for establishing SSO connections with IdPs beyond the default nine?
- Is there a way to configure a generic IdP setting, or can we leverage the existing nine IdP configurations to connect with other IdPs?
- Additionally, is there a process to request a new IdP to be officially supported or added as a connection option?

Any insights or guidance on this would be greatly appreciated. Thank you.

Sincerely,
hisashi
Can Cato API - AuditFeed be used in S3 integration?

Hi Team,

A customer is trying to push audit trail logs to the Amazon S3 integration. Looking at the documentation, I do not see how this is possible. I was wondering if there is any way to accomplish this, or if it requires an RFE.
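One possible workaround while waiting for an answer: poll the audit trail over the GraphQL API and push the results to S3 yourself. A minimal sketch, assuming an auditFeed query is available in your schema and returns records with a fieldsMap, similar to eventsFeed — the query shape and field names are assumptions, so verify them against the API reference first:

import json
import boto3
import requests

API_URL = "https://api.catonetworks.com/api/v1/graphql2"
HEADERS = {"Content-Type": "application/json", "x-api-key": "<your API key>"}
ACCOUNT_ID = "12345"
BUCKET = "my-audit-log-bucket"  # hypothetical bucket name

# Assumption: auditFeed takes accountIDs and a timeFrame and returns records
# with a fieldsMap. Confirm the exact arguments and fields in the schema.
query = f"""
query {{
  auditFeed(accountIDs: ["{ACCOUNT_ID}"], timeFrame: "last.P1D") {{
    accounts {{
      records {{
        fieldsMap
      }}
    }}
  }}
}}
"""
resp = requests.post(API_URL, headers=HEADERS, json={"query": query})
resp.raise_for_status()
payload = resp.json()

# Flatten the records and upload them as a single JSON object to S3.
records = [
    rec["fieldsMap"]
    for acct in payload["data"]["auditFeed"]["accounts"]
    for rec in acct["records"]
]
s3 = boto3.client("s3")
s3.put_object(
    Bucket=BUCKET,
    Key="cato/audit/last-24h.json",
    Body=json.dumps(records).encode("utf-8"),
)
print(f"Uploaded {len(records)} audit records to s3://{BUCKET}")

Run on a schedule (Lambda, cron, etc.), this approximates the built-in events-to-S3 integration for the audit trail.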
How to Delete VPN Users via GraphQL API

Greetings,

I'm working on automating user cleanup and am attempting to delete inactive VPN users via the Cato API. According to the API conventions, I assumed the following mutation would work to remove users from our account:

mutation deleteEntities($accountID: ID!, $entityIDs: [ID!]!) {
  deleteEntities(accountID: $accountID, entityIDs: $entityIDs) {
    success
    failed {
      userID
      reason
    }
  }
}

I'm calling it in Python with:

delete_variables = {
    "accountID": account_id,
    "entityIDs": [uid]
}
delete_response = requests.post(API_URL, headers=HEADERS, json={
    "query": delete_mutation,
    "variables": delete_variables
})

However, I receive the following error in the response:

{
  "errors": [
    {
      "message": "Cannot query field 'deleteEntities' on type 'Mutation'.",
      "extensions": {
        "code": "GRAPHQL_VALIDATION_FAILED"
      }
    }
  ],
  "data": null
}

What I am trying to figure out is:

- Is deleteEntities a valid mutation for deleting VPN users?
- If not, what is the correct GraphQL mutation for deleting users?

Thank you guys!
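The error itself says deleteEntities simply isn't part of the schema your key can see. One way to find out which mutations are actually exposed is a standard GraphQL introspection query — a sketch (whether any user-deletion mutation exists in the public schema is exactly what I'd verify this way rather than assume; note that some endpoints disable introspection):

import requests

API_URL = "https://api.catonetworks.com/api/v1/graphql2"
HEADERS = {"Content-Type": "application/json", "x-api-key": "<your API key>"}

# Standard GraphQL introspection: list every field on the Mutation type.
introspection = """
{
  __schema {
    mutationType {
      fields {
        name
        description
      }
    }
  }
}
"""
resp = requests.post(API_URL, headers=HEADERS, json={"query": introspection})
resp.raise_for_status()
for field in resp.json()["data"]["__schema"]["mutationType"]["fields"]:
    print(field["name"], "-", (field["description"] or "").strip())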
Reporting the wrong category goes nowhere

As per https://support.catonetworks.com/hc/en-us/articles/4413280530449-Customizing-the-Warning-Block-Page:

"The Cato Security team regularly reviews reported wrong categories and validates that the content for the category is correct. When websites or applications belong to the wrong category, the Cato Security team updates the definition of the category."

Not so much. I just went through the last two months of such reports (filter for "Sub-Type Is Misclassification" in the Events log) and found 31 such requests from our users - most were for perfectly legit sites that for some reason were categorized as "Porn". And they still are - every single one of them.

If the Cato Security team is indeed not reviewing these submissions as originally intended, it would be great if that were communicated, so that we can remove that misleading reporting link and take care of the Brightcloud submissions ourselves.
Question regarding EntityID

Hi Team,

We are working with a customer who needs to retrieve a list of users whose last connection exceeds one month. As advised by our Cato regional Sales Engineer, we are attempting to achieve this using the API in two steps:

1. Use query entityLookup to obtain the EntityID (userID)
2. Use query accountSnapshot to retrieve each user's last connection timestamp

However, we're encountering a challenge due to API rate limits. The entityLookup query is limited to 30 requests per minute (or 1500 over 5 hours), which makes it impractical to retrieve EntityIDs for all 2600+ users in a reasonable timeframe.

Below is the Python code we are currently using in our attempt:

import csv
import json
from datetime import datetime, timedelta

import requests

# Cato GraphQL endpoint URL
url = "https://api.catonetworks.com/api/v1/graphql2"

# HTTP headers and API key
headers = {
    "Content-Type": "application/json",
    "x-api-key": "Our client API key"
}

# Query 1: entityLookup to obtain the EntityIDs (userIDs)
query1 = """
query AllMyRemoteUsers {
  entityLookup(accountID: 4265, type: vpnUser) {
    items {
      entity {
        id
        name
      }
      description
    }
    total
  }
}
"""

# Execute Query 1
payload = {"query": query1}
response = requests.post(url, json=payload, headers=headers)
data = response.json()

# Extract the EntityIDs
userIDs = []
try:
    items = data['data']['entityLookup']['items']
    for item in items:
        user_id = int(item['entity']['id'])
        userIDs.append(user_id)
except KeyError as e:
    print(f"Error parsing response: {e}")
    print(json.dumps(data, indent=2))

print(userIDs)

# Build the EntityID list string for the GraphQL query
user_id_list_str = ",".join(str(uid) for uid in userIDs)
print("EntityID extraction complete")

# Query 2: accountSnapshot for each user's last-connection details
query2 = f"""
query accountSnapshot {{
  accountSnapshot(accountID: 4265) {{
    users(userIDs: [{user_id_list_str}]) {{
      info {{
        name
        email
        phoneNumber
        status
        authMethod
        origin
      }}
      lastConnected
      version
    }}
  }}
}}
"""

# Execute Query 2
payload = {"query": query2}
response = requests.post(url, json=payload, headers=headers)

# Parse the Query 2 JSON response
result = response.json()

# Collect users with no connection history in the past month
cutoff_date = datetime.utcnow() - timedelta(days=30)
csv_rows = []
try:
    users = result['data']['accountSnapshot']['users']
    for user in users:
        last_connected_str = user.get('lastConnected')
        if last_connected_str:
            last_connected = datetime.strptime(last_connected_str, "%Y-%m-%dT%H:%M:%SZ")
            if last_connected < cutoff_date:  # last connection is more than 30 days ago
                name = user['info']['name']
                email = user['info']['email']
                csv_rows.append([name, email, last_connected.strftime("%Y-%m-%d %H:%M:%S")])
except KeyError as e:
    print(f"Error extracting user data: {e}")

# Save to CSV
csv_file_path = "users_inactive_for_30_days.csv"
with open(csv_file_path, mode='w', newline='', encoding='utf-8') as file:
    writer = csv.writer(file)
    writer.writerow(["Name", "Email", "Last Connected"])
    writer.writerows(csv_rows)

print(f"\nCSV file saved: {csv_file_path}")

In Query 2, you can see that we need to pass all of the EntityIDs (userIDs) in order to check each user's last-connection info. But because of entityLookup's limit, we were only able to retrieve 30 SDP users' EntityIDs. Could you please let us know if there is another way to get all the EntityIDs (userIDs) via the API, so we can list the users according to their last connection?

Best regards,
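Not an official answer, but one thing worth checking: I believe entityLookup supports paging arguments (limit and from), so fetching, say, 1000 entities per request would cover 2600+ users in about three requests — well under the 30-requests-per-minute limit. A sketch under that assumption (the argument names should be confirmed against the API reference):

import requests

url = "https://api.catonetworks.com/api/v1/graphql2"
headers = {"Content-Type": "application/json", "x-api-key": "Our client API key"}
ACCOUNT_ID = 4265
PAGE_SIZE = 1000  # assumption: entityLookup accepts limit/from paging arguments

def fetch_all_vpn_user_ids():
    user_ids = []
    offset = 0
    while True:
        query = f"""
        query {{
          entityLookup(accountID: {ACCOUNT_ID}, type: vpnUser,
                       limit: {PAGE_SIZE}, from: {offset}) {{
            items {{ entity {{ id }} }}
            total
          }}
        }}
        """
        resp = requests.post(url, headers=headers, json={"query": query})
        resp.raise_for_status()
        lookup = resp.json()["data"]["entityLookup"]
        user_ids.extend(int(item["entity"]["id"]) for item in lookup["items"])
        offset += PAGE_SIZE
        if offset >= lookup["total"]:
            return user_ids

print(len(fetch_all_vpn_user_ids()), "vpnUser EntityIDs retrieved")

The returned list can then be fed into the accountSnapshot step above unchanged.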
Voices Behind the Stack: Nick and Jack of Redner’s

This month, we’re spotlighting two IT leaders who have been keeping a multi-location retail operation at the forefront of cybersecurity for over 20 years, and doing it with unmatched clarity, curiosity, and consistency.

Meet Nick Hidalgo (aka NickH), VP of IT, and Jack Senesap (aka JackSenesap), Director of Infrastructure and Security at Redner’s, a locally owned and family-oriented retail food company in the US. Their secret? A passion for unifying complexity, a love of visibility, and a belief that the right tools and the right people make all the difference.

“We always know where our users are. We can deny access to things by default. That’s huge.” – Jack

“It’s the first tool I look at in the morning. Everything’s in one place.” – Nick

These two were early adopters of SASE from way back when it still sounded like just another buzzword. What changed their minds? Visibility. Simplicity. And the sense that this shift actually reduced complexity instead of adding more. They chose Cato Networks for its performance and security and stayed because it became a trusted part of how they work. “Now we have the resources to continue to improve.”

Why these two stand out:
- They’re always pushing forward: from expanding their TLSi reporting to exploring orchestration and automation.
- They’re deeply curious about AI: not just how it can help, but how it might reshape their roles.
- They’re passionate about their industry and always looking for ways to do more.

Off the clock? Nick is out on the lake or at the gym. Jack is tearing up the trails on his mountain bike or shooting hoops with a crew of all ages. And fun fact: Jack once won a car at a software user conference. (Seriously.)

“Security never sleeps,” Jack says, and hearing about everything he’s accomplishing at work, apparently neither does he.

Huge thanks to Nick and Jack for their time, insights, and everything they do to keep their organization secure and forward-looking. For more Redner’s fun, check out this nifty customer story here.