Fhatur Robby Tanzil Herris
Hondor Saragih
Anindito Anindito

Abstract

Security Operations Center (SOC) analysts face significant delays due to "swivel-chair analysis," a manual and fragmented process for triaging Indicators of Compromise (IoCs). This study addresses that inefficiency by developing "CyberGuardianBot," an automated ChatOps assistant built using the Rapid Application Development (RAD) methodology and the Telegram Bot API. Applying Security Orchestration, Automation, and Response (SOAR) principles, the system asynchronously orchestrates multi-source intelligence from VirusTotal, AbuseIPDB, URLScan.io, AlienVault OTX, and MobSF. A key novelty is the integration of Google Gemini to perform cognitive synthesis, translating raw API data into actionable insights. Black-box testing validated the system across 15 test cases, confirming the successful automation of URL, IP, and file triage. The bot generates natural-language executive summaries and structured reports (.txt and .pdf), significantly enhancing the speed and accuracy of the triage process while reducing the cognitive load on analysts.
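The asynchronous fan-out-and-synthesize pattern the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the query functions are hypothetical stubs standing in for real HTTP calls to VirusTotal, AbuseIPDB, and URLScan.io, and a template string stands in for the Gemini-generated executive summary.

```python
import asyncio

# Hypothetical stand-ins for the real threat-intel lookups; each would
# normally be an authenticated HTTP request returning a vendor verdict.
async def query_virustotal(ioc: str) -> dict:
    await asyncio.sleep(0)  # placeholder for network latency
    return {"source": "VirusTotal", "malicious": True}

async def query_abuseipdb(ioc: str) -> dict:
    await asyncio.sleep(0)
    return {"source": "AbuseIPDB", "malicious": True}

async def query_urlscan(ioc: str) -> dict:
    await asyncio.sleep(0)
    return {"source": "URLScan.io", "malicious": False}

async def triage(ioc: str) -> dict:
    # Fan out to all intelligence sources concurrently, then merge.
    results = await asyncio.gather(
        query_virustotal(ioc),
        query_abuseipdb(ioc),
        query_urlscan(ioc),
    )
    flagged = [r["source"] for r in results if r["malicious"]]
    verdict = "malicious" if flagged else "clean"
    # In the paper, the merged evidence is handed to Google Gemini to
    # produce a natural-language summary; a template stands in here.
    summary = f"{ioc}: {verdict} ({len(flagged)}/{len(results)} sources flagged)"
    return {"ioc": ioc, "verdict": verdict, "flagged_by": flagged, "summary": summary}

report = asyncio.run(triage("198.51.100.7"))
print(report["summary"])
```

Because the lookups run concurrently rather than sequentially, total triage latency approaches that of the slowest single source, which is the core speed gain over manual swivel-chair querying.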

Article Details

How to Cite
Herris, F. R. T. ., Saragih, H., & Anindito, A. (2025). Generative AI and multi-source intelligence for automated security triage. Journal of Intelligent Decision Support System (IDSS), 8(4), 237-243. https://doi.org/10.35335/idss.v8i4.326