Chunking has emerged as a basic property of human cognition. Computationally, chunking has been proposed as a process for compressing information; it has also been identified in neural processes in the brain and used in models of those processes. Our purpose in this paper is to expand understanding of how chunking affects both learning and performance using the Computational-Unified Learning Model (C-ULM), a multi-agent computational model. Chunks in C-ULM long-term memory result from updating concept connection weights via statistical learning. Concept connection weight values move toward the accurate weight value needed for a task, and a confusion interval reflecting certainty in the weight value is shortened each time a concept is attended in working memory and each time a task is solved; conversely, the confusion interval is lengthened when a chunk is not retrieved over a number of cycles and each time a task solution attempt fails. The dynamic tension between these updating mechanisms allows chunks to come to represent the history of relative frequency of co-occurrence for the concept connections present in the environment, thereby encoding the environment's statistical regularities in the long-term memory chunk network. In this paper, the computational formulation of chunking in the C-ULM is described, followed by results of simulation studies examining the impact of chunking versus no chunking on agent learning and agent effectiveness. Conclusions and implications of the work, both for understanding human learning and for applications within cognitive informatics, artificial intelligence, and cognitive computing, are then discussed.
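The opposing update mechanisms described above can be sketched in a minimal form. The following Python sketch is an illustration under assumed names and constants (`Chunk`, `attend`, `decay`, the learning rate, and the shrink/grow factors are all hypothetical), not the C-ULM implementation: attention and task success move a weight toward its accurate value and narrow the confusion interval, while non-retrieval and failed solution attempts widen it.

```python
# Hypothetical sketch of the chunk-updating dynamic; all names and
# constants are illustrative assumptions, not the C-ULM source.

class Chunk:
    """A concept connection with a weight and a confusion interval."""

    def __init__(self, weight=0.5, confusion=1.0):
        self.weight = weight        # current connection weight estimate
        self.confusion = confusion  # width of the confusion interval (uncertainty)

    def attend(self, target_weight, rate=0.1, shrink=0.9):
        """Attending in working memory (or solving a task) moves the weight
        toward the accurate value and shortens the confusion interval."""
        self.weight += rate * (target_weight - self.weight)
        self.confusion *= shrink

    def decay(self, grow=1.1, cap=1.0):
        """Non-retrieval over cycles (or a failed solution attempt)
        lengthens the confusion interval, up to a cap."""
        self.confusion = min(cap, self.confusion * grow)


chunk = Chunk()
for _ in range(20):              # repeated attention during successful tasks
    chunk.attend(target_weight=0.8)
# The weight has moved toward the accurate value and uncertainty has shrunk.
assert abs(chunk.weight - 0.8) < abs(0.5 - 0.8)
assert chunk.confusion < 1.0
for _ in range(5):               # cycles without retrieval
    chunk.decay()
assert chunk.confusion > 0.9 ** 20
```

The tension between `attend` and `decay` is what lets the stored weights track the relative frequency of co-occurrence in the environment: frequently used connections stay certain, while unused ones drift back toward uncertainty.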