Lightweight Multithreaded Architectures (LIMA) advance latency-hiding capability by providing a platform for the execution of many concurrent lightweight threads. Fine-grained management of these threads must impose very low overhead, since the threads themselves are created, execute, and are destroyed very quickly. Several decision policies are presented and evaluated here in the context of behavioral trends across a range of application characteristics. These policies are evaluated via simulation tools created specifically to model the novel aspects of LIMA as they relate to thread management. Two policies, RMA and Block, use simple marking mechanisms to guide thread management according to memory access behavior, and show improved performance as the number of remote accesses in an application increases. A third policy, Overflow, alters the priority with which newly spawned threads are scheduled, and a fourth, PFail-CBlock, manages threads based on producer-consumer semantics. Overflow and PFail-CBlock positively impact the performance of frequent group synchronization operations.
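To make the marking idea concrete, the sketch below models a Block-style policy in miniature: a thread that issues a remote memory access is marked and parked until the access completes, so the scheduler dispatches only runnable work. This is a minimal illustration under assumed semantics; all names (Thread, Scheduler, on_remote_access, and so on) are hypothetical and do not reflect the simulator's actual interfaces.

```python
# Minimal sketch of a Block-style marking policy (illustrative only;
# names and structure are assumptions, not the LIMA simulator's API).
from collections import deque

class Thread:
    def __init__(self, tid):
        self.tid = tid
        self.blocked = False  # marking bit: set while a remote access is outstanding

class Scheduler:
    def __init__(self):
        self.ready = deque()  # runnable threads
        self.waiting = {}     # tid -> Thread, parked on an outstanding remote access

    def spawn(self, thread):
        self.ready.append(thread)

    def on_remote_access(self, thread):
        # Mark the thread and park it; no scheduling cycles are spent on it.
        thread.blocked = True
        self.waiting[thread.tid] = thread

    def on_access_complete(self, tid):
        # Clear the mark and return the thread to the ready pool.
        thread = self.waiting.pop(tid)
        thread.blocked = False
        self.ready.append(thread)

    def next_thread(self):
        # Only unmarked (runnable) threads are ever dispatched.
        return self.ready.popleft() if self.ready else None
```

Under this scheme the benefit grows with the fraction of remote accesses: each long-latency access removes its thread from scheduling consideration entirely, which is consistent with the reported trend that the marking policies improve performance as remote accesses increase.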