Lesson 7.3: Testing, Validation & Reflection
Validate Your Implementation
Now comes the critical step: Does your implementation match your specification?
This is not casual testing. This is systematic validation. Every success criterion from Lesson 7.1 becomes a test case. Every claim in your spec is verified.
Validation Checklist
Go through each success criterion from your specification. For each one, write a test.
Template for Each Criterion
✅ Success Criterion: [Copy from your spec, Lesson 7.1]
Test Method:
[How you will test this criterion with ROS 2 CLI tools]
Expected Outcome:
[What should happen if spec is met]
Actual Outcome:
[What actually happened when you tested]
Pass/Fail: [YES or NO]
If Failed:
Issue: [What went wrong?]
Root Cause: [Why?]
Fix Applied: [How did you fix it?]
Re-Test Result: [Did it pass after fix?]
Example Validation: Turtle Navigation
Let's work through a complete example. Assume your spec says:
Success Criterion: "Turtle reaches goal position within 2 seconds of service call"
Test 1: Basic Navigation
# Terminal 1: Start turtlesim
ros2 run turtlesim turtlesim_node
# Terminal 2: Start your robot controller
ros2 launch robot_controller start_system.launch.py
# Terminal 3: Call navigation service (the `time` prefix measures wall-clock duration)
time ros2 service call /navigate_to robot_controller/srv/NavigateTo "{x: 5.0, y: 5.0, theta: 0.0}"
Expected: Service returns in < 2 seconds, turtle moves to position
Actual Result (if everything works):
response:
robot_controller.srv.NavigateTo_Response(success=True, time_taken=1.87)

real	0m1.870s
Pass: ✅ YES (1.87s < 2.0s)
Test 2: Multiple Goal Calls
# Call multiple goals in sequence to test robustness
# (append .0 so YAML parses the coordinates as floats, matching the float64 fields)
for i in {1..5}; do
  time ros2 service call /navigate_to robot_controller/srv/NavigateTo "{x: $((RANDOM % 10)).0, y: $((RANDOM % 10)).0, theta: 0.0}"
  sleep 1
done
Expected: All 5 calls succeed within time limit
Pass/Fail: ✅ or ❌ (check that each response shows success=True)
Test 3: Status Topic Frequency
Specification says: "Status published every 100ms (±10ms tolerance)"
# Quick check: measure the publish rate directly
ros2 topic hz /robot/status
# Or log a timestamp per message; `ros2 topic echo` prints "---" after each message
# (strftime requires gawk)
ros2 topic echo /robot/status | awk '/^---$/ {print NR, strftime("%T")}' > status_log.txt
# Let it run for 10 seconds, then analyze: count messages, compute intervals
Expected: ~100 messages in 10 seconds (10 Hz), intervals ≈ 100ms
Analysis:
# Rough check: tail -20 status_log.txt should show regular intervals
# More precise: compute mean/std of the intervals with a short Python script (below)
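Here is the kind of script that second comment refers to: a minimal sketch (the filename interval_stats.py is illustrative, not part of the capstone package). It timestamps each "---" separator as it arrives on stdin. Pipe buffering can smear individual timings slightly, so treat ros2 topic hz as the authoritative rate check.

#!/usr/bin/env python3
# interval_stats.py
# Usage: ros2 topic echo /robot/status | python3 interval_stats.py
# Let it run ~10 seconds, then press Ctrl+C to print the statistics.
import statistics
import sys
import time

timestamps = []
try:
    for line in sys.stdin:
        if line.startswith("---"):  # `ros2 topic echo` prints "---" after each message
            timestamps.append(time.monotonic())
except KeyboardInterrupt:
    pass

intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
if len(intervals) < 2:
    sys.exit("Too few messages captured; let it run longer.")
print(f"messages:       {len(timestamps)}")
print(f"mean interval:  {statistics.mean(intervals) * 1000:.1f} ms")  # spec target: 100 ms
print(f"std deviation:  {statistics.stdev(intervals) * 1000:.1f} ms")  # spec tolerance: ±10 ms

A pass here is a mean near 100 ms with the spread comfortably inside the ±10 ms tolerance.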
Test 4: Obstacle Handling
Specification says: "System continues operating when 3+ obstacles detected"
# Count obstacle messages over a 30-second window
timeout 30 ros2 topic echo /obstacles | grep -c "ranges:"
# Should see detections published regularly (5 Hz = ~150 messages in 30s)
# More importantly: did the system crash or error? (Check node status)
ros2 node list  # Should still show all 3 nodes running
Expected:
- Obstacle messages published regularly
- No errors in node logs
- Navigator still responds to service calls
Pass: ✅ if all nodes still running, no error logs
Systematic Testing Script
Create a test script that validates all criteria:
robot_controller_test.sh:
#!/bin/bash
echo "=== TURTLE CONTROLLER SYSTEM VALIDATION ==="
echo ""

# Color codes for pass/fail
GREEN='\033[0;32m'
RED='\033[0;31m'
NC='\033[0m' # No Color

PASS_COUNT=0
FAIL_COUNT=0

# Test 1: Node availability
echo "Test 1: All nodes running..."
NODES=$(ros2 node list)
if echo "$NODES" | grep -q "navigator" && \
   echo "$NODES" | grep -q "status_monitor" && \
   echo "$NODES" | grep -q "obstacle_detector"; then
    echo -e "${GREEN}✅ PASS${NC}: All nodes running"
    ((PASS_COUNT++))
else
    echo -e "${RED}❌ FAIL${NC}: Not all nodes found"
    ((FAIL_COUNT++))
fi
echo ""

# Test 2: Topics available
echo "Test 2: Topics published..."
TOPICS=$(ros2 topic list)
if echo "$TOPICS" | grep -q "/robot/status" && \
   echo "$TOPICS" | grep -q "/obstacles" && \
   echo "$TOPICS" | grep -q "/turtle1/cmd_vel"; then
    echo -e "${GREEN}✅ PASS${NC}: All topics available"
    ((PASS_COUNT++))
else
    echo -e "${RED}❌ FAIL${NC}: Expected topics not found"
    echo "Available topics: $TOPICS"
    ((FAIL_COUNT++))
fi
echo ""

# Test 3: Service available
echo "Test 3: Navigation service..."
SERVICES=$(ros2 service list)
if echo "$SERVICES" | grep -q "/navigate_to"; then
    echo -e "${GREEN}✅ PASS${NC}: /navigate_to service available"
    ((PASS_COUNT++))
else
    echo -e "${RED}❌ FAIL${NC}: /navigate_to service not found"
    ((FAIL_COUNT++))
fi
echo ""

# Test 4: Service responds
echo "Test 4: Service responds to call..."
RESPONSE=$(timeout 5 ros2 service call /navigate_to robot_controller/srv/NavigateTo "{x: 2.0, y: 2.0, theta: 0.0}" 2>&1)
# The CLI prints the response as a Python-style repr, e.g. "...Response(success=True, ...)"
if echo "$RESPONSE" | grep -q "response:"; then
    if echo "$RESPONSE" | grep -q "success=True"; then
        echo -e "${GREEN}✅ PASS${NC}: Service succeeded"
        ((PASS_COUNT++))
    else
        echo -e "${RED}❌ FAIL${NC}: Service returned success=False"
        ((FAIL_COUNT++))
    fi
else
    echo -e "${RED}❌ FAIL${NC}: Service did not respond"
    ((FAIL_COUNT++))
fi
echo ""

# Test 5: Status topic publishes
echo "Test 5: Status messages flowing..."
COUNT=$(timeout 3 ros2 topic echo /robot/status 2>/dev/null | grep -c "x:")
if [ "$COUNT" -gt 0 ]; then
    echo -e "${GREEN}✅ PASS${NC}: Received $COUNT status messages in 3 seconds"
    ((PASS_COUNT++))
else
    echo -e "${RED}❌ FAIL${NC}: No status messages received"
    ((FAIL_COUNT++))
fi
echo ""

# Summary
echo "=== VALIDATION SUMMARY ==="
echo -e "Passed: ${GREEN}$PASS_COUNT${NC}"
echo -e "Failed: ${RED}$FAIL_COUNT${NC}"
echo ""
if [ "$FAIL_COUNT" -eq 0 ]; then
    echo -e "${GREEN}✅ ALL TESTS PASSED${NC}"
    exit 0
else
    echo -e "${RED}❌ SOME TESTS FAILED${NC}"
    exit 1
fi
Run it (with the system from Lesson 7.2 already launched in another terminal):
chmod +x robot_controller_test.sh
./robot_controller_test.sh
Debugging Integration Issues
When tests fail, debug systematically:
Symptom: Service call times out
Diagnosis:
# Is navigator node running?
ros2 node list | grep navigator
# Is service registered?
ros2 service list | grep navigate_to
# Does the node advertise the service? (ros2 node info lists its topics and services)
ros2 node info /navigator
Fix:
- Check the node started without error: run the launch file in the foreground and watch its output for exceptions:
ros2 launch robot_controller start_system.launch.py
- Check the service callback is registered (add logging in code; see the sketch below)
- Check dependencies: import statements, message types available
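As an illustration of that second bullet, here is a minimal sketch of the logging you might add. It assumes the NavigateTo service type and node names this lesson uses; adjust to your actual package.

import rclpy
from rclpy.node import Node

from robot_controller.srv import NavigateTo  # your custom service from Lesson 7.2


class Navigator(Node):
    def __init__(self):
        super().__init__('navigator')
        self.srv = self.create_service(NavigateTo, '/navigate_to', self.navigate_callback)
        # If this never appears in the launch output, the node died before registering
        self.get_logger().info('/navigate_to service registered')

    def navigate_callback(self, request, response):
        # If this never logs, requests are not reaching the node (name/type mismatch)
        self.get_logger().info(f'goal received: x={request.x:.1f}, y={request.y:.1f}')
        # ... navigation logic from Lesson 7.2 ...
        response.success = True
        return response


def main():
    rclpy.init()
    rclpy.spin(Navigator())


if __name__ == '__main__':
    main()

The same pattern diagnoses the next symptom too: log once after create_timer(0.1, ...) in the status monitor and once inside its timer callback.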
Symptom: Status topic not publishing
Diagnosis:
# Is status_monitor node running?
ros2 node list | grep status_monitor
# Is topic created?
ros2 topic list | grep robot/status
# Check publisher in code is using correct topic name
grep "create_publisher" status_monitor_node.py
Fix:
- Verify the custom message compiled:
colcon build --packages-select robot_controller
- Check the timer callback is registered (add logging; the navigator sketch above shows the pattern)
- Verify message fields match the specification
Symptom: Turtle doesn't move
Diagnosis:
# Is cmd_vel topic being published?
ros2 topic list | grep cmd_vel
# Test publisher manually
ros2 topic pub --once /turtle1/cmd_vel geometry_msgs/msg/Twist "{linear: {x: 0.5}}"
# Check turtle responds to manual command
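If the manual command moves the turtle but your navigator does not, check whether the navigator is actually connected as a publisher:

# Shows the message type plus publisher and subscription counts
ros2 topic info /turtle1/cmd_vel

A publisher count of 0 means the navigator never created its publisher, or created it under a different topic name.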
Fix:
- Verify turtlesim is running:
ros2 run turtlesim turtlesim_node
- Check the velocity magnitude in the navigator node
- Ensure the topic name matches exactly: /turtle1/cmd_vel (including the /turtle1 namespace)
Reflection: Learning Crystallization
Now that your system is built and validated, reflect. This is where learning becomes understanding.
Reflection Questions
1. Specification Quality
- Did your specification anticipate all the implementation details you discovered?
- What did you miss initially?
- What was clear that made implementation smooth?
Example answer: "I missed that I needed to track state in Navigator. The spec didn't say where state comes from. In real systems, this would be from odometry."
2. Composition vs Creation
- Did you actually create new code, or compose existing patterns?
- Which Chapter 4-6 skills did you use most?
- What would you refactor if you had more time?
Example answer: "I mostly used Chapter 4's publisher pattern and Chapter 6's launch files. The hardest part was understanding how service callbacks work. I'd refactor to handle multiple concurrent goals."
3. Design Trade-Offs
- Why did you choose service for navigation instead of topic?
- Why did you use topic for status instead of service?
- What would change if you needed lower latency?
Example answer: "Services are request/response, which matched 'accept goal, return completion status.' Topics are streams, which matched 'continuous status updates.' If we needed lower latency, we'd use DDS over UDP directly instead of ROS middleware."
4. Failure Points
- What went wrong during integration?
- What would you test earlier?
- What prerequisite knowledge was missing?
Example answer: "Integration was hard because I didn't understand node lifecycle. Next time I'd test communication paths earlier. I should have learned about rclpy.spin() better in Chapter 4."
5. System Thinking vs Node Thinking
- How did your perspective change from Chapter 4 to Chapter 7?
- In Chapter 4, you built one publisher. Here you coordinated 3 nodes. What's different?
- How would you explain 'system architecture' to someone who just finished Chapter 3?
Example answer: "In Chapter 4, I thought 'how do I make a publisher?' In Chapter 7, I think 'how do these 3 nodes talk to each other?' The difference is thinking about interfaces and contracts, not just individual components."
Document Your Reflection
Write a 1-2 page reflection (500-800 words) covering:
- What worked well (2-3 specific things)
- What was harder than expected (2-3 specific challenges)
- How you solved problems (debugging approach, who helped, resources)
- What you'd do differently (design changes, architectural improvements)
- What came next (Module 2 preview—what concepts are you ready for?)
Module 2 Preview: What's Next
You've completed Module 1. You understand:
- ✅ ROS 2 architecture (nodes, topics, services)
- ✅ Multi-node systems
- ✅ Specification-first design
- ✅ System integration and debugging
Module 2 builds on this with:
1. Robot Description (URDF)
What: XML format describing robot structure, joints, links, sensors
Why: To simulate realistic robot kinematics and visually represent your robot
Connection to Module 1: You commanded turtlesim directly. In Module 2 you'll command a described robot in a physics simulator.
Layer: L1 (Manual foundation of URDF syntax) → L2 (AI helps build descriptions) → L3 (Parameterized URDF skills)
2. Physics Simulation (Gazebo/Unity)
What: Full physics simulation with gravity, collisions, realistic sensor output
Why: Safer testing before real robots; realistic feedback for control algorithms
Connection to Module 1: In Module 1 you used turtlesim (kinematic simulation); in Module 2 you'll use Gazebo (physics simulation).
Layer: L1 (Understanding simulation gap) → L2 (Building worlds and spawning robots) → L3 (Reusable world templates)
3. Perception (Sensors)
What: Processing camera images, LIDAR point clouds, IMU data
Why: Robots see and feel their environment; you need to process this data
Connection to Module 1: You simulated obstacles as data. In Module 2 you'll generate realistic sensor data from simulation.
Layer: L1 (Understanding sensor data formats) → L2 (Processing with OpenCV, point cloud libraries) → L3 (Reusable perception pipelines)
4. Navigation Stack
What: Path planning, obstacle avoidance, autonomous movement
Why: Your robot can now move to goals autonomously, not just respond to commands
Connection to Module 1: You planned movement manually. In Module 2 the robot plans and executes autonomously.
Layer: L1 (Understanding A*, graph search) → L2 (Using ROS 2 Nav2 stack) → L3 (Customizing planners and controllers)
5. Simulation-to-Real Transfer
What: Deploying code from simulation to physical robots
Why: Physical robots are expensive and dangerous; you validate in simulation first
Connection to Module 1: Everything in Module 1 was simulation. In Module 2 you'll understand the gap between Gazebo and Jetson/physical robots.
Layer: L1 (Understanding sim-to-real challenges) → L2 (Transferring code, handling differences) → L3 (Building robust, transferable systems)
Your ROS 2 Journey
Reflect on where you've come:
| Module | Focus | Your Skill |
|---|---|---|
| Ch 1-2 | Understanding robotics | Conceptual foundation |
| Ch 3 | Exploring ROS 2 | Architecture awareness |
| Ch 4 | Publisher/subscriber | Communication patterns |
| Ch 5 | Services & messages | RPC and custom types |
| Ch 6 | Parameters & launch | System orchestration |
| Ch 7 | Specification & integration | Professional development |
| Module 2 | Robot description & simulation | Realistic environments |
| Module 3 | AI-robot integration | Embodied intelligence |
| Module 4 | Humanoid VLA | Full autonomous systems |
Try With AI
Setup: Use a chat AI for reflection and planning.
Reflection Prompt:
I've completed Module 1's capstone. Here's what my system does:
[describe your 3-node controller]
Here's what was harder than expected:
[describe challenges]
Reflecting on what I learned: [draft 1-2 paragraph reflection]
Does this reflection show deep learning, or am I missing key insights?
What should I add?
Module 2 Readiness Prompt:
I've completed Module 1 (ROS 2 fundamentals). For Module 2 (URDF and Gazebo),
I'm wondering: What's the biggest conceptual jump?
Here's what I understand from Module 1:
[list concepts]
What should I focus on in Module 2 to build on this?
Expected Outcome:
- Reflection document capturing learning from Module 1
- Clarity on what's coming in Module 2
- Readiness to move from simulated turtles to described robots
Safety Note: In Module 2, you'll work with real physics simulation. Gazebo is much more realistic than turtlesim. Your old assumptions about movement speed and control might not hold. Use simulation to learn those differences safely before physical deployment.
Validation Completion Checklist
Before you consider the capstone complete:
- Specification written (Lesson 7.1) - Clear, unambiguous, testable
- System implemented (Lesson 7.2) - All 3+ nodes running, integrated
- All success criteria tested (Lesson 7.3) - Each one validated systematically
- No integration failures - Debugging complete, system stable
- Reflection written - 500-800 words covering learning crystallization
- System launchable - A single ros2 launch command starts everything
- Specification matches implementation - Design decisions documented
- You can explain it - Architecture, trade-offs, design reasoning clear
Module 1 Complete
Congratulations. You've gone from "What's ROS 2?" to "I can architect and build multi-node robot systems."
From Chapter 1's conceptual foundation to Chapter 7's specification-driven integration.
You've learned:
- How robots see and move (Ch 1-2)
- How ROS 2 enables communication (Ch 3)
- How to publish and subscribe (Ch 4)
- How to create services and interfaces (Ch 5)
- How to orchestrate systems with parameters and launch (Ch 6)
- How to design and validate systems from specifications (Ch 7)
And you've applied the 4-Layer Teaching Method:
- Manual foundation: Understanding concepts deeply
- AI collaboration: Building with AI partnership
- Intelligence design: Creating reusable skills
- Specification-driven integration: Orchestrating at scale
Try With AI (Final Capstone Prompt)
Setup: One final conversation with your AI co-learner about the entire module.
Capstone Synthesis Prompt:
I've completed Module 1 of a ROS 2 curriculum:
- Ch 1: What is Physical AI
- Ch 2: The Robot System
- Ch 3: Meet ROS 2
- Ch 4: Your First ROS 2 Code (publishers/subscribers)
- Ch 5: Communication Mastery (services/messages)
- Ch 6: Building Systems (parameters/launch)
- Ch 7: Capstone (specification-driven multi-node system)
Here's what I built for the capstone:
[describe your system]
Reflecting on the whole module: What's the through-line?
What's the core idea that connects all 7 chapters?
How does this prepare me for Module 2?
Expected Response: Your AI should help you see the architecture of learning itself—how manual foundation enables AI collaboration, how composition enables system thinking, how specification-first is the capstone skill.
You're ready for Module 2.
The robots get more realistic. The systems get more complex. The intelligence compounds.
But the foundational skills you've learned—communication patterns, system architecture, specification-first design—those are the bedrock everything else builds on.
Welcome to embodied AI development.