
Gravity Simulator

14 devlogs
30h 7m 9s

Gravity Simulator is an interactive 3D physics sandbox where you can drop, collide, and experiment with different objects under real-time gravity, then tweak mass, size, and force settings to see how motion and impact change.

This project uses AI

I used GitHub Copilot for basic error solving, for turning my plain README into a professional one, and for improving the web UI.

Demo · Repository


adhikarisubodh999

Shipped this project!

Built a cleaner, tighter Gravity Simulator and learned that small UI alignment tweaks plus better collider setup make a huge difference in how “real” the physics feels.

adhikarisubodh999

Final pass: detailed UI alignment + shape contact fixes

This update is based directly on usability feedback gathered during repeated drop tests in the simulator.

Why the UI was changed:

  • The previous orange-heavy look felt distracting during long testing sessions.
  • The goal was to keep the interface readable but reduce visual noise and preserve more viewport for physics observation.
  • Option lengths were reduced because long fields and wide action buttons were taking too much horizontal space and covering too much of the scene.

What was changed in the UI (detailed):

  • Restored readable text sizes for headings, labels, buttons, and number fields.
  • Reduced control length instead of shrinking typography.
  • Standardized alignment into one compact column per section:
    • labels,
    • number inputs,
    • major action buttons (Drop Nuke / Destruction Mode / Clear All).
  • Fixed button positioning so controls no longer look offset from related text.
  • Kept independent left/right tray collapse behavior and correct handle reappearance side.

Physics fixes in this pass (detailed):

  • Pyramid:

    • Reworked collider setup to avoid center/ground mismatch.
    • Addresses the issue where dropped pyramids could clip deep into the floor.
  • Cone:

    • Updated collider alignment and tip/base parameters for more stable contact.
    • Reduces deep penetration on first impact.
  • Torus:

    • Replaced disc-like collider approximation with a ring-style compound collider.
    • Improves realistic rolling and reduces unrealistic upright balancing.
  • Cylinder and capsule:

    • Collider orientation and shape behavior were tuned to favor horizontal rolling.
    • Additional visual-physics orientation alignment was applied so mesh and body motion match.
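The torus change above can be sketched as a ring of small sphere shapes on one compound body. This assumes cannon-es as the physics library; `torusColliderOffsets`, the radii, and the segment count are illustrative, not the project's actual names:

```javascript
// Approximate a torus with a ring of sphere colliders on one compound body.
// Compute evenly spaced local offsets around the ring (torus lies in the XZ plane).
function torusColliderOffsets(ringRadius, count) {
  const offsets = [];
  for (let i = 0; i < count; i++) {
    const a = (i / count) * Math.PI * 2;
    offsets.push({ x: Math.cos(a) * ringRadius, y: 0, z: Math.sin(a) * ringRadius });
  }
  return offsets;
}

// Usage (assuming cannon-es is available as CANNON):
// const body = new CANNON.Body({ mass: 1 });
// for (const o of torusColliderOffsets(0.6, 8)) {
//   body.addShape(new CANNON.Sphere(0.2), new CANNON.Vec3(o.x, o.y, o.z));
// }
```

Because the hole in the middle is real geometry, a dropped torus can roll on its rim instead of balancing upright on a disc-shaped collider.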

Current result:

  • UI remains compact without tiny text.
  • Labels and buttons are now properly aligned.
  • Core ground-contact issues for pyramid/cone and stability issues for torus/cylinder/capsule are significantly improved for normal sandbox use.
adhikarisubodh999

UI polish and interaction clarity

Ran a polish pass focused on readability and interaction clarity rather than adding new physics features. I tightened spacing, improved hierarchy, and cleaned control presentation so the panel is easier to scan during active simulation.

Added an in-panel shortcut reference to surface common commands without forcing users to check external documentation. I also refined hover, active, and focus-visible states so pointer and keyboard interactions feel consistent and intentional.

For mobile reliability, I added safe-area-aware spacing rules so panel and tray-handle placement remain usable on devices with notches or inset edges. This stage was about making the experience feel finished and predictable across device types.

adhikarisubodh999

Mobile fullscreen control layer

Added a dedicated fullscreen control that handles both enter and exit actions from one button. This removed ambiguity from mobile usage where browser behavior can vary depending on gesture and platform restrictions.

I implemented fullscreen state checks using both standard and WebKit-compatible API paths to keep behavior consistent across different mobile browsers. Fullscreen state change events are wired to UI labels so button text and status indicators always match actual browser state.

I kept automatic fullscreen attempts on mobile startup for convenience, but manual override remains available at all times. That balance keeps the experience smooth when auto-entry works and still reliable when it does not.
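A minimal sketch of the dual-path fullscreen logic, with the document object injected so the state check is easy to exercise outside a browser. `isFullscreen` and `toggleFullscreen` are illustrative names; the `webkit*` members are the real WebKit-prefixed API:

```javascript
// True if either the standard or WebKit-prefixed fullscreen element is set.
function isFullscreen(doc) {
  return !!(doc.fullscreenElement || doc.webkitFullscreenElement);
}

// One button handles both enter and exit, preferring the standard API.
function toggleFullscreen(doc, el) {
  if (isFullscreen(doc)) {
    const exit = doc.exitFullscreen || doc.webkitExitFullscreen;
    return exit.call(doc);
  }
  const enter = el.requestFullscreen || el.webkitRequestFullscreen;
  return enter.call(el);
}

// In the page: toggleFullscreen(document, document.documentElement),
// plus fullscreenchange/webkitfullscreenchange listeners to refresh labels.
```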

adhikarisubodh999

Nuke entity and cooldown flow

Implemented a dedicated nuke path in the object manager with its own spawn profile, mass, and collision behavior. Unlike standard spawned objects, this entity is configured specifically for delayed high-impact interaction and blast-style response.

On strong collision, the nuke calls explodeNuke() and applies a larger-radius impulse with higher force than regular effects. The same flow also triggers corresponding visual feedback so the blast effect is readable and not just a hidden physics event.

To stop rapid retriggers, I added cooldown state with lastNukeDropTime and nukeCooldownMs. dropNuke() now returns success/failure so UI can respond directly: disable the button, show cooldown status, and re-enable controls once deployment is allowed again.
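A minimal sketch of that cooldown gate, reusing the `lastNukeDropTime`/`nukeCooldownMs` names from above; the `state` object, the 5-second value, and `spawnFn` are illustrative:

```javascript
// Cooldown state for the nuke; timestamps are in milliseconds.
const state = { lastNukeDropTime: 0, nukeCooldownMs: 5000 };

// Returns false while cooling down so the UI can disable the button,
// true once the nuke was actually deployed.
function dropNuke(now, spawnFn) {
  if (now - state.lastNukeDropTime < state.nukeCooldownMs) {
    return false;             // still cooling down, nothing spawned
  }
  state.lastNukeDropTime = now;
  spawnFn();                  // create the nuke body/mesh here
  return true;
}
```

The boolean return is what lets the panel show cooldown status and re-enable the button without polling physics internals.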

adhikarisubodh999

Destruction mode workflow

Added destruction mode as a dedicated interaction state so cleanup is deliberate instead of accidental. Users can toggle it from the panel button or with the D shortcut, and both paths update the same internal mode flag to keep behavior consistent.

When the mode is active, clicks use raycast targeting to find the selected mesh, map it back to its tracked object record, and remove both the physics body and rendered mesh together. This preserves object-store integrity and avoids orphaned entities.

I added small on-screen notifications for mode changes so users always know whether they are in edit or destroy behavior. That reduced accidental deletions and made high-density cleanup much faster during collision testing.
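Once the raycast has produced a hit mesh, the paired removal could look roughly like this. `destroyHit`, the `Map` keyed by mesh id, and the `world`/`scene` stand-ins are assumptions; `removeBody`/`remove` mirror the cannon-es and Three.js calls:

```javascript
// Remove a clicked object's physics body and mesh together.
// `objects` maps mesh ids to linked { body, mesh } records.
function destroyHit(hitMesh, objects, world, scene) {
  const record = objects.get(hitMesh.id);
  if (!record) return false;          // clicked something we don't track
  world.removeBody(record.body);      // drop the physics body...
  scene.remove(record.mesh);          // ...and the rendered mesh together
  objects.delete(hitMesh.id);         // keep the store free of orphans
  return true;
}
```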

adhikarisubodh999

Modular architecture refactor

Refactored the implementation from a single script into focused modules for physics, rendering, object management, UI, and interaction handling. The main goal was to reduce coupling and make feature edits safer as the project complexity grows.

Each module now owns a clear responsibility and exposes a small API surface, which keeps cross-module assumptions low. Physics does not directly mutate UI state, UI does not own physics internals, and object lifecycle logic sits in one place instead of being spread across event handlers.

I updated bootstrap initialization so module wiring is explicit in one entry path. This made the code easier to read and made future feature staging much cleaner, because new behavior can be placed in the right module instead of patching random sections.
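As a toy sketch of that explicit wiring (the factory names are stand-ins, not the project's real modules):

```javascript
// Stand-in module factories; the real ones wrap cannon-es, Three.js, etc.
function createPhysics() { return { bodies: [] }; }
function createObjectManager(physics) {
  return { spawn() { physics.bodies.push({}); return physics.bodies.length; } };
}

// One entry path where every dependency is passed in explicitly.
function bootstrap() {
  const physics = createPhysics();                // physics owns body storage
  const objects = createObjectManager(physics);   // lifecycle logic in one place
  return { physics, objects };                    // UI would receive `objects` only
}
```

The point is that nothing reaches across module boundaries; each piece gets its dependencies handed to it in one place.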

adhikarisubodh999

Environment bounds and cleanup logic

Introduced environment controls for ground width, ground height, and void threshold so arena behavior can be tuned without code edits. The width/height inputs are wired into renderer-side ground updates, which means the visible play area reflects new dimensions immediately.

On the simulation side, I added out-of-bounds checks that combine horizontal extents with a vertical threshold. When objects cross those limits, they are marked for cleanup to keep the scene from accumulating distant or unreachable bodies.

Instead of hard deleting on first detection, I added a fade-before-remove pass. That keeps cleanup visually smooth and makes culling feel intentional, especially during heavy stress tests where many objects leave the valid arena at once.
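A rough sketch of the bounds check plus fade-before-remove pass; `updateCulling`, the half-extents, the void threshold, and the fade rate are all illustrative:

```javascript
// Mark out-of-bounds records for fading, fade them over time,
// and drop only the ones that have fully faded out.
function updateCulling(records, halfW, halfD, voidY, dt) {
  for (const r of records) {
    const p = r.body.position;
    const out = Math.abs(p.x) > halfW || Math.abs(p.z) > halfD || p.y < voidY;
    if (out) r.fading = true;                       // mark instead of hard-deleting
    if (r.fading) r.opacity = Math.max(0, r.opacity - dt * 2);
  }
  return records.filter(r => !(r.fading && r.opacity === 0));
}
```

In the renderer, `r.opacity` would be mirrored onto the mesh material so the cull reads as a fade rather than a pop.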

adhikarisubodh999

Runtime stats and collision audio

Added a live stats layer to expose key simulation signals: object count, FPS, and active body count. This gives immediate visibility into scene complexity and helps detect when performance starts degrading as object density increases.

FPS is derived from frame progression and pushed to the UI at a steady readable cadence so numbers stay useful instead of flickering constantly. I kept the implementation lightweight to avoid the stats panel becoming a performance cost itself.
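A minimal version of that frame-derived FPS readout, reported at a fixed cadence (the 500 ms window and the names are illustrative):

```javascript
// Count frames and push a rounded FPS value to the UI twice a second,
// so the label stays readable instead of flickering every frame.
let frames = 0, lastReport = 0, fps = 0;
function onFrame(now, report) {
  frames++;
  if (now - lastReport >= 500) {
    fps = Math.round((frames * 1000) / (now - lastReport));
    frames = 0;
    lastReport = now;
    report(fps);               // push to the stats panel
  }
}
```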

For feedback, I integrated Web Audio API collision pings generated directly in code. Impact strength is mapped to sound parameters so harder collisions sound sharper or stronger. I also kept basic throttling logic so dense collision bursts do not overwhelm audio output.
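The impact-to-sound mapping could be sketched like this; the normalization constant, frequency/gain ranges, and throttle window are illustrative, and actual playback would go through a shared AudioContext:

```javascript
// Map impact speed to ping parameters, returning null when throttled.
let lastPing = 0;
function collisionPingParams(impactSpeed, now, minGapMs = 50) {
  if (now - lastPing < minGapMs) return null;   // throttle dense collision bursts
  lastPing = now;
  const s = Math.min(impactSpeed / 20, 1);      // normalize to 0..1
  return {
    freq: 200 + s * 600,                        // harder hits sound sharper
    gain: 0.05 + s * 0.25,                      // ...and louder
    durationMs: 60 + s * 80,
  };
}

// In the browser: feed `freq`/`gain` into an OscillatorNode + GainNode on a
// shared AudioContext and start/stop the oscillator over `durationMs`.
```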

adhikarisubodh999

Multi-shape object factory

Expanded the object factory beyond sphere and box by adding cylinder, cone, and pyramid creation paths. Each type has dedicated geometry setup so visual output matches user selection and remains easy to identify during collision-heavy scenes.

For physics representation, I used practical collider choices rather than forcing exact complex geometry in every case. That kept simulation stable and avoided excessive computational load, especially when many objects are active at the same moment.

I also tuned default mass and size per type so mixed scenes feel believable and controllable. Without those per-type defaults, some shapes would dominate interactions unnaturally. This stage made the sandbox feel more like a true test environment instead of a single-shape demo.
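A sketch of per-type defaults with practical collider stand-ins; all the values and the `spawnProfile` helper are illustrative, not the project's real numbers:

```javascript
// Per-type spawn profiles: simple colliders stand in for complex geometry
// (a cone gets a cylinder collider, a pyramid gets a box) to keep the
// simulation stable under heavy object counts.
const SHAPE_DEFAULTS = {
  sphere:   { mass: 1.0, size: 0.5, collider: 'sphere' },
  box:      { mass: 1.2, size: 0.5, collider: 'box' },
  cylinder: { mass: 1.5, size: 0.5, collider: 'cylinder' },
  cone:     { mass: 1.0, size: 0.5, collider: 'cylinder' }, // practical stand-in
  pyramid:  { mass: 1.3, size: 0.5, collider: 'box' },      // practical stand-in
};

function spawnProfile(type) {
  const d = SHAPE_DEFAULTS[type];
  if (!d) throw new Error(`unknown shape: ${type}`);
  return { ...d };            // copy so per-spawn tweaks don't mutate defaults
}
```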

adhikarisubodh999

Live physics tuning controls

Added control inputs for gravity, friction, and restitution so simulation behavior can be tuned directly from the UI. I bound input events to physics setters instead of using deferred apply buttons, so changes are visible immediately while objects are still active in the scene.

When friction or restitution changes, I update the contact material configuration used by the physics world to keep collision response consistent. Without this step, values shown in the UI and values actually used by collisions can diverge, which creates confusing test results.

I kept display values synchronized with engine state after each change so the panel always represents the current configuration. That made experimentation much faster because behavior can be adjusted and observed in a single loop without restarting the simulation.
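Binding an input straight to the world's contact material might look like this; `bindRestitution` and the `ui` stand-in are illustrative, while `defaultContactMaterial.restitution` is the actual cannon-es property:

```javascript
// Wire a slider/number input directly to the physics world, no apply button.
function bindRestitution(input, world, ui) {
  input.addEventListener('input', () => {
    const v = parseFloat(input.value);
    world.defaultContactMaterial.restitution = v;  // what collisions actually use
    ui.restitutionLabel = v.toFixed(2);            // keep the display in sync
  });
}
```

Updating the contact material and the label in the same handler is what prevents the UI value and the collision value from diverging.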

adhikarisubodh999

Interactive object spawning

Added interactive spawn actions in the control panel so the simulator can create objects on demand instead of relying on hardcoded test entities. The first implementation covers sphere and box because they are enough to validate different contact profiles and center-of-mass behavior.

Each spawn path creates both layers together: a Cannon body for simulation and a Three.js mesh for rendering. I store them as linked { body, mesh } records in a shared collection so updates, selection, and cleanup all operate from one consistent data shape.

I also added a clear-all path that iterates through tracked records and removes both the physics body and visual mesh safely. That prevents stale references and keeps the scene reusable during repeated testing cycles. This structure became the base for all later object types.
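The linked-record store and clear-all pass could be sketched as follows; the `tracked` array and function names are illustrative, while `addBody`/`removeBody` and `add`/`remove` mirror the cannon-es and Three.js APIs:

```javascript
// One shared collection of linked { body, mesh } records.
const tracked = [];

// Spawn both layers together so they can only exist as a pair.
function track(body, mesh, world, scene) {
  world.addBody(body);
  scene.add(mesh);
  tracked.push({ body, mesh });
}

// Remove every physics body and its visual twin, then reset the store.
function clearAll(world, scene) {
  for (const { body, mesh } of tracked) {
    world.removeBody(body);
    scene.remove(mesh);
  }
  tracked.length = 0;          // no stale references left behind
}
```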

adhikarisubodh999

Physics world integration

Integrated Cannon.js by creating a physics world with gravity and defining a static ground body that acts as the main collision surface. I kept the gravity direction explicit and grounded it against the same visual floor plane used by the renderer, so the physics world and visible world stayed aligned.

To verify the integration, I spawned one dynamic sphere body and paired it with a Three.js mesh. On each frame, the mesh copies the body position and quaternion so physics remains the source of truth for movement and rotation. This avoided split behavior where visuals drift away from simulation state.

I tuned contact material parameters to get predictable bounce and friction on impact. Then I switched stepping to a fixed simulation cadence so behavior remains stable even when rendering cadence fluctuates. That gave much more consistent collisions and stacking behavior in later stages.
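The fixed-cadence stepping can be sketched with a simple accumulator; cannon-es can also do this internally via `world.step(fixedDt, elapsed, maxSubSteps)`, but the manual version makes the idea explicit (the step cap is illustrative):

```javascript
// Step physics at a fixed 60 Hz cadence regardless of render frame rate.
const FIXED_DT = 1 / 60;
let accumulator = 0;

function stepPhysics(world, frameDt) {
  accumulator += frameDt;
  let steps = 0;
  while (accumulator >= FIXED_DT && steps < 5) {   // cap to avoid spiraling
    world.step(FIXED_DT);                          // always the same step size
    accumulator -= FIXED_DT;
    steps++;
  }
  return steps;                                    // mesh sync runs after this
}
```

A slow render frame simply triggers more fixed steps, so collision and stacking behavior stays identical at any frame rate.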

adhikarisubodh999

Base rendering setup

Set up the base Three.js pipeline first so every later feature had a stable visual layer to build on. I initialized THREE.Scene, created a perspective camera with practical near/far values, and attached a WebGL renderer to the page so the canvas always fills the available space cleanly.

For lighting, I used a simple ambient + directional combination. The ambient light keeps faces readable even when they are not directly lit, and the directional light gives enough shadow contrast to make rotation obvious. That made it easier to spot orientation issues immediately during early testing.

I added a single cube and rotated it inside the main animation loop as a sanity check for render cadence, transform updates, and camera framing. I also wired resize handling so camera aspect and renderer dimensions stay synchronized whenever the viewport changes. This stage was intentionally focused on clarity and reliability, not features.
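The resize wiring boils down to a few lines; `onResize` is an illustrative name, while `aspect`, `updateProjectionMatrix()`, and `setSize()` are the actual Three.js members:

```javascript
// Keep camera projection and canvas size in sync with the viewport.
function onResize(camera, renderer, width, height) {
  camera.aspect = width / height;        // keep the projection undistorted
  camera.updateProjectionMatrix();       // required by Three.js after aspect edits
  renderer.setSize(width, height);       // canvas keeps filling the viewport
}

// In the page: window.addEventListener('resize', () =>
//   onResize(camera, renderer, window.innerWidth, window.innerHeight));
```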


Comments

aloyak · 4 days ago

looks great! what physics api are u planning to use? or you’re gonna implement your own?? this is really cool

adhikarisubodh999

I'm new to browser 3D simulation, and this is my first project in the area, so I'm going to build it entirely with HTML, CSS, and JavaScript.