Contents
- Navigation: projects, folders, and the organization node
- IAM in GCP vs AWS (for AWS refugees)
- Core services: Compute Engine, GKE, Cloud Run, BigQuery, Vertex AI
- Billing and budget alerts
- gcloud CLI vs the console: when to use each
- Security: Security Command Center
- Productivity tips
- Next step
- Frequently asked questions
- Can I manage multiple GCP organizations from one console session?
- Is the GCP console available in Spanish or Portuguese?
- Do I need Security Command Center Premium from day one?
- How does project-level billing work if I use a shared VPC?
- What's the fastest way to audit who has access to a project?
- Should we use basic roles like Owner or Editor in production?
Teams moving workloads into Google Cloud often underestimate how different the console's mental model is from AWS or Azure. Projects, folders, and organization nodes are not cosmetic; they shape IAM, billing, and governance from day one. Getting that structure wrong forces painful refactors six months later.
This guide is written for platform leads, cloud architects, and engineering managers who need the GCP console to work as an operational tool, not a clickable inventory. The goal: move faster, spend less time hunting for settings, and keep security and cost under control as the environment scales.
We will walk through navigation, IAM differences if you come from AWS, the core services most teams actually use, billing discipline, when to drop the console for gcloud, and how Security Command Center fits into day-to-day operations.
Navigation: projects, folders, and the organization node
Everything in GCP lives under a resource hierarchy: Organization → Folders → Projects → Resources. The organization node is tied to your Google Workspace or Cloud Identity domain. Folders group projects by business unit, environment, or product line. Projects are the actual billing and IAM boundary — resources (VMs, buckets, datasets) always belong to exactly one project.
A common mistake is running everything in one or two mega-projects. That breaks blast radius isolation and makes IAM messy. A cleaner pattern:
- One folder per environment (prod, staging, dev) or per business unit.
- One project per application or service, per environment. For example: `checkout-prod`, `checkout-staging`, `checkout-dev`.
- Shared services (networking, logging, security tooling) in their own folder with dedicated projects.
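The folder-and-project pattern above maps directly onto a few gcloud commands. A minimal sketch, assuming hypothetical names and IDs — `ORG_ID`, `BILLING_ACCOUNT`, the `prod` folder, and the `checkout-prod` project are placeholders you would replace with your own:

```shell
# Hypothetical identifiers -- substitute your own organization and billing account
ORG_ID=123456789012
BILLING_ACCOUNT=000000-AAAAAA-BBBBBB

# One folder per environment under the organization node
gcloud resource-manager folders create \
  --display-name="prod" --organization=$ORG_ID

# One project per service, per environment
# (FOLDER_ID comes from the output of the command above)
gcloud projects create checkout-prod \
  --folder=FOLDER_ID --name="checkout (prod)"

# Link the project to a billing account so resources can be created in it
gcloud billing projects link checkout-prod \
  --billing-account=$BILLING_ACCOUNT
```

In practice, most teams wrap this in Terraform so the hierarchy itself is version-controlled; the commands are shown here to make the structure concrete.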
The project picker at the top of the console is your main navigation tool. Pin frequently used projects, and use the search bar (press /) to jump to any resource, service, or setting without menu diving.
IAM in GCP vs AWS (for AWS refugees)
If your team is coming from AWS, IAM is where the biggest conceptual shift happens. AWS attaches policies to users, groups, or roles, and policies define allowed actions on resources. GCP inverts this: you grant a principal a role on a resource, and the resource's IAM policy is the source of truth.
Key differences to internalize:
- Principals in GCP include users, groups, service accounts, and domains. Service accounts are first-class identities, not just machine credentials.
- Roles are collections of permissions. There are basic roles (Owner, Editor, Viewer — avoid in production), predefined roles (per service, least-privilege-friendly), and custom roles.
- Inheritance flows down the hierarchy: a role granted at the organization or folder level applies to every project underneath. There is no SCP-style guardrail by default; to constrain what inherited grants can do, you must explicitly configure IAM Deny policies or Organization Policy constraints.
- No policy documents on the principal side. You won't find an "attach policy to user" flow. You go to the resource (or project/folder/org), open IAM, and add the principal with a role.
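The resource-side model described above looks like this in practice. A hedged sketch — the project ID, group, and role below are hypothetical examples:

```shell
# Grant a predefined role to a principal ON the resource (here, a project) --
# there is no "attach policy to user" step as in AWS
gcloud projects add-iam-policy-binding checkout-prod \
  --member="group:checkout-devs@example.com" \
  --role="roles/run.developer"

# Inspect the resource's IAM policy -- the source of truth
gcloud projects get-iam-policy checkout-prod
```

The same `add-iam-policy-binding` verb exists at the folder and organization levels, which is how inherited grants are made.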
For a deeper side-by-side, see our breakdown of GCP vs AWS, which covers networking, IAM, and pricing models in detail.
Core services: Compute Engine, GKE, Cloud Run, BigQuery, Vertex AI
Most production GCP footprints revolve around five services. Knowing when each one fits saves months of rework.
| Service | Best fit | Operational overhead |
|---|---|---|
| Compute Engine | Lift-and-shift VMs, legacy workloads, GPU jobs | High — you manage OS, patching, scaling |
| GKE (Autopilot or Standard) | Microservices, portable workloads, complex orchestration | Medium (Autopilot) to High (Standard) |
| Cloud Run | Stateless HTTP services, event-driven jobs, internal APIs | Low — fully managed, scales to zero |
| BigQuery | Analytics, data warehousing, ad-hoc SQL on TB–PB | Low — serverless, pay per query or slot |
| Vertex AI | Model training, tuning, deployment, and GenAI workflows | Low to Medium depending on custom training |
A practical heuristic: start with Cloud Run for new services. Move to GKE only when you need sidecars, stateful sets, or multi-container orchestration. Keep Compute Engine for workloads that truly need a VM (licensing, GPU, specific kernels). For analytics, BigQuery is almost always the right default — avoid building your own Spark cluster unless you have a documented reason.
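The "start with Cloud Run" heuristic is cheap to act on. A sketch of a typical deployment, assuming a hypothetical service name, image path, and region:

```shell
# Deploy a stateless HTTP service to Cloud Run; it scales to zero when idle
gcloud run deploy checkout-api \
  --image=us-central1-docker.pkg.dev/checkout-prod/apps/checkout-api:v1 \
  --region=us-central1 \
  --no-allow-unauthenticated \
  --max-instances=10
```

If a service later outgrows this model (sidecars, stateful sets), the same container image moves to GKE without a rewrite.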
Vertex AI has consolidated what used to be a fragmented ML stack. Model Garden, Pipelines, and the Gemini API all live under the same console surface, which shortens the path from prototype to production.
Billing and budget alerts
GCP billing is organized by billing accounts, which are linked to one or more projects. A billing account can be a self-serve credit card account or an invoiced account (required for most enterprise customers).
Three console habits keep costs visible:
- Set budget alerts per project or per billing account. Go to Billing → Budgets & alerts. Configure thresholds at 50%, 90%, and 100% of forecasted spend, with email and Pub/Sub notifications so alerts can trigger automation.
- Use labels consistently. Tag every resource with `env`, `team`, `cost-center`, and `app`. Labels flow into the billing export and let you slice costs in BigQuery or Looker Studio.
- Enable the BigQuery billing export. The console-level cost reports are useful for quick checks, but real FinOps work happens on the exported dataset, where you can join usage with labels and build custom dashboards.
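The three habits above can be scripted. A sketch under stated assumptions: the billing account ID, budget amount, dataset, and export table name are hypothetical, and the query assumes the BigQuery billing export is already enabled:

```shell
# Budget with alerts at 50%, 90%, and 100% of a monthly threshold
gcloud billing budgets create \
  --billing-account=000000-AAAAAA-BBBBBB \
  --display-name="checkout-prod monthly" \
  --budget-amount=2000USD \
  --threshold-rule=percent=0.5 \
  --threshold-rule=percent=0.9 \
  --threshold-rule=percent=1.0

# Slice last month's cost by the `team` label in the export table
bq query --use_legacy_sql=false '
SELECT
  (SELECT value FROM UNNEST(labels) WHERE key = "team") AS team,
  ROUND(SUM(cost), 2) AS total_cost
FROM `finops.gcp_billing_export_v1_XXXXXX`
WHERE invoice.month = "202501"
GROUP BY team
ORDER BY total_cost DESC'
```

Pub/Sub notifications on the budget are what let these thresholds trigger automation rather than just email.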
Committed Use Discounts (CUDs) reduce rates in exchange for a commitment, while Sustained Use Discounts apply automatically to eligible Compute Engine usage. Spend-based CUDs for Compute and GKE require an active purchase. Review utilization quarterly; Google's pricing documentation puts typical committed-use savings at roughly 20–55%, depending on term length and service.
gcloud CLI vs the console: when to use each
The console is excellent for exploration, troubleshooting, and one-off administrative tasks. It is the wrong tool for anything repeatable.
Use the console when:
- Exploring a service you haven't used before (the UI surfaces defaults and options).
- Debugging IAM: the Policy Troubleshooter and Policy Analyzer are console-only in practice.
- Reviewing logs, metrics, and Security Command Center findings visually.
Use gcloud (or Terraform) when:
- Creating more than one of anything. If you clicked it twice, script it.
- Managing infrastructure that must be reproducible across environments.
- Running bulk operations — listing all service accounts with a given role, rotating keys, updating firewall rules.
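The bulk-operation case is where gcloud's `--flatten`, `--filter`, and `--format` flags shine. An illustrative example, assuming a hypothetical project ID — list every service account holding the Editor role, a query that is tedious to answer by clicking through the console:

```shell
# Flatten the IAM policy into one row per binding member, then filter
gcloud projects get-iam-policy checkout-prod \
  --flatten="bindings[].members" \
  --filter="bindings.role=roles/editor AND bindings.members:serviceAccount" \
  --format="value(bindings.members)"
```

The same flatten/filter/format pattern works across most `gcloud ... list` and `get-iam-policy` commands, which is what makes scripted audits practical.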
A strong pattern: provision everything via Terraform, use gcloud for operational scripts, and treat the console as read-mostly in production. Cloud Shell (the built-in browser terminal) already has gcloud, kubectl, and terraform preinstalled, which removes friction for quick interventions.
Security: Security Command Center
Security Command Center (SCC) is GCP's native security posture and threat detection platform. It ingests findings from Cloud Asset Inventory, Event Threat Detection, Container Threat Detection, Web Security Scanner, and third-party integrations into a single console.
Two tiers matter: Standard (free, basic misconfigurations and IAM recommendations) and Premium/Enterprise (paid, adds threat detection, vulnerability scanning, compliance reports, and attack path simulation). For any regulated workload — PCI, HIPAA, SOC 2 — Premium is effectively required.
Operationalize SCC by:
- Enabling it at the organization level so findings cover every project.
- Routing high-severity findings to Pub/Sub, then into your SIEM or a ticketing workflow.
- Using the Attack Path view (Premium) to prioritize fixes based on exploitability, not just CVSS score.
- Reviewing the compliance dashboard monthly against your target framework (CIS, NIST, ISO 27001).
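The Pub/Sub routing step above can be set up with a single command. A sketch with hypothetical identifiers (organization ID, project, and topic name):

```shell
# Route high-severity SCC findings to a Pub/Sub topic for the SIEM or
# ticketing workflow to consume
gcloud scc notifications create scc-high-severity \
  --organization=123456789012 \
  --pubsub-topic=projects/sec-tooling/topics/scc-findings \
  --filter='severity="HIGH" OR severity="CRITICAL"'
```

From the topic, a push subscription into your SIEM or a small Cloud Run handler turns findings into tickets automatically.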
SCC replaces a lot of what teams used to stitch together with Forseti, custom scripts, and third-party CSPM tools.
Productivity tips
Small habits compound into hours saved every week:
- Keyboard search: press `/` anywhere in the console to jump to services, resources, or documentation.
- Pin services in the left navigation. Most teams only touch 8–12 services regularly.
- Use Cloud Shell for quick `gcloud` commands without leaving the browser. It persists 5 GB of home directory storage across sessions.
- Create dashboards in Cloud Monitoring scoped by label, not by project, so they survive project reorganizations.
- Bookmark project-specific URLs. Every console page includes the project ID as a query parameter, so direct links work.
- Enable the activity panel (top-right clock icon) to audit recent changes across the project without digging into Cloud Audit Logs.
Next step
If your team is standing up GCP for the first time — or cleaning up an environment that grew faster than its governance — a focused diagnostic will save months of rework. Contact us to book a 30-minute session with a Nivelics cloud architect. Transform faster.
Frequently asked questions
Can I manage multiple GCP organizations from one console session?
Yes. The console picker at the top lets you switch between any organization, folder, or project your account has access to. For multi-org scenarios (M&A, subsidiaries), many teams use a single Cloud Identity domain with folders per entity instead of separate organizations, which simplifies IAM and billing.
Is the GCP console available in Spanish or Portuguese?
Yes. Language is set per user account under preferences and includes Spanish, Portuguese, and most major languages. Resource names, error messages from APIs, and logs remain in English, which is usually preferable for troubleshooting with global teams.
Do I need Security Command Center Premium from day one?
Not always. Start with Standard to catch obvious misconfigurations and IAM anti-patterns. Move to Premium when you have production workloads handling regulated data, or when your security team needs threat detection and attack path analysis.
How does project-level billing work if I use a shared VPC?
Network charges (egress, load balancers) are billed to the project that owns the resource, not the host project of the shared VPC. Plan your labels and budget alerts accordingly so cross-project traffic doesn't show up as a surprise line item.
What's the fastest way to audit who has access to a project?
In the console, go to IAM & Admin → IAM, then use the "View by principals" toggle. For deeper analysis — including inherited roles from folders and the organization — use Policy Analyzer, which exports results to BigQuery for reporting.
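Policy Analyzer is also reachable from the CLI. A hedged sketch, assuming hypothetical organization and user identifiers — this answers "what can this principal access, including grants inherited from folders and the org node":

```shell
# Analyze effective access for one identity across the organization
gcloud asset analyze-iam-policy \
  --organization=123456789012 \
  --identity="user:alice@example.com"
```

For recurring audits, the export-to-BigQuery path mentioned above is the better fit; the CLI form is handy for one-off investigations.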
Should we use basic roles like Owner or Editor in production?
No. Basic roles grant thousands of permissions and violate least privilege. Use predefined roles scoped to each service, and create custom roles when predefined ones are too broad. Reserve Owner for break-glass accounts protected by strong controls.
Need to optimize your cloud infrastructure?
Schedule a free assessment with our team.
Talk to an expert