Professional Data Engineer Sample Questions
The Data Engineer sample questions will familiarize you with the format of exam questions and example content that may be covered on the exam.

The sample questions do not represent the range of topics or level of difficulty of questions presented on the exam. Performance on the sample questions should not be used to predict your Data Engineer exam result.
You are building storage for files for a data pipeline on Google Cloud. You want to support JSON files. The schema of these files will occasionally change. Your analyst teams will run aggregate ANSI SQL queries on this data. What should you do?
Feedback
B is correct because of the requirement to support occasionally (schema) changing JSON files and aggregate ANSI SQL queries: you need to use BigQuery, and it is quickest to use 'Automatically detect' for schema changes.
A is not correct because you should not provide format files: you can simply turn on the 'Automatically detect' schema changes flag.
C, D are not correct because you should not use Cloud Storage for this scenario: it is cumbersome and doesn't add value.
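To illustrate the 'Automatically detect' approach, here is a minimal sketch (using the Python BigQuery client) that loads newline-delimited JSON with schema auto-detection; the bucket, dataset, table, and field-addition settings are assumptions, not part of the question.

```python
# Hypothetical sketch: load newline-delimited JSON into BigQuery with schema
# auto-detection, so occasional schema changes (new fields) are picked up.
from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    autodetect=True,  # the 'Automatically detect' schema option
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    schema_update_options=[bigquery.SchemaUpdateOption.ALLOW_FIELD_ADDITION],
)

load_job = client.load_table_from_uri(
    "gs://example-bucket/pipeline/*.json",  # hypothetical source files
    "example_dataset.pipeline_events",      # hypothetical destination table
    job_config=job_config,
)
load_job.result()  # wait for the load job to finish
```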
You use a Hadoop cluster both for serving analytics and for processing and transforming data. The data is currently stored on HDFS in Parquet format. The data processing jobs run for 6 hours each night. Analytics users can access the system 24 hours a day. Phase 1 is to quickly migrate the entire Hadoop environment without a major re-architecture. Phase 2 will include migrating to BigQuery for analytics and to Dataflow for data processing. You want to make the future migration to BigQuery and Dataflow easier by following Google-recommended practices and managed services. What should you do?
Feedback
A is not correct because it is not recommended to attach persistent HDFS to Dataproc clusters in Google Cloud (see the reference links).
B is not correct because they want to leverage managed services, which means Dataproc.
C is not correct because it is recommended that Dataproc clusters be job-specific.
D is correct because it leverages a managed service (Dataproc), the data is stored in Cloud Storage in Parquet format (which can easily be loaded into BigQuery in the future), and the Dataproc clusters are job-specific.
You are building a new real-time data warehouse for your company and will use BigQuery streaming inserts. There is no guarantee that data will only be sent in once but you do have a unique ID for each row of data and an event timestamp. You want to ensure that duplicates are not included while interactively querying data. Which query type should you use?
Feedback
A is not correct because this will just return one row.
B is not correct because this does not return the latest value; it returns a sum of the same event over time, which is not meaningful when there are duplicates.
C is not correct because events that are not duplicated would be excluded.
D is correct because it will just pick out a single row for each set of duplicates.
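As an illustration of picking a single row per set of duplicates, an interactive query of roughly this shape can be used; the project, dataset, table, and column names (unique_id, event_timestamp) are hypothetical.

```python
# Hypothetical sketch: keep one row per unique_id (preferring the latest
# event_timestamp) when querying data that may contain streaming duplicates.
from google.cloud import bigquery

client = bigquery.Client()

query = """
SELECT * EXCEPT (row_num)
FROM (
  SELECT
    *,
    ROW_NUMBER() OVER (PARTITION BY unique_id
                       ORDER BY event_timestamp DESC) AS row_num
  FROM `example_project.example_dataset.streamed_events`
)
WHERE row_num = 1
"""

for row in client.query(query).result():
    print(row)
```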
You are designing a streaming pipeline for ingesting player interaction data for a mobile game. You want the pipeline to handle out-of-order data delayed up to 15 minutes on a per-player basis and exponential growth in global users. What should you do?
Feedback
A is correct because the question requires that delay be handled on a per-player basis, and session windowing does that. Pub/Sub handles the need to scale exponentially with traffic coming from around the globe.
B is not correct because Apache Kafka will not handle exponential global growth in users as well as Pub/Sub.
C is not correct because a global window does not meet the requirements of handling out-of-order delay on a per-player basis.
D is not correct because a global window does not meet the requirements of handling out-of-order delay on a per-player basis.
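A minimal Apache Beam (Python) sketch of the pattern described in A, assuming a hypothetical Pub/Sub topic, message format, and session gap; the 15-minute allowed lateness handles the out-of-order delay on a per-player basis.

```python
# Hypothetical sketch: per-player session windows over Pub/Sub input, with
# allowed lateness so data arriving up to 15 minutes out of order is still
# assigned to that player's session.
import json

import apache_beam as beam
from apache_beam import window
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms.trigger import AccumulationMode, AfterCount, AfterWatermark

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadEvents" >> beam.io.ReadFromPubSub(topic="projects/example/topics/player-events")
        | "KeyByPlayer" >> beam.Map(lambda msg: (json.loads(msg.decode("utf-8"))["player_id"], 1))
        | "SessionPerPlayer" >> beam.WindowInto(
            window.Sessions(gap_size=10 * 60),           # assumed session gap
            trigger=AfterWatermark(late=AfterCount(1)),   # re-fire when late data arrives
            accumulation_mode=AccumulationMode.ACCUMULATING,
            allowed_lateness=15 * 60,                     # tolerate 15 minutes of lateness
        )
        | "CountPerPlayerSession" >> beam.CombinePerKey(sum)
    )
```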
Your company is loading CSV files into BigQuery. The data is fully imported successfully; however, the imported data does not match the source file byte for byte. What is the most likely cause of this problem?
Feedback
A is not correct because if a data format other than CSV were selected, the data would not import successfully.
B is not correct because the data was fully imported meaning no rows were skipped.
C is correct because this is the only situation that allows the import to succeed while producing data that does not match the source file byte for byte.
D is not correct because whether the data has been previously transformed will not affect whether the source file will match the BigQuery table.
Your company is migrating their 30-node Apache Hadoop cluster to the cloud. They want to re-use Hadoop jobs they have already created and minimize the management of the cluster as much as possible. They also want to be able to persist data beyond the life of the cluster. What should you do?
Feedback
A is not correct because the goal is to re-use their Hadoop jobs and MapReduce and/or Spark jobs cannot simply be moved to Dataflow.
B is not correct because the goal is to persist the data beyond the life of the ephemeral clusters, and if HDFS is used as the primary attached storage mechanism, it will also disappear at the end of the cluster’s life.
C is not correct because the goal is to use managed services as much as possible, and this is the opposite.
D is correct because it uses managed services and also allows the data to persist on Cloud Storage beyond the life of the cluster.
E is not correct for the same reasons as option C.
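For illustration, a minimal PySpark sketch of the pattern option D implies: the existing Spark job keeps running on Dataproc but reads and writes Parquet on Cloud Storage (gs:// paths, via the GCS connector preinstalled on Dataproc) rather than cluster-local HDFS, so the data persists after the ephemeral cluster is deleted. Bucket paths and column names are hypothetical.

```python
# Hypothetical sketch: a Spark job on Dataproc that reads and writes Parquet on
# Cloud Storage (gs:// paths) instead of HDFS, so data outlives the cluster.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("persist-to-gcs").getOrCreate()

# Read Parquet directly from Cloud Storage.
events = spark.read.parquet("gs://example-bucket/warehouse/events/")

daily_counts = events.groupBy("event_date", "event_type").count()

# Write results back to Cloud Storage, where they can later be loaded into BigQuery.
daily_counts.write.mode("overwrite").parquet("gs://example-bucket/warehouse/daily_counts/")

spark.stop()
```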
You have 250,000 devices which produce a JSON device status event every 10 seconds. You want to capture this event data for outlier time series analysis. What should you do?
Feedback
C is correct because the data type, volume, and query pattern best fit Bigtable's capabilities and Google best practices, as linked below.
A, B are not correct because you do not need to use BigQuery for the query pattern in this scenario.
D is not correct because you can use the simpler 'cbt' tool to support this scenario.
You are selecting a messaging service for log messages that must include final result message ordering as part of building a data pipeline on Google Cloud. You want to stream input for 5 days and be able to query the current status. You will be storing the data in a searchable repository. How should you set up the input messages?
Feedback
A is correct because of recommended Google practices; see the links below.
B is not correct because you should not attach a GUID to each message to support the scenario.
C, D are not correct because you should not use Apache Kafka for this scenario (it is overly complex compared to using Pub/Sub, which can support all of the requirements).
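The feedback does not spell out the exact mechanism of option A, but one common way to make final-result ordering possible downstream is to publish each log message to Pub/Sub with a timestamp attribute that the searchable repository can sort on. A minimal sketch, with a hypothetical project, topic, and attribute name:

```python
# Hypothetical sketch: publish log messages to Pub/Sub with a timestamp
# attribute so the final results can be ordered when they are queried later.
import datetime

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("example-project", "pipeline-logs")

def publish_log(line: str) -> None:
    future = publisher.publish(
        topic_path,
        line.encode("utf-8"),
        publish_ts=datetime.datetime.now(datetime.timezone.utc).isoformat(),  # attribute used for ordering
    )
    future.result()  # block until the message is accepted
```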
You want to publish system metrics to Google Cloud from a large number of on-prem hypervisors and VMs for analysis and creation of dashboards. You have an existing custom monitoring agent deployed to all the hypervisors and your on-prem metrics system is unable to handle the load. You want to design a system that can collect and store metrics at scale. You don't want to manage your own time series database. Metrics from all agents should be written to the same table but agents must not have permission to modify or read data written by other agents. What should you do?
Feedback
A is correct because Bigtable can store and analyze time-series data, and the solution uses managed services, which is what the requirements call for.
B is not correct because Bigtable cannot limit access to specific tables.
C is not correct because it requires deployment of an HBase cluster.
D is not correct because it requires deployment of a Cassandra cluster.
You are designing storage for CSV files and using an I/O-intensive custom Apache Spark transform as part of deploying a data pipeline on Google Cloud. You intend to use ANSI SQL to run queries for your analysts. How should you transform the input data?
Feedback
B is correct because the requirement to use custom Spark transforms calls for Dataproc, and the requirement to run ANSI SQL queries calls for BigQuery.
A is not correct because Dataflow does not support Spark.
C, D are not correct because Cloud Storage does not support SQL, and you should not use Dataflow, either.
You are designing a relational data repository on Google Cloud to grow as needed. The data will be transactionally consistent and added from any location in the world. You want to monitor and adjust node count for input traffic, which can spike unpredictably. What should you do?
Feedback
B is correct because of the requirement for globally scalable transactions: use Cloud Spanner. CPU utilization is the recommended metric for scaling, per Google best practices, linked below.
A is not correct because you should not use storage utilization as a scaling metric.
C, D are not correct because you should not use Cloud Bigtable for this scenario.
You have a Spark application that writes data to Cloud Storage in Parquet format. You scheduled the application to run daily using DataProcSparkOperator in an Apache Airflow DAG managed by Cloud Composer. You want to add tasks to the DAG to make the data available to BigQuery users. You want to maximize query speed and configure partitioning and clustering on the table. What should you do?
Feedback
B is not correct because bq cp works only with existing BigQuery tables.
A is not correct because bq insert does not set partitioning or clustering and supports only JSON.
C is correct because it loads the data and sets partitioning and clustering.
D is not correct because an external table will not satisfy the query speed requirement.
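A minimal sketch of the load task option C describes, using the Python BigQuery client (it could equally be wrapped in a PythonOperator task in the same DAG); the partition column, clustering columns, bucket path, and table name are assumptions.

```python
# Hypothetical sketch: a task body that loads the Spark job's Parquet output
# from Cloud Storage into a BigQuery table with partitioning and clustering
# configured, which is what maximizes query speed here.
from google.cloud import bigquery

def load_parquet_to_bigquery() -> None:
    client = bigquery.Client()

    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.PARQUET,
        time_partitioning=bigquery.TimePartitioning(field="event_date"),  # assumed partition column
        clustering_fields=["player_id", "event_type"],                    # assumed clustering columns
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    )

    client.load_table_from_uri(
        "gs://example-bucket/spark-output/*.parquet",
        "example_dataset.game_events",
        job_config=job_config,
    ).result()
```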
You have a website that tracks page visits for each user and then creates a Pub/Sub message with the session ID and URL of the page. You want to create a Dataflow pipeline that sums the total number of pages visited by each user and writes the result to BigQuery. User sessions timeout after 30 minutes. Which type of Dataflow window should you choose?
Feedback
B is not correct because there is no per-user metric being used, so it is possible a sum will be created for some users while they are still browsing the site.
D is not correct because if a user is still visiting the site when the 30-minute window closes, the sum will be wrong.
C is correct because it continues to sum user page visits during their browsing session and completes at the same time as the session timeout.
A is not correct because a user-specific sum is never calculated, just sums for arbitrary 30-minute windows of time staggered by 5 minutes.
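A minimal Apache Beam (Python) sketch of option C's session-window approach, with a 30-minute gap matching the session timeout; the topic, message format, and BigQuery table are hypothetical.

```python
# Hypothetical sketch: sum page visits per session using session windows with a
# 30-minute gap, matching the site's session timeout, then write to BigQuery.
import json

import apache_beam as beam
from apache_beam import window
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadVisits" >> beam.io.ReadFromPubSub(topic="projects/example/topics/page-visits")
        | "KeyBySession" >> beam.Map(lambda msg: (json.loads(msg.decode("utf-8"))["session_id"], 1))
        | "SessionWindow" >> beam.WindowInto(window.Sessions(gap_size=30 * 60))
        | "SumVisits" >> beam.CombinePerKey(sum)
        | "ToRow" >> beam.Map(lambda kv: {"session_id": kv[0], "page_visits": kv[1]})
        | "WriteToBQ" >> beam.io.WriteToBigQuery(
            "example_project:example_dataset.session_page_visits",
            schema="session_id:STRING,page_visits:INTEGER",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```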
You are designing a basket abandonment system for an ecommerce company. The system will send a message to a user based on these rules: a). No interaction by the user on the site for 1 hour b). Has added more than $30 worth of products to the basket c). Has not completed a transaction. You use Dataflow to process the data and decide if a message should be sent. How should you design the pipeline?
Feedback
A is not correct because assuming there is one key per user, a message will be sent every 60 minutes.
B is not correct because assuming there is one key per user, a message will be sent 60 minutes after they first started browsing even if they are still browsing.
C is correct because it will send a message per user after that user is inactive for 60 minutes.
D is not correct because it will cause messages to be sent out every 60 minutes to all users regardless of where they are in their current session.
You need to stream time-series data in Avro format, and then write this to both BigQuery and Cloud Bigtable simultaneously using Dataflow. You want to achieve minimal end-to-end latency. Your business requirements state this needs to be completed as quickly as possible. What should you do?
Feedback
A is not correct because ParDo does not write to BigQuery or Bigtable.
B is not correct because Combine does not write to BigQuery or Bigtable.
C is correct because this is the right set of transformations that accepts and writes to the required data stores.
D is not correct because, to meet the business requirements, it is much faster and easier to use the Dataflow approach in answer C.
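For illustration, a sketch of the branching described in C: one parsed collection is written to both BigQuery and Bigtable in the same pipeline. The Pub/Sub topic, record fields, and table names are hypothetical, and JSON parsing stands in for the Avro decoding the question describes.

```python
# Hypothetical sketch: one parsed PCollection branched into two sinks, BigQuery
# and Bigtable, within a single streaming Dataflow pipeline.
import json

import apache_beam as beam
from apache_beam.io.gcp.bigtableio import WriteToBigTable
from apache_beam.options.pipeline_options import PipelineOptions
from google.cloud.bigtable import row as bigtable_row

def to_bigtable_row(record):
    """Convert a parsed record into a Bigtable DirectRow."""
    direct_row = bigtable_row.DirectRow(row_key=f"{record['series_id']}#{record['ts']}".encode("utf-8"))
    direct_row.set_cell("measurements", b"value", str(record["value"]).encode("utf-8"))
    return direct_row

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    parsed = (
        p
        | "ReadEvents" >> beam.io.ReadFromPubSub(topic="projects/example/topics/timeseries")
        | "Parse" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))  # stand-in for Avro decoding
    )

    parsed | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
        "example_project:example_dataset.timeseries",
        schema="series_id:STRING,ts:TIMESTAMP,value:FLOAT",
        write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
    )

    parsed | "ToBigtableRows" >> beam.Map(to_bigtable_row) | "WriteToBigtable" >> WriteToBigTable(
        project_id="example-project",
        instance_id="example-instance",
        table_id="timeseries",
    )
```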
Your company’s on-premises Apache Hadoop servers are approaching end-of-life, and IT has decided to migrate the cluster to Dataproc. A like-for-like migration of the cluster would require 50 TB of Google Persistent Disk per node. The CIO is concerned about the cost of using that much block storage. You want to minimize the storage cost of the migration. What should you do?
Feedback
A is correct because Google recommends using Cloud Storage instead of HDFS, as it is much more cost-effective, especially when jobs aren't running.
B is not correct because this will decrease the compute cost but not the storage cost.
C is not correct because while this will reduce cost somewhat, it will not be as cost effective as using Cloud Storage.
D is not correct because while this will reduce cost somewhat, it will not be as cost effective as using Cloud Storage.
You are designing storage for two relational tables that are part of a 10-TB database on Google Cloud. You want to support transactions that scale horizontally. You also want to optimize data for range queries on non-key columns. What should you do?
Feedback
A is not correct because Cloud SQL does not natively scale horizontally.
B is not correct because Cloud SQL does not natively scale horizontally.
C is correct because Cloud Spanner scales horizontally, and you can create secondary indexes for the range queries that are required.
D is not correct because Dataflow is a data pipelining tool to move and transform data, but the use case is centered around querying.
Your company is streaming real-time sensor data from their factory floor into Bigtable and they have noticed extremely poor performance. How should the row key be redesigned to improve Bigtable performance on queries that populate real-time dashboards?
Feedback
A is not correct because this will cause most writes to be pushed to a single node (known as hotspotting).
B is not correct because this will not allow for multiple readings from the same sensor, as new readings will overwrite old ones.
C is not correct because this will cause most writes to be pushed to a single node (known as hotspotting).
D is correct because it will allow for retrieval of data based on both sensor id and timestamp but without causing hotspotting.
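A minimal sketch of a write path that uses the row key design from option D (sensor ID first, timestamp last); the project, instance, table, and column family names are hypothetical.

```python
# Hypothetical sketch: write sensor readings with a row key that leads with the
# sensor ID and ends with a timestamp, so writes spread across nodes instead of
# hotspotting on a purely time-ordered key, while dashboards can still scan a
# sensor's recent readings by key prefix.
import datetime

from google.cloud import bigtable

client = bigtable.Client(project="example-project", admin=False)
table = client.instance("factory-instance").table("sensor-readings")

def write_reading(sensor_id: str, value: float) -> None:
    ts = datetime.datetime.now(datetime.timezone.utc)
    row_key = f"{sensor_id}#{ts.strftime('%Y%m%d%H%M%S')}".encode("utf-8")

    row = table.direct_row(row_key)
    row.set_cell("readings", b"value", str(value).encode("utf-8"), timestamp=ts)
    row.commit()
```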
You are developing an application on Google Cloud that will automatically generate subject labels for users’ blog posts. You are under competitive pressure to add this feature quickly, and you have no additional developer resources. No one on your team has experience with machine learning. What should you do?
Feedback
A is correct because it provides a managed service and a fully trained model, and pulling the entities yields the right kind of label for this use case.
B is not correct because sentiment is the incorrect label for this use case.
C is not correct because this requires experience with machine learning.
D is not correct because this requires experience with machine learning.
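A minimal sketch of option A's approach using the Cloud Natural Language API's entity analysis; the helper name and the salience-based ranking are illustrative choices, not prescribed by the question.

```python
# Hypothetical sketch: use the Cloud Natural Language API's entity analysis to
# pull candidate subject labels from a blog post, with no custom model training.
from google.cloud import language_v1

def suggest_labels(post_text: str, max_labels: int = 5):
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=post_text,
        type_=language_v1.Document.Type.PLAIN_TEXT,
    )
    response = client.analyze_entities(document=document)

    # Rank entities by salience and use the most prominent ones as labels.
    entities = sorted(response.entities, key=lambda e: e.salience, reverse=True)
    return [entity.name for entity in entities[:max_labels]]
```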
Your company is using WILDCARD tables to query data across multiple tables with similar names. The SQL statement is currently failing with the error shown below. Which table name will make the SQL statement work correctly?
[Image: the failing SQL statement and error message are shown in the exam question but not reproduced here.]
Feedback
A is not correct because this is not the correct wildcard syntax as there is no wildcard character present.
B is not correct because this is not the correct wildcard syntax since it’s missing backticks.
C is not correct because this is not the correct wildcard syntax since it's not using a backtick as the last character.
D is correct because it follows the correct wildcard syntax of enclosing the table name in backticks and including the * wildcard character.
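The failing statement itself is not reproduced above, but for illustration, a wildcard query of the required shape (a backtick-quoted table prefix ending in *) looks like this; the project, dataset, table prefix, and columns are hypothetical.

```python
# Hypothetical sketch: a wildcard-table query; the table prefix is enclosed in
# backticks and ends with the * wildcard, and _TABLE_SUFFIX narrows the range.
from google.cloud import bigquery

client = bigquery.Client()

query = """
SELECT device_id, MAX(reading) AS max_reading
FROM `example_project.example_dataset.sensor_readings_*`
WHERE _TABLE_SUFFIX BETWEEN '20240101' AND '20240131'
GROUP BY device_id
"""

for row in client.query(query).result():
    print(row.device_id, row.max_reading)
```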
You are working on an ML-based application that will transcribe conversations between manufacturing workers. These conversations are in English and between 30-40 sec long. Conversation recordings come from old enterprise radio sets that have a low sampling rate of 8000 Hz, but you have a large dataset of these recorded conversations with their transcriptions. You want to follow Google-recommended practices. How should you proceed with building your application?
Feedback
A is correct because synchronous mode is recommended for short audio files.
B is incorrect since the recommended way to process short audio files (shorter than 1 minute) is a synchronous recognize request, not an asynchronous one.
C is incorrect since using the native sample rate is recommended over resampling.
D is incorrect since nothing in the question suggests the off-the-shelf model will not perform sufficiently well.
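A minimal sketch of option A's approach: a synchronous recognize request that keeps the recordings' native 8000 Hz sample rate; the bucket path, encoding, and language code are assumptions.

```python
# Hypothetical sketch: a synchronous Speech-to-Text request for a short (~30-40 s)
# recording, keeping the audio's native 8000 Hz sample rate rather than resampling.
from google.cloud import speech

client = speech.SpeechClient()

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=8000,  # native sample rate of the radio recordings
    language_code="en-US",
)
audio = speech.RecognitionAudio(uri="gs://example-bucket/conversations/recording-001.wav")

response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.alternatives[0].transcript)
```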
You are developing an application on Google Cloud that will label famous landmarks in users’ photos. You are under competitive pressure to develop a predictive model quickly. You need to keep service costs low. What should you do?
Feedback
B is correct because of the requirement to quickly develop a model that generates landmark labels from photos; this is supported by the Cloud Vision API (see the link below).
A is not correct because you should not inspect the generated MID values; instead, you should simply pass the image locations to the API and use the labels it outputs.
C, D are not correct because you should not build a custom classification TF model for this scenario.
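A minimal sketch of option B's approach: pass the photo's location to the Cloud Vision API and use the returned landmark annotations directly; the bucket path is hypothetical.

```python
# Hypothetical sketch: pass a photo's Cloud Storage location to the Vision API
# and use the returned landmark labels directly, with no custom model.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

image = vision.Image(source=vision.ImageSource(image_uri="gs://example-bucket/photos/photo-123.jpg"))
response = client.landmark_detection(image=image)

for landmark in response.landmark_annotations:
    print(landmark.description, round(landmark.score, 2))
```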
You are building a data pipeline on Google Cloud. You need to select services that will host a deep neural network machine-learning model also hosted on Google Cloud. You also need to monitor and run jobs that could occasionally fail. What should you do?
Feedback
B is correct because of the requirement to host an ML DNN and to use the Google-recommended monitoring object (Jobs); see the links below.
A is not correct because you should not use the Operation object to monitor failures.
C, D are not correct because you should not use a Kubernetes Engine cluster for this scenario.
You work on a regression problem in a natural language processing domain, and you have 100M labeled examples in your dataset. You have randomly shuffled your data and split your dataset into training and test samples (in a 90/10 ratio). After you have trained the neural network and evaluated your model on a test set, you discover that the root-mean-squared error (RMSE) of your model is twice as high on the train set as on the test set. How should you improve the performance of your model?
Feedback
A is incorrect since the test sample is large enough.
C is incorrect since regularization helps avoid overfitting, and this is a clear case of underfitting.
B is incorrect since the dataset is already quite large, and having more data typically helps with overfitting, not underfitting.
D is correct since increasing model complexity generally helps when you have an underfitting problem.
You are using Pub/Sub to stream inventory updates from many point-of-sale (POS) terminals into BigQuery. Each update event has the following information: product identifier "prodSku", change increment "quantityDelta", POS identification "termId", and "messageId" which is created for each push attempt from the terminal. During a network outage, you discovered that duplicated messages were sent, causing the inventory system to over-count the changes. You determine that the terminal application has design problems and may send the same event more than once during push retries. You want to ensure that the inventory update is accurate. What should you do?
Feedback
B is not correct because duplication in this case could be caused by a terminal retry, in which case messageId could be different for the same event.
A is not correct because publishTime cannot uniquely identify a message, and it does not address push retries.
D is correct because the client application must include a unique identifier to disambiguate possible duplicates due to push retries.
C is not correct because there are many terminals. Calculating the projected inventory values on the terminal introduces a race condition where multiple terminals could update the inventory data simultaneously.
You designed a database for patient records as a pilot project to cover a few hundred patients in three clinics. Your design used a single database table to represent all patients and their visits, and you used self-joins to generate reports. The server resource utilization was at 50%. Since then, the scope of the project has expanded. The database table must now store 100 times more patient records. You can no longer run the reports, because they either take too long or they encounter errors with insufficient compute resources. How should you adjust the database design?
Feedback
A is not correct because adding additional compute resources is not a recommended way to resolve database schema problems.
B is not correct because this will reduce the functionality of the database and make running reports more difficult.
C is correct because this option is less cumbersome than using pre-specified date ranges or one table per clinic, and it also increases performance by avoiding self-joins.
D is not correct because this will likely increase the number of tables so much that it will be more difficult to generate reports vs. the correct option.
Your startup has never implemented a formal security policy. Currently, everyone in the company has access to the datasets stored in BigQuery. Teams have the freedom to use the service as they see fit, and they have not documented their use cases. You have been asked to secure the data warehouse. You need to discover what everyone is doing. What should you do first?
Feedback
A is correct because this is the best way to get granular access to data showing which users are accessing which data.
B is not correct because we already know that all users have access to all data, so this information is unlikely to be useful. It will also not show what users have done, just what they can do.
C is not correct because slot usage will not inform security policy.
D is not correct because a billing account is typically shared among many people and will only show the amount of data queried and stored.
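One way to act on option A is to pull BigQuery data-access audit log entries with the Cloud Logging client; a minimal sketch follows, and the filter shown is a reasonable starting point rather than the only possible one.

```python
# Hypothetical sketch: list recent BigQuery data-access audit log entries to see
# which principals are querying which data.
from google.cloud import logging

client = logging.Client(project="example-project")

log_filter = (
    'logName:"cloudaudit.googleapis.com%2Fdata_access" '
    'AND protoPayload.serviceName="bigquery.googleapis.com"'
)

for entry in client.list_entries(filter_=log_filter,
                                 order_by=logging.DESCENDING,
                                 max_results=50):
    print(entry.timestamp, entry.log_name)
```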
You created a job which runs daily to import highly sensitive data from an on-premises location to Cloud Storage. You also set up a streaming data insert into Cloud Storage via a Kafka node that is running on a Compute Engine instance. You need to encrypt the data at rest and supply your own encryption key. Your key should not be stored in the Google Cloud. What should you do?
Feedback
D is correct because the scenario requires you to use your own key and not to store it on Compute Engine, and this is a Google-recommended practice; see the links below.
A is not correct because the scenario states that you must supply your own encryption key instead of using one generated by Google Cloud.
B is not correct because the scenario states that you should use, but not store, your own key with Google Cloud services.
C is not correct because it does not meet the scenario requirement to reference, but not store, your own key with Google Cloud services.
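A minimal sketch of uploading to Cloud Storage with a customer-supplied encryption key (CSEK), where the raw key is provided per request and never stored in Google Cloud; the bucket, object name, and key-sourcing environment variable are assumptions.

```python
# Hypothetical sketch: upload an object with a customer-supplied encryption key
# (CSEK); the raw key is held by the caller and never stored in Google Cloud.
import base64
import os

from google.cloud import storage

# In practice the key comes from your own on-premises key management system.
csek = os.environ["CSEK_BASE64"]  # base64-encoded 256-bit AES key
raw_key = base64.b64decode(csek)

client = storage.Client()
bucket = client.bucket("example-sensitive-bucket")
blob = bucket.blob("daily-import/records.csv", encryption_key=raw_key)
blob.upload_from_filename("records.csv")
```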
You are working on a project with two compliance requirements. The first requirement states that your developers should be able to see the Google Cloud billing charges for only their own projects. The second requirement states that your finance team members can set budgets and view the current charges for all projects in the organization. The finance team should not be able to view the project contents. You want to set permissions. What should you do?
Feedback
B is correct because it uses the principle of least privilege for IAM roles; use the Billing Administrator IAM role for that job function.
A, C, D are not correct because it is a best practice to use pre-defined IAM roles when they exist and match your business scenario; see the links below.