[FLINK-23859] Fix typos
hapihu committed Sep 1, 2021
1 parent 61b5b0a commit 6628237
Showing 49 changed files with 59 additions and 59 deletions.
2 changes: 1 addition & 1 deletion docs/content/docs/deployment/elastic_scaling.md
@@ -98,7 +98,7 @@ If you want the JobManager to stop after a certain time without enough TaskManag

With Reactive Mode enabled, the [`jobmanager.adaptive-scheduler.resource-stabilization-timeout`]({{< ref "docs/deployment/config">}}#jobmanager-adaptive-scheduler-resource-stabilization-timeout) configuration key will default to `0`: Flink will start running the job, as soon as there are sufficient resources available.
In scenarios where TaskManagers are not connecting at the same time, but slowly one after another, this behavior leads to a job restart whenever a TaskManager connects. Increase this configuration value if you want to wait for the resources to stabilize before scheduling the job.
Additionally, one can configure [`jobmanager.adaptive-scheduler.min-parallelism-increase`]({{< ref "docs/deployment/config">}}#jobmanager-adaptive-scheduler-min-parallelism-increase): This configuration option specifics the minumum amount of additional, aggregate parallelism increase before triggering a scale-up. For example if you have a job with a source (parallelism=2) and a sink (parallelism=2), the aggregate parallelism is 4. By default, the configuration key is set to 1, so any increase in the aggregate parallelism will trigger a restart.
Additionally, one can configure [`jobmanager.adaptive-scheduler.min-parallelism-increase`]({{< ref "docs/deployment/config">}}#jobmanager-adaptive-scheduler-min-parallelism-increase): This configuration option specifics the minimum amount of additional, aggregate parallelism increase before triggering a scale-up. For example if you have a job with a source (parallelism=2) and a sink (parallelism=2), the aggregate parallelism is 4. By default, the configuration key is set to 1, so any increase in the aggregate parallelism will trigger a restart.
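For context, a minimal sketch of how these two adaptive-scheduler options might be set programmatically; the timeout and increase values below are illustrative, and in practice the keys usually live in the cluster configuration (flink-conf.yaml) rather than in code:

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class AdaptiveSchedulerConfigSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Wait for resources to stabilize before (re)scheduling, instead of the
        // Reactive Mode default of 0 (illustrative value).
        conf.setString("jobmanager.adaptive-scheduler.resource-stabilization-timeout", "10 s");
        // Require the aggregate parallelism to grow by at least 2 before a scale-up restart.
        conf.setString("jobmanager.adaptive-scheduler.min-parallelism-increase", "2");

        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment(conf);
        // ... define and execute the job as usual
        env.fromElements(1, 2, 3).print();
        env.execute("adaptive-scheduler-sketch");
    }
}
```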

#### Recommendations

@@ -542,7 +542,7 @@ private static List<String> extractRelativizedURLsForJarsFromDirectory(File dire
throws MalformedURLException {
Preconditions.checkArgument(
directory.listFiles() != null,
"The passed File does not seem to be a directory or is not acessible: "
"The passed File does not seem to be a directory or is not accessible: "
+ directory.getAbsolutePath());

final List<String> relativizedURLs = new ArrayList<>();
@@ -22,7 +22,7 @@
import org.apache.flink.configuration.ConfigOptions;
import org.apache.flink.configuration.Configuration;

/** The options tht can be set for the {@link SourceReaderBase}. */
/** The options that can be set for the {@link SourceReaderBase}. */
public class SourceReaderOptions {

public static final ConfigOption<Long> SOURCE_READER_CLOSE_TIMEOUT =
@@ -35,7 +35,7 @@ public class CustomCassandraAnnotatedPojo {
@Column(name = "batch_id")
private Integer batchId;

/** Necessary for the driver's mapper instanciation. */
/** Necessary for the driver's mapper instantiation. */
public CustomCassandraAnnotatedPojo() {}

public CustomCassandraAnnotatedPojo(String id, Integer counter, Integer batchId) {
@@ -138,7 +138,7 @@ private static BlockLocation[] getBlockLocationsForFile(FileStatus file, FileSys
// the
// file (too expensive) but make some sanity checks to catch early the common cases where
// incorrect
// bloc info is returned by the implementation.
// block info is returned by the implementation.

long totalLen = 0L;
for (BlockLocation block : blocks) {
@@ -34,7 +34,7 @@
* <p>This format makes no difference between creating readers from scratch (new file) or from a
* checkpoint. Because of that, if the reader actively checkpoints its position (via the {@link
* Reader#getCheckpointedPosition()} method) then the checkpointed offset must be a byte offset in
* the file from which the stream can be resumed as if it were te beginning of the file.
* the file from which the stream can be resumed as if it were the beginning of the file.
*
* <p>For all other details, please check the docs of {@link StreamFormat}.
*
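As a rough illustration of that contract, a reader that tracks its byte position might report it like this. This is a hypothetical sketch, not code from this commit; the class and field names are invented, only the `StreamFormat.Reader` and `CheckpointedPosition` types come from the connector API:

```java
import org.apache.flink.connector.file.src.reader.StreamFormat;
import org.apache.flink.connector.file.src.util.CheckpointedPosition;

import javax.annotation.Nullable;
import java.io.IOException;

/** Hypothetical reader that tracks how many bytes it has consumed so far. */
class OffsetTrackingReader implements StreamFormat.Reader<byte[]> {

    private long bytesConsumed; // advanced by read() as records are decoded

    @Nullable
    @Override
    public byte[] read() throws IOException {
        // decode the next record and advance bytesConsumed accordingly (omitted)
        return null;
    }

    @Override
    public void close() throws IOException {}

    @Nullable
    @Override
    public CheckpointedPosition getCheckpointedPosition() {
        // The offset must be a byte position from which reading can resume
        // exactly as if it were the start of the file; no records are skipped.
        return new CheckpointedPosition(bytesConsumed, 0);
    }
}
```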
@@ -229,7 +229,7 @@ private void testDeserializationFull(final boolean withInProgress, final String
.map(file -> file.getFileName().toString())
.collect(Collectors.toSet());

// after restoring all pending files are comitted.
// after restoring all pending files are committed.
// there is no "inporgress" in file name for the committed files.
for (int i = 0; i < noOfPendingCheckpoints; i++) {
final String part = "part-0-" + i;
@@ -59,7 +59,7 @@
import java.util.concurrent.TimeUnit;

/**
* The HBaseRowDataAsyncLookupFunction is an implemenation to lookup HBase data by rowkey in async
* The HBaseRowDataAsyncLookupFunction is an implementation to lookup HBase data by rowkey in async
* fashion. It looks up the result as {@link RowData}.
*/
@Internal
@@ -25,7 +25,7 @@
import java.io.Serializable;

/**
* A wrapper of Hive functions that instantiate function instances and ser/de functino instance
* A wrapper of Hive functions that instantiate function instances and ser/de function instance
* cross process boundary.
*
* @param <UDFType> The type of UDF.
@@ -819,7 +819,7 @@ private RelNode genTableLogicalPlan(String tableAlias, HiveParserQB qb)
// 3. Get Table Logical Schema (Row Type)
// NOTE: Table logical schema = Non Partition Cols + Partition Cols + Virtual Cols

// 3.1 Add Column info for non partion cols (Object Inspector fields)
// 3.1 Add Column info for non partition cols (Object Inspector fields)
StructObjectInspector rowObjectInspector =
(StructObjectInspector) table.getDeserializer().getObjectInspector();
List<? extends StructField> fields = rowObjectInspector.getAllStructFieldRefs();
@@ -940,7 +940,7 @@ protected void validateUDF(
if (fi.getGenericUDTF() != null) {
throw new SemanticException(ErrorMsg.UDTF_INVALID_LOCATION.getMsg());
}
// UDAF in filter condition, group-by caluse, param of funtion, etc.
// UDAF in filter condition, group-by caluse, param of function, etc.
if (fi.getGenericUDAFResolver() != null) {
if (isFunction) {
throw new SemanticException(
@@ -48,11 +48,11 @@ public class HiveASTParseDriver {
// for the lexical analysis part of antlr. By converting the token stream into
// upper case at the time when lexical rules are checked, this class ensures that the
// lexical rules need to just match the token with upper case letters as opposed to
// combination of upper case and lower case characteres. This is purely used for matching
// combination of upper case and lower case characters. This is purely used for matching
// lexical
// rules. The actual token text is stored in the same way as the user input without
// actually converting it into an upper case. The token values are generated by the consume()
// function of the super class ANTLRStringStream. The LA() function is the lookahead funtion
// function of the super class ANTLRStringStream. The LA() function is the lookahead function
// and is purely used for matching lexical rules. This also means that the grammar will only
// accept capitalized tokens in case it is run from other tools like antlrworks which
// do not have the ANTLRNoCaseStringStream implementation.
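The pattern this comment describes boils down to overriding the lookahead so that only lexer-rule matching sees upper-case characters, while the consumed token text keeps its original casing. A hypothetical sketch against the ANTLR 3 runtime (the class name here is invented):

```java
import org.antlr.runtime.ANTLRStringStream;
import org.antlr.runtime.CharStream;

/** Hypothetical illustration of the upper-casing lookahead described above. */
class NoCaseStringStream extends ANTLRStringStream {

    NoCaseStringStream(String input) {
        super(input);
    }

    @Override
    public int LA(int i) {
        int c = super.LA(i);
        if (c == 0 || c == CharStream.EOF) {
            return c;
        }
        // Only the lookahead used for matching lexer rules is upper-cased;
        // consume() still returns the original characters, so token text
        // keeps the user's casing.
        return Character.toUpperCase((char) c);
    }
}
```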
@@ -2366,7 +2366,7 @@ public void analyzeRowFormat(HiveParserASTNode child) throws SemanticException {
nullFormat = unescapeSQLString(rowChild.getChild(0).getText());
break;
default:
throw new AssertionError("Unkown Token: " + rowChild);
throw new AssertionError("Unknown Token: " + rowChild);
}
}
}
@@ -79,7 +79,7 @@ public class HiveParserQBParseInfo {
*/
private final HashMap<String, HiveParserASTNode> destToSortby;

/** Maping from table/subquery aliases to all the associated lateral view nodes. */
/** Mapping from table/subquery aliases to all the associated lateral view nodes. */
private final HashMap<String, ArrayList<HiveParserASTNode>> aliasToLateralViews;

private final HashMap<String, HiveParserASTNode> destToLateralView;
@@ -552,7 +552,7 @@ public boolean subqueryRestrictionsCheck(
*/
// Following is special cases for different type of subqueries which have aggregate and no
// implicit group by
// and are correlatd
// and are correlated
// * EXISTS/NOT EXISTS - NOT allowed, throw an error for now. We plan to allow this later
// * SCALAR - only allow if it has non equi join predicate. This should return true since
// later in subquery remove
@@ -71,7 +71,7 @@ public void validateAndMakeEffective() throws SemanticException {
wFn.setWindowSpec(wdwSpec);
}

// 2. A Window Spec with no Parition Spec, is Partitioned on a Constant(number 0)
// 2. A Window Spec with no Partition Spec, is Partitioned on a Constant(number 0)
applyConstantPartition(wdwSpec);

// 3. For missing Wdw Frames or for Frames with only a Start Boundary, completely
@@ -93,7 +93,7 @@ default String quoteIdentifier(String identifier) {

/**
* Get dialect upsert statement, the database has its own upsert syntax, such as Mysql using
* DUPLICATE KEY UPDATE, and PostgresSQL using ON CONFLICT... DO UPDATE SET..
* DUPLICATE KEY UPDATE, and PostgreSQL using ON CONFLICT... DO UPDATE SET..
*
* @return None if dialect does not support upsert statement, the writer will degrade to the use
* of select + update/insert, this performance is poor.
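To make that contract concrete, here is a hypothetical MySQL-flavoured sketch of such an upsert statement builder; the method shape mirrors the javadoc above, but the exact interface signature in the JDBC connector may differ:

```java
import java.util.Collections;
import java.util.Optional;

/** Hypothetical MySQL-style dialect sketch; the real connector API may differ slightly. */
class MySqlDialectSketch {

    Optional<String> getUpsertStatement(
            String tableName, String[] fieldNames, String[] uniqueKeyFields) {
        String columns = String.join(", ", fieldNames);
        String placeholders = String.join(", ", Collections.nCopies(fieldNames.length, "?"));
        StringBuilder updates = new StringBuilder();
        for (String f : fieldNames) {
            if (updates.length() > 0) {
                updates.append(", ");
            }
            updates.append(f).append("=VALUES(").append(f).append(")");
        }
        // A dialect with no native upsert syntax would return Optional.empty(),
        // letting the writer fall back to the select + update/insert path.
        return Optional.of(
                "INSERT INTO " + tableName + " (" + columns + ") VALUES (" + placeholders + ")"
                        + " ON DUPLICATE KEY UPDATE " + updates);
    }
}
```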
@@ -33,7 +33,7 @@
import static org.apache.flink.connector.kafka.source.enumerator.subscriber.KafkaSubscriberUtils.getTopicMetadata;

/**
* A subscriber to a fixed list of topics. The subscribed topics must hav existed in the Kafka
* A subscriber to a fixed list of topics. The subscribed topics must have existed in the Kafka
* cluster, otherwise an exception will be thrown.
*/
class TopicListSubscriber implements KafkaSubscriber {
@@ -67,7 +67,7 @@ default void open(DeserializationSchema.InitializationContext context) throws Ex
* ConsumerRecord ConsumerRecords}.
*
* <p>Note that the {@link KafkaDeserializationSchema#isEndOfStream(Object)} method will no
* longer be used to determin the end of the stream.
* longer be used to determine the end of the stream.
*
* @param kafkaDeserializationSchema the legacy {@link KafkaDeserializationSchema} to use.
* @param <V> the return type of the deserialized record.
@@ -261,7 +261,7 @@ public void testConfigureDisableOffsetCommitWithoutCheckpointing() throws Except
}

/**
* Tests that subscribed partitions didn't change when there's no change on the intial topics.
* Tests that subscribed partitions didn't change when there's no change on the initial topics.
* (filterRestoredPartitionsWithDiscovered is active)
*/
@Test
@@ -300,7 +300,7 @@ public void testSetFilterRestoredParitionsWithAddedTopic() throws Exception {
}

/**
* Tests that subscribed partitions are the same when there's no change on the intial topics.
* Tests that subscribed partitions are the same when there's no change on the initial topics.
* (filterRestoredPartitionsWithDiscovered is disabled)
*/
@Test
@@ -168,7 +168,7 @@ public void setClientAndEnsureNoJobIsLingering() throws Exception {
// ------------------------------------------------------------------------

/**
* Test that ensures the KafkaConsumer is properly failing if the topic doesnt exist and a wrong
* Test that ensures the KafkaConsumer is properly failing if the topic doesn't exist and a wrong
* broker was specified.
*
* @throws Exception
@@ -72,7 +72,7 @@ public enum RecordPublisherType {
POLLING
}

/** The EFO registration type reprsents how we are going to de-/register efo consumer. */
/** The EFO registration type represents how we are going to de-/register efo consumer. */
public enum EFORegistrationType {

/**
@@ -247,7 +247,7 @@ public class KinesisDataFetcher<T> {
* The most recent watermark, calculated from the per shard watermarks. The initial value will
* never be emitted and also apply after recovery. The fist watermark that will be emitted is
* derived from actually consumed records. In case of recovery and replay, the watermark will
* rewind, consistent wth the shard consumer sequence.
* rewind, consistent with the shard consumer sequence.
*/
private long lastWatermark = Long.MIN_VALUE;

@@ -433,7 +433,7 @@ public static Properties replaceDeprecatedProducerKeys(Properties configProps) {
}

/**
* A set of configuration paremeters associated with the describeStreams API may be used if: 1)
* A set of configuration parameters associated with the describeStreams API may be used if: 1)
* an legacy client wants to consume from Kinesis 2) a current client wants to consumer from
* DynamoDB streams
*
@@ -67,7 +67,7 @@ static StartCursor fromMessageId(MessageId messageId) {

/**
* @param messageId Find the available message id and start consuming from it.
* @param inclusive {@code ture} would include the given message id.
* @param inclusive {@code true} would include the given message id.
*/
static StartCursor fromMessageId(MessageId messageId, boolean inclusive) {
return new MessageIdStartCursor(messageId, inclusive);
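A small usage sketch of the two factory methods above; variable names are illustrative, and the import path follows the Flink Pulsar connector layout of this release and may differ in later versions:

```java
import org.apache.flink.connector.pulsar.source.enumerator.cursor.StartCursor;
import org.apache.pulsar.client.api.MessageId;

class StartCursorSketch {
    static void cursors(MessageId lastProcessed) {
        // Start from the given message id using the single-argument overload.
        StartCursor cursor = StartCursor.fromMessageId(lastProcessed);

        // Explicit control over inclusiveness: true means the given message id
        // itself is consumed again, false starts after it.
        StartCursor inclusiveCursor = StartCursor.fromMessageId(lastProcessed, true);
    }
}
```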
@@ -465,7 +465,7 @@ tokenAudience=
# Authentication plugin to use when connecting to bookies
bookkeeperClientAuthenticationPlugin=

# BookKeeper auth plugin implementatation specifics parameters name and values
# BookKeeper auth plugin implementation specifics parameters name and values
bookkeeperClientAuthenticationParametersName=
bookkeeperClientAuthenticationParameters=

@@ -904,7 +904,7 @@ journalSyncData=false

# For each ledger dir, maximum disk space which can be used.
# Default is 0.95f. i.e. 95% of disk can be used at most after which nothing will
# be written to that partition. If all ledger dir partions are full, then bookie
# be written to that partition. If all ledger dir partitions are full, then bookie
# will turn to readonly mode if 'readOnlyModeEnabled=true' is set, else it will
# shutdown.
# Valid values should be in between 0 and 1 (exclusive).
@@ -93,7 +93,7 @@ interface RMQCollector<T> extends Collector<T> {
* <p>If not set explicitly, the {@link AMQP.BasicProperties#getCorrelationId()} and {@link
* Envelope#getDeliveryTag()} will be used.
*
* <p><b>NOTE:</b>Can be called once for a single invokation of a {@link
* <p><b>NOTE:</b>Can be called once for a single invocation of a {@link
* RMQDeserializationSchema#deserialize(Envelope, AMQP.BasicProperties, byte[],
* RMQCollector)} method.
*
@@ -191,7 +191,7 @@ public TypeSerializerSchemaCompatibility<T> resolveSchemaCompatibility(
* "redirect" the compatibility check to the corresponding {code TypeSerializerSnapshot} class.
*
* <p>Please note that if it is possible to directly override {@link
* TypeSerializerConfigSnapshot#resolveSchemaCompatibility} and preform the redirection logic
* TypeSerializerConfigSnapshot#resolveSchemaCompatibility} and perform the redirection logic
* there, then that is the preferred way. This interface is useful for cases where there is not
* enough information, and the new serializer should assist with the redirection.
*/
@@ -147,7 +147,7 @@ public class CheckpointingOptions {
.defaultValue(1)
.withDescription("The maximum number of completed checkpoints to retain.");

/** @deprecated Checkpoints are aways asynchronous. */
/** @deprecated Checkpoints are always asynchronous. */
@Deprecated
public static final ConfigOption<Boolean> ASYNC_SNAPSHOTS =
ConfigOptions.key("state.backend.async")
@@ -161,7 +161,7 @@ public ConfigOption<T> withDeprecatedKeys(String... deprecatedKeys) {

/**
* Creates a new config option, using this option's key and default value, and adding the given
* description. The given description is used when generation the configuration documention.
* description. The given description is used when generation the configuration documentation.
*
* @param description The description for this option.
* @return A new config option, with given description.
@@ -172,7 +172,7 @@ public ConfigOption<T> withDescription(final String description) {

/**
* Creates a new config option, using this option's key and default value, and adding the given
* description. The given description is used when generation the configuration documention.
* description. The given description is used when generation the configuration documentation.
*
* @param description The description for this option.
* @return A new config option, with given description.
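For reference, the fluent pattern this javadoc refers to looks roughly like the snippet below; the option class, key, default, and description text are invented for illustration, mirroring the `ConfigOptions.key(...).defaultValue(...).withDescription(...)` chain visible in `CheckpointingOptions` above:

```java
import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;

class MyConnectorOptions {
    /** Hypothetical option; the description string ends up in the generated configuration docs. */
    static final ConfigOption<Integer> RETRY_COUNT =
            ConfigOptions.key("my-connector.retry-count")
                    .defaultValue(3)
                    .withDescription(
                            "Number of retries before a request is considered failed.");
}
```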
@@ -51,7 +51,7 @@ public abstract class RecoverableFsDataOutputStream extends FSDataOutputStream {
* as a "close in order to dispose" or "close on failure".
*
* <p>In order to persist all previously written data, one needs to call the {@link
* #closeForCommit()} method and call {@link Committer#commit()} on the retured committer
* #closeForCommit()} method and call {@link Committer#commit()} on the returned committer
* object.
*
* @throws IOException Thrown if an error occurred during closing.
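A minimal sketch of the commit sequence described above, assuming a file system that provides a RecoverableWriter; the path handling and payload are illustrative:

```java
import org.apache.flink.core.fs.Path;
import org.apache.flink.core.fs.RecoverableFsDataOutputStream;
import org.apache.flink.core.fs.RecoverableWriter;

import java.io.IOException;
import java.nio.charset.StandardCharsets;

class RecoverableCommitSketch {
    static void writeAndCommit(Path target) throws IOException {
        RecoverableWriter writer = target.getFileSystem().createRecoverableWriter();
        RecoverableFsDataOutputStream out = writer.open(target);

        out.write("hello".getBytes(StandardCharsets.UTF_8));

        // close() alone does not persist the data; to make it visible we close
        // for commit and then commit via the returned committer.
        RecoverableFsDataOutputStream.Committer committer = out.closeForCommit();
        committer.commit();
    }
}
```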
@@ -213,7 +213,7 @@ protected Class<?> resolveProxyClass(String[] interfaces)

/**
* An {@link ObjectInputStream} that ignores serialVersionUID mismatches when deserializing
* objects of anonymous classes or our Scala serializer classes and also replaces occurences of
* objects of anonymous classes or our Scala serializer classes and also replaces occurrences of
* GenericData.Array (from Avro) by a dummy class so that the KryoSerializer can still be
* deserialized without Avro being on the classpath.
*
@@ -349,7 +349,7 @@ public void testReadWithBufferSizeIsMultiple() throws IOException {
assertTrue(format.reachedEnd());
format.close();

// this one must have read one too many, because the next split will skipp the trailing
// this one must have read one too many, because the next split will skip the trailing
// remainder
// which happens to be one full record
assertEquals(3, count);
@@ -821,7 +821,7 @@ public void testUnionFields() {
final Value[][] values =
new Value[][] {
{new IntValue(56), null, new IntValue(-7628761)},
{null, new StringValue("Hellow Test!"), null},
{null, new StringValue("Hello Test!"), null},
{null, null, null, null, null, null, null, null},
{
null, null, null, null, null, null, null, null, null, null, null, null,