Update to a working version of Scalariform
parent cae070bd93
commit c66ce62d63

616 changed files with 5966 additions and 5436 deletions
@@ -305,7 +305,7 @@ object Source {
    */
   def zipWithN[T, O](zipper: function.Function[java.util.List[T], O], sources: java.util.List[Source[T, _ <: Any]]): Source[O, NotUsed] = {
     val seq = if (sources != null) Util.immutableSeq(sources).map(_.asScala) else immutable.Seq()
-    new Source(scaladsl.Source.zipWithN[T, O](seq => zipper.apply(seq.asJava))(seq))
+    new Source(scaladsl.Source.zipWithN[T, O](seq ⇒ zipper.apply(seq.asJava))(seq))
   }
 
   /**
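The wrapper above delegates to the scaladsl `zipWithN`. A minimal sketch of that call from Scala (illustrative values and system name, not taken from the diff):

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{ Sink, Source }

object ZipWithNSketch extends App {
  implicit val system = ActorSystem("zipWithN-sketch")
  implicit val materializer = ActorMaterializer()

  // The zipper receives one element from every input source and combines them into one output element.
  val sources = List(Source(1 to 3), Source(10 to 12), Source(100 to 102))
  Source.zipWithN[Int, Int](_.sum)(sources).runWith(Sink.foreach(println))
  // Prints 111, 114, 117 (element-wise sums), then the stream completes.
  // In real code, terminate the ActorSystem once the stream is done: system.terminate()
}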
@@ -341,61 +341,65 @@ object Source {
     new Source(scaladsl.Source.queue[T](bufferSize, overflowStrategy).mapMaterializedValue(new SourceQueueAdapter(_)))
 
   /**
-   * Start a new `Source` from some resource which can be opened, read and closed.
-   * Interaction with resource happens in a blocking way.
-   *
-   * Example:
-   * {{{
-   * Source.unfoldResource(
-   *   () -> new BufferedReader(new FileReader("...")),
-   *   reader -> reader.readLine(),
-   *   reader -> reader.close())
-   * }}}
-   *
-   * You can use the supervision strategy to handle exceptions for `read` function. All exceptions thrown by `create`
-   * or `close` will fail the stream.
-   *
-   * `Restart` supervision strategy will close and create blocking IO again. Default strategy is `Stop` which means
-   * that stream will be terminated on error in `read` function by default.
-   *
-   * You can configure the default dispatcher for this Source by changing the `akka.stream.blocking-io-dispatcher` or
-   * set it for a given Source by using [[ActorAttributes]].
-   *
-   * @param create - function that is called on stream start and creates/opens resource.
-   * @param read - function that reads data from opened resource. It is called each time backpressure signal
-   *               is received. Stream calls close and completes when `read` returns None.
-   * @param close - function that closes resource
-   */
-  def unfoldResource[T, S](create: function.Creator[S],
-                           read: function.Function[S, Optional[T]],
-                           close: function.Procedure[S]): javadsl.Source[T, NotUsed] =
-    new Source(scaladsl.Source.unfoldResource[T,S](create.create,
+   * Start a new `Source` from some resource which can be opened, read and closed.
+   * Interaction with resource happens in a blocking way.
+   *
+   * Example:
+   * {{{
+   * Source.unfoldResource(
+   *   () -> new BufferedReader(new FileReader("...")),
+   *   reader -> reader.readLine(),
+   *   reader -> reader.close())
+   * }}}
+   *
+   * You can use the supervision strategy to handle exceptions for `read` function. All exceptions thrown by `create`
+   * or `close` will fail the stream.
+   *
+   * `Restart` supervision strategy will close and create blocking IO again. Default strategy is `Stop` which means
+   * that stream will be terminated on error in `read` function by default.
+   *
+   * You can configure the default dispatcher for this Source by changing the `akka.stream.blocking-io-dispatcher` or
+   * set it for a given Source by using [[ActorAttributes]].
+   *
+   * @param create - function that is called on stream start and creates/opens resource.
+   * @param read - function that reads data from opened resource. It is called each time backpressure signal
+   *               is received. Stream calls close and completes when `read` returns None.
+   * @param close - function that closes resource
+   */
+  def unfoldResource[T, S](
+    create: function.Creator[S],
+    read: function.Function[S, Optional[T]],
+    close: function.Procedure[S]): javadsl.Source[T, NotUsed] =
+    new Source(scaladsl.Source.unfoldResource[T, S](
+      create.create,
+      (s: S) ⇒ read.apply(s).asScala, close.apply))
 
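The scaladoc above gives the reader example in Java lambda form. The same example written against the scaladsl `unfoldResource` that this wrapper delegates to looks roughly as follows (a sketch; the file name is illustrative):

import java.io.{ BufferedReader, FileReader }
import akka.NotUsed
import akka.stream.scaladsl.Source

val lines: Source[String, NotUsed] =
  Source.unfoldResource[String, BufferedReader](
    () => new BufferedReader(new FileReader("lines.txt")), // create: open the resource on stream start
    reader => Option(reader.readLine()),                   // read: None (null at EOF) completes the stream
    reader => reader.close())                              // close: called on completion or failure
// Reads run on the blocking-io dispatcher mentioned in the scaladoc; run the stream
// with e.g. lines.runWith(Sink.foreach(println)) given an implicit materializer.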
   /**
-   * Start a new `Source` from some resource which can be opened, read and closed.
-   * It's similar to `unfoldResource` but takes functions that return `CompletionStage` instead of plain values.
-   *
-   * You can use the supervision strategy to handle exceptions for `read` function or failures of produced `Futures`.
-   * All exceptions thrown by `create` or `close` as well as fails of returned futures will fail the stream.
-   *
-   * `Restart` supervision strategy will close and create resource. Default strategy is `Stop` which means
-   * that stream will be terminated on error in `read` function (or future) by default.
-   *
-   * You can configure the default dispatcher for this Source by changing the `akka.stream.blocking-io-dispatcher` or
-   * set it for a given Source by using [[ActorAttributes]].
-   *
-   * @param create - function that is called on stream start and creates/opens resource.
-   * @param read - function that reads data from opened resource. It is called each time backpressure signal
-   *               is received. Stream calls close and completes when `CompletionStage` from read function returns None.
-   * @param close - function that closes resource
-   */
-  def unfoldResourceAsync[T, S](create: function.Creator[CompletionStage[S]],
-                                read: function.Function[S, CompletionStage[Optional[T]]],
-                                close: function.Function[S, CompletionStage[Done]]): javadsl.Source[T, NotUsed] =
-    new Source(scaladsl.Source.unfoldResourceAsync[T,S](() ⇒ create.create().toScala,
-      (s: S) ⇒ read.apply(s).toScala.map(_.asScala)(akka.dispatch.ExecutionContexts.sameThreadExecutionContext),
-      (s: S) ⇒ close.apply(s).toScala))
+   * Start a new `Source` from some resource which can be opened, read and closed.
+   * It's similar to `unfoldResource` but takes functions that return `CompletionStage` instead of plain values.
+   *
+   * You can use the supervision strategy to handle exceptions for `read` function or failures of produced `Futures`.
+   * All exceptions thrown by `create` or `close` as well as fails of returned futures will fail the stream.
+   *
+   * `Restart` supervision strategy will close and create resource. Default strategy is `Stop` which means
+   * that stream will be terminated on error in `read` function (or future) by default.
+   *
+   * You can configure the default dispatcher for this Source by changing the `akka.stream.blocking-io-dispatcher` or
+   * set it for a given Source by using [[ActorAttributes]].
+   *
+   * @param create - function that is called on stream start and creates/opens resource.
+   * @param read - function that reads data from opened resource. It is called each time backpressure signal
+   *               is received. Stream calls close and completes when `CompletionStage` from read function returns None.
+   * @param close - function that closes resource
+   */
+  def unfoldResourceAsync[T, S](
+    create: function.Creator[CompletionStage[S]],
+    read: function.Function[S, CompletionStage[Optional[T]]],
+    close: function.Function[S, CompletionStage[Done]]): javadsl.Source[T, NotUsed] =
+    new Source(scaladsl.Source.unfoldResourceAsync[T, S](
+      () ⇒ create.create().toScala,
+      (s: S) ⇒ read.apply(s).toScala.map(_.asScala)(akka.dispatch.ExecutionContexts.sameThreadExecutionContext),
+      (s: S) ⇒ close.apply(s).toScala))
 }
 
 /**
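A sketch of the asynchronous variant against the scaladsl `unfoldResourceAsync`; a plain iterator stands in for a real non-blocking client, so the `Future`s here are only illustrative:

import scala.concurrent.Future
import akka.{ Done, NotUsed }
import akka.stream.scaladsl.Source

val asyncLines: Source[String, NotUsed] =
  Source.unfoldResourceAsync[String, Iterator[String]](
    () => Future.successful(Iterator("first", "second", "third")),      // create
    it => Future.successful(if (it.hasNext) Some(it.next()) else None), // read: None completes the stream
    it => Future.successful(Done))                                      // close
// With a real client, each of these would return the client's own Futures
// (or CompletionStages on the javadsl side, as in the signature above).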
@@ -577,8 +581,9 @@ final class Source[+Out, +Mat](delegate: scaladsl.Source[Out, Mat]) extends Grap
    *
    * @see [[#concat]].
    */
-  def concatMat[T >: Out, M, M2](that: Graph[SourceShape[T], M],
-                                 matF: function.Function2[Mat, M, M2]): javadsl.Source[T, M2] =
+  def concatMat[T >: Out, M, M2](
+    that: Graph[SourceShape[T], M],
+    matF: function.Function2[Mat, M, M2]): javadsl.Source[T, M2] =
     new Source(delegate.concatMat(that)(combinerToScala(matF)))
 
   /**
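The `matF` argument decides how the two materialized values are combined. A sketch of the same pattern on the scaladsl side using `Keep` (values illustrative):

import akka.NotUsed
import akka.stream.scaladsl.{ Keep, Source }

val first: Source[Int, NotUsed] = Source(1 to 3)
val second: Source[Int, NotUsed] = Source(4 to 6)

// Keep.right keeps `second`'s materialized value (both are NotUsed here,
// so the choice is only illustrative); Keep.left and Keep.both are the alternatives.
val concatenated: Source[Int, NotUsed] =
  first.concatMat(second)(Keep.right)
// Emits 1, 2, 3, 4, 5, 6. The javadsl overload in the hunk above does the same,
// with a function.Function2 in place of the Scala function.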
@@ -617,8 +622,9 @@ final class Source[+Out, +Mat](delegate: scaladsl.Source[Out, Mat]) extends Grap
    *
    * @see [[#prepend]].
    */
-  def prependMat[T >: Out, M, M2](that: Graph[SourceShape[T], M],
-                                  matF: function.Function2[Mat, M, M2]): javadsl.Source[T, M2] =
+  def prependMat[T >: Out, M, M2](
+    that: Graph[SourceShape[T], M],
+    matF: function.Function2[Mat, M, M2]): javadsl.Source[T, M2] =
     new Source(delegate.prependMat(that)(combinerToScala(matF)))
 
   /**
@@ -645,8 +651,9 @@ final class Source[+Out, +Mat](delegate: scaladsl.Source[Out, Mat]) extends Grap
    *
    * @see [[#alsoTo]]
    */
-  def alsoToMat[M2, M3](that: Graph[SinkShape[Out], M2],
-                        matF: function.Function2[Mat, M2, M3]): javadsl.Source[Out, M3] =
+  def alsoToMat[M2, M3](
+    that: Graph[SinkShape[Out], M2],
+    matF: function.Function2[Mat, M2, M3]): javadsl.Source[Out, M3] =
     new Source(delegate.alsoToMat(that)(combinerToScala(matF)))
 
   /**
@@ -718,8 +725,9 @@ final class Source[+Out, +Mat](delegate: scaladsl.Source[Out, Mat]) extends Grap
    *
    * @see [[#merge]].
    */
-  def mergeMat[T >: Out, M, M2](that: Graph[SourceShape[T], M],
-                                matF: function.Function2[Mat, M, M2]): javadsl.Source[T, M2] =
+  def mergeMat[T >: Out, M, M2](
+    that: Graph[SourceShape[T], M],
+    matF: function.Function2[Mat, M, M2]): javadsl.Source[T, M2] =
     new Source(delegate.mergeMat(that)(combinerToScala(matF)))
 
   /**
@@ -778,8 +786,9 @@ final class Source[+Out, +Mat](delegate: scaladsl.Source[Out, Mat]) extends Grap
    *
    * @see [[#zip]].
    */
-  def zipMat[T, M, M2](that: Graph[SourceShape[T], M],
-                       matF: function.Function2[Mat, M, M2]): javadsl.Source[Out @uncheckedVariance Pair T, M2] =
+  def zipMat[T, M, M2](
+    that: Graph[SourceShape[T], M],
+    matF: function.Function2[Mat, M, M2]): javadsl.Source[Out @uncheckedVariance Pair T, M2] =
     this.viaMat(Flow.create[Out].zipMat(that, Keep.right[NotUsed, M]), matF)
 
   /**
@@ -794,8 +803,9 @@ final class Source[+Out, +Mat](delegate: scaladsl.Source[Out, Mat]) extends Grap
    *
    * '''Cancels when''' downstream cancels
    */
-  def zipWith[Out2, Out3](that: Graph[SourceShape[Out2], _],
-                          combine: function.Function2[Out, Out2, Out3]): javadsl.Source[Out3, Mat] =
+  def zipWith[Out2, Out3](
+    that: Graph[SourceShape[Out2], _],
+    combine: function.Function2[Out, Out2, Out3]): javadsl.Source[Out3, Mat] =
     new Source(delegate.zipWith[Out2, Out3](that)(combinerToScala(combine)))
 
   /**
@@ -807,9 +817,10 @@ final class Source[+Out, +Mat](delegate: scaladsl.Source[Out, Mat]) extends Grap
    *
    * @see [[#zipWith]].
    */
-  def zipWithMat[Out2, Out3, M, M2](that: Graph[SourceShape[Out2], M],
-                                    combine: function.Function2[Out, Out2, Out3],
-                                    matF: function.Function2[Mat, M, M2]): javadsl.Source[Out3, M2] =
+  def zipWithMat[Out2, Out3, M, M2](
+    that: Graph[SourceShape[Out2], M],
+    combine: function.Function2[Out, Out2, Out3],
+    matF: function.Function2[Mat, M, M2]): javadsl.Source[Out3, M2] =
     new Source(delegate.zipWithMat[Out2, Out3, M, M2](that)(combinerToScala(combine))(combinerToScala(matF)))
 
   /**
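A sketch of `zipWith` on the scaladsl side; `zipWithMat` adds the same `matF` combination shown for `concatMat` above (values illustrative):

import akka.NotUsed
import akka.stream.scaladsl.Source

val numbers = Source(1 to 3)
val letters = Source(List("a", "b", "c"))

// combine is applied to each pair of elements; the zip completes as soon as
// either input completes.
val labelled: Source[String, NotUsed] =
  numbers.zipWith(letters)((n, s) => s"$s$n") // "a1", "b2", "c3"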
@@ -877,27 +888,26 @@ final class Source[+Out, +Mat](delegate: scaladsl.Source[Out, Mat]) extends Grap
   def recoverWith[T >: Out](pf: PartialFunction[Throwable, _ <: Graph[SourceShape[T], NotUsed]]): Source[T, Mat @uncheckedVariance] =
     new Source(delegate.recoverWith(pf))
 
-
   /**
-   * RecoverWithRetries allows to switch to alternative Source on flow failure. It will stay in effect after
-   * a failure has been recovered up to `attempts` number of times so that each time there is a failure
-   * it is fed into the `pf` and a new Source may be materialized. Note that if you pass in 0, this won't
-   * attempt to recover at all. Passing in a negative number will behave exactly the same as `recoverWith`.
-   *
-   * Since the underlying failure signal onError arrives out-of-band, it might jump over existing elements.
-   * This stage can recover the failure signal, but not the skipped elements, which will be dropped.
-   *
-   * '''Emits when''' element is available from the upstream or upstream is failed and element is available
-   * from alternative Source
-   *
-   * '''Backpressures when''' downstream backpressures
-   *
-   * '''Completes when''' upstream completes or upstream failed with exception pf can handle
-   *
-   * '''Cancels when''' downstream cancels
-   *
-   */
-  def recoverWithRetries[T >: Out](attempts: Int, pf: PartialFunction[Throwable, _ <: Graph[SourceShape[T], NotUsed]]): Source[T, Mat @uncheckedVariance] =
+   * RecoverWithRetries allows to switch to alternative Source on flow failure. It will stay in effect after
+   * a failure has been recovered up to `attempts` number of times so that each time there is a failure
+   * it is fed into the `pf` and a new Source may be materialized. Note that if you pass in 0, this won't
+   * attempt to recover at all. Passing in a negative number will behave exactly the same as `recoverWith`.
+   *
+   * Since the underlying failure signal onError arrives out-of-band, it might jump over existing elements.
+   * This stage can recover the failure signal, but not the skipped elements, which will be dropped.
+   *
+   * '''Emits when''' element is available from the upstream or upstream is failed and element is available
+   * from alternative Source
+   *
+   * '''Backpressures when''' downstream backpressures
+   *
+   * '''Completes when''' upstream completes or upstream failed with exception pf can handle
+   *
+   * '''Cancels when''' downstream cancels
+   *
+   */
+  def recoverWithRetries[T >: Out](attempts: Int, pf: PartialFunction[Throwable, _ <: Graph[SourceShape[T], NotUsed]]): Source[T, Mat @uncheckedVariance] =
     new Source(delegate.recoverWithRetries(attempts, pf))
   /**
    * Transform each input element into an `Iterable` of output elements that is
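A sketch of `recoverWithRetries` against the scaladsl API, with an illustrative failing source and fallback:

import akka.NotUsed
import akka.stream.scaladsl.Source

// A source that fails after emitting 1 and 2.
val failing: Source[Int, NotUsed] =
  Source(1 to 3).map(n => if (n == 3) throw new RuntimeException("boom") else n)

// On failure, switch (at most once here) to the fallback source; once the
// `attempts` budget is used up, further failures are passed downstream as errors.
val recovered: Source[Int, NotUsed] =
  failing.recoverWithRetries(attempts = 1, {
    case _: RuntimeException => Source(10 to 12)
  })
// Typically emits 1, 2, 10, 11, 12; as the scaladoc notes, elements already in
// flight may be skipped when the failure signal overtakes them.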
@@ -1827,18 +1837,18 @@ final class Source[+Out, +Mat](delegate: scaladsl.Source[Out, Mat]) extends Grap
     new Source(delegate.idleTimeout(timeout))
 
   /**
-   * If the time between the emission of an element and the following downstream demand exceeds the provided timeout,
-   * the stream is failed with a [[java.util.concurrent.TimeoutException]]. The timeout is checked periodically,
-   * so the resolution of the check is one period (equals to timeout value).
-   *
-   * '''Emits when''' upstream emits an element
-   *
-   * '''Backpressures when''' downstream backpressures
-   *
-   * '''Completes when''' upstream completes or fails if timeout elapses between element emission and downstream demand.
-   *
-   * '''Cancels when''' downstream cancels
-   */
+   * If the time between the emission of an element and the following downstream demand exceeds the provided timeout,
+   * the stream is failed with a [[java.util.concurrent.TimeoutException]]. The timeout is checked periodically,
+   * so the resolution of the check is one period (equals to timeout value).
+   *
+   * '''Emits when''' upstream emits an element
+   *
+   * '''Backpressures when''' downstream backpressures
+   *
+   * '''Completes when''' upstream completes or fails if timeout elapses between element emission and downstream demand.
+   *
+   * '''Cancels when''' downstream cancels
+   */
   def backpressureTimeout(timeout: FiniteDuration): javadsl.Source[Out, Mat] =
     new Source(delegate.backpressureTimeout(timeout))
 
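A one-line sketch of `backpressureTimeout` on the scaladsl side; the 1-second value is illustrative:

import scala.concurrent.duration._
import akka.NotUsed
import akka.stream.scaladsl.Source

// Fails the stream with a TimeoutException if downstream takes longer than
// one second to demand the next element after an emission.
val guarded: Source[Int, NotUsed] =
  Source(1 to 1000).backpressureTimeout(1.second)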
@@ -1949,11 +1959,11 @@ final class Source[+Out, +Mat](delegate: scaladsl.Source[Out, Mat]) extends Grap
     new Source(delegate.watchTermination()((left, right) ⇒ matF(left, right.toJava)))
 
   /**
-   * Materializes to `FlowMonitor[Out]` that allows monitoring of the current flow. All events are propagated
-   * by the monitor unchanged. Note that the monitor inserts a memory barrier every time it processes an
-   * event, and may therefore affect performance.
-   * The `combine` function is used to combine the `FlowMonitor` with this flow's materialized value.
-   */
+   * Materializes to `FlowMonitor[Out]` that allows monitoring of the current flow. All events are propagated
+   * by the monitor unchanged. Note that the monitor inserts a memory barrier every time it processes an
+   * event, and may therefore affect performance.
+   * The `combine` function is used to combine the `FlowMonitor` with this flow's materialized value.
+   */
   def monitor[M]()(combine: function.Function2[Mat, FlowMonitor[Out], M]): javadsl.Source[Out, M] =
     new Source(delegate.monitor()(combinerToScala(combine)))
 
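A sketch of `monitor` on the scaladsl side, keeping the `FlowMonitor` as the materialized value; the system name and the state values mentioned in the comment (from `FlowMonitorState`, as an assumption) are illustrative:

import akka.actor.ActorSystem
import akka.stream.{ ActorMaterializer, FlowMonitor }
import akka.stream.scaladsl.{ Keep, Sink, Source }

implicit val system = ActorSystem("monitor-sketch")
implicit val materializer = ActorMaterializer()

// Keep.right keeps the FlowMonitor; running the graph hands it back to us.
val streamMonitor: FlowMonitor[Int] =
  Source(1 to 100).monitor()(Keep.right).to(Sink.ignore).run()
// streamMonitor.state reports the last event the stage processed,
// e.g. Initialized right after start, Received(elem) while elements flow, then Finished.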