Posts

Post not yet marked as solved
0 Replies
5 Views
Dear Sirs, I'm writing an audio application that should show up to 128 horizontal peak meters (width for each is about 150, height 8) stacked inside a ScrollViewReader. For the actual value of each peak meter I have a binding to a CGFloat value. The peak meters work as expected and refresh correctly. For testing I added a timer to my Swift application that fires every 0.05 s, meaning I want to show 20 values per second. Inside the timer function I just assign random CGFloat values in the range 0...1 to the bound values. The peak meters refresh and flicker as expected, but I can see a CPU load of 40-50% in Activity Monitor on my MacBook Air with Apple M2, even when compiled in release mode. I think this is quite high and I'd like to reduce it. Should this be possible? For instance, I thought about blocking the refresh until I've set all values. How could this be done, and would it help? What else could I do? Thanks and best regards, JFreyberger
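A minimal sketch of one way to cut the per-tick work (all names here are illustrative, not from the post): publish all 128 levels from a single observable model and draw every meter in one Canvas, so each timer tick triggers one view update instead of 128 binding invalidations:

```swift
import SwiftUI
import Combine

// Hypothetical model: one @Published array means one objectWillChange
// (and one SwiftUI diff) per timer tick, however many meters there are.
final class MeterModel: ObservableObject {
    @Published var levels = [CGFloat](repeating: 0, count: 128)
}

struct MetersView: View {
    @StateObject private var model = MeterModel()
    private let timer = Timer.publish(every: 0.05, on: .main, in: .common).autoconnect()

    var body: some View {
        ScrollView {
            // Canvas renders all meters in a single drawing pass instead of
            // maintaining 128 separate animated views.
            Canvas { context, _ in
                for (i, level) in model.levels.enumerated() {
                    let bar = CGRect(x: 0, y: CGFloat(i) * 10,
                                     width: 150 * level, height: 8)
                    context.fill(Path(bar), with: .color(.green))
                }
            }
            .frame(width: 150, height: CGFloat(model.levels.count) * 10)
        }
        .onReceive(timer) { _ in
            // Replacing the array once per tick is the "set all values,
            // then refresh" batching the post asks about.
            model.levels = model.levels.map { _ in CGFloat.random(in: 0...1) }
        }
    }
}
```

Whether this actually brings the 40-50% load down on an M2 would need profiling, but collapsing many tiny frequently-updated views into one drawing pass is usually the first lever to try.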
Post not yet marked as solved
0 Replies
17 Views
On macOS, that is. The goals are largely for testing, where we'd like to know the maximum and minimum memory our processes are using, but we'd also like to know it on crash. Our current method is to use ps periodically and grab the appropriate field, but is there a better way? (I looked at MetricKit, but it's not as useful on macOS; I filed FB13640765 "MetricKit would be awesome with more mac features" a couple of months ago.)
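For the periodic-sampling half of this, a hedged alternative to shelling out to ps is reading the process's own physical footprint via task_info; the phys_footprint ledger is roughly the number Activity Monitor shows. A minimal sketch; you would still sample it on a timer and track min/max yourself, and it doesn't cover the on-crash case. It also only works from inside the process being measured, since reading another process's task port requires privileges:

```swift
import Darwin

// Returns this process's physical memory footprint in bytes, or nil on failure.
func currentMemoryFootprint() -> UInt64? {
    var info = task_vm_info_data_t()
    var count = mach_msg_type_number_t(
        MemoryLayout<task_vm_info_data_t>.size / MemoryLayout<integer_t>.size)
    let kr = withUnsafeMutablePointer(to: &info) { infoPtr in
        infoPtr.withMemoryRebound(to: integer_t.self, capacity: Int(count)) { intPtr in
            task_info(mach_task_self_, task_flavor_t(TASK_VM_INFO), intPtr, &count)
        }
    }
    guard kr == KERN_SUCCESS else { return nil }
    return UInt64(info.phys_footprint)
}
```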
Post not yet marked as solved
0 Replies
16 Views
Hello, I am working on a fairly complex iPhone app that controls the front built-in wide angle camera. I need to take and display a sequence of photos that cover the whole range of available focus values. Here is how I do it:

- call setExposureModeCustom to set the first lens position
- wait for the completionHandler to be called back
- capture a photo
- do it again for the next lens position, etc.

This works fine, but it takes longer than I expected for the completionHandler to be called back. From what I've seen, the delay scales with the exposure duration. When I set the exposure duration to the max value:

- on the iPhone 14 Pro, it takes about 3 seconds (3 times the max exposure)
- on the iPhone 8, about 1.3 s (4 times the max exposure)

I was expecting a delay of two times the exposure duration: take a photo, throw one away while changing the lens position, take the next photo, etc., but it takes more than that. I also tried the same thing changing the ISO instead of the focus position, and I get the same kind of delays. I do not think the problem is linked to the way I process the images, because I get the same delay even if I do nothing with the output. Is there something I could do to make things go faster for this use case? Any input would be appreciated. Thanks.

I created a minimal test app to reproduce the issue:

```swift
import Foundation
import AVFoundation

class Main: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let dispatchQueue = DispatchQueue(label: "VideoQueue", qos: .userInitiated)
    let session: AVCaptureSession
    let videoDevice: AVCaptureDevice
    var focus: Float = 0

    override init() {
        session = AVCaptureSession()
        session.beginConfiguration()
        session.sessionPreset = .photo
        videoDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)!
        super.init()
        let videoDeviceInput = try! AVCaptureDeviceInput(device: videoDevice)
        session.addInput(videoDeviceInput)
        let videoDataOutput = AVCaptureVideoDataOutput()
        if session.canAddOutput(videoDataOutput) {
            session.addOutput(videoDataOutput)
            videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
            videoDataOutput.setSampleBufferDelegate(self, queue: dispatchQueue)
        }
        session.commitConfiguration()
        dispatchQueue.async {
            self.startSession()
        }
    }

    func startSession() {
        session.startRunning()
        // lock max exposure duration
        try! videoDevice.lockForConfiguration()
        let exposure = videoDevice.activeFormat.maxExposureDuration.seconds * 0.5
        print("set max exposure", exposure)
        videoDevice.setExposureModeCustom(duration: CMTime(seconds: exposure, preferredTimescale: 1000),
                                          iso: videoDevice.activeFormat.minISO) { time in
            print("did set max exposure")
            self.changeFocus()
        }
        videoDevice.unlockForConfiguration()
    }

    func changeFocus() {
        let date = Date.now
        print("set focus", focus)
        try! videoDevice.lockForConfiguration()
        videoDevice.setFocusModeLocked(lensPosition: focus) { time in
            let dt = abs(date.timeIntervalSinceNow)
            print("did set focus - took:", dt, "frames:", dt / self.videoDevice.exposureDuration.seconds)
            self.next()
        }
        videoDevice.unlockForConfiguration()
    }

    func next() {
        focus += 0.02
        if focus > 1 {
            print("done")
            return
        }
        changeFocus()
    }

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        print("did receive video frame")
    }
}
```
Post not yet marked as solved
1 Reply
23 Views
I am trying to disable screenshots in an iOS app, but Apple does not expose any API for this. As a workaround, when the user takes a screenshot we cover the whole screen with a secure text field, so the screenshot comes out blank. Has anyone tried the same approach to prevent screenshots? If yes, did Apple take action against those applications, or reject them if they found out? It would be a great help if Apple could provide some insight on this.
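For reference, the workaround usually circulated for this is the one the post describes: re-parent the sensitive view's layer under a secure UITextField's layer, so the system redacts it in screenshots. A hedged sketch (the helper name is made up, and this leans on undocumented rendering behavior that could change in any iOS release; Apple offers no supported API for blocking screenshots):

```swift
import UIKit

extension UIView {
    // Hypothetical helper. Secure text fields are excluded from screenshots
    // and screen recordings by the system; hanging this view's layer under
    // one extends that redaction to the whole view. Undocumented behavior.
    func protectFromScreenshots() {
        let field = UITextField()
        field.isSecureTextEntry = true
        field.isUserInteractionEnabled = false
        addSubview(field)
        layer.superlayer?.addSublayer(field.layer)
        field.layer.sublayers?.first?.addSublayer(layer)
    }
}
```

Whether App Review objects is anecdotal either way; since the behavior is undocumented, it can also silently stop working after an OS update.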
Post not yet marked as solved
0 Replies
55 Views
Our keyboard extension can be used independently with native apps like Notes or Safari in the China region; however, in the Taiwan region the keyboard can only be opened inside the app from the same project. I've checked some articles about how MDM manages extensions, and made sure the RequestsOpenAccess option in the keyboard extension's Info.plist is set to Yes. I'm not sure whether there is anything I missed, or if I just need to inform the client to reach out to their MDM administrator and modify some restrictions. From the documentation: "If a keyboard supports mobile device management (MDM), it can work with managed apps." and "App extensions give third-party developers a way to provide functionality to other apps or even to key systems built into the operating systems." See also: Allow full access to custom keyboard in iOS
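One thing worth verifying at runtime, as a minimal sketch: RequestsOpenAccess in the Info.plist only requests the capability, while the hasFullAccess property reports what the user's toggle (or an MDM restriction) actually granted. If it comes back false on the Taiwan devices but true on the China ones, that points at a managed restriction rather than your configuration:

```swift
import UIKit

class KeyboardViewController: UIInputViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // hasFullAccess reflects the "Allow Full Access" switch, which an
        // MDM profile can force off even if RequestsOpenAccess is YES.
        if !hasFullAccess {
            print("Full access not granted; likely the toggle or an MDM restriction")
        }
    }
}
```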
Post not yet marked as solved
0 Replies
45 Views
I'm adding many users for internal testing purposes. A few users didn't receive the "welcome to Apple Developer Program" email after signing in from the invitation email. I've checked, and they are not on the user list, so I can't add them to the internal tester group. We've tried many times. I noticed that every user who had this problem didn't see the "trust this device" page during the sign-in process. How should I fix this problem?
Post not yet marked as solved
3 Replies
69 Views
Hello, I am new to iOS development and I am trying to create a new project, but the SwiftUI view is not showing.
Post not yet marked as solved
1 Reply
48 Views
Hello everyone! I'm currently working on an iOS app developed in Swift that involves connecting to a specific Bluetooth device and exchanging data even when the app is terminated or running in the background. I just want to understand why the CBCentralManager should be implicitly unwrapped to use it. I have checked out a couple of Apple developer sample projects, and it was always declared implicitly unwrapped. Can someone help me understand the reason behind this, and also what issues or scenarios could be triggered if we declare the central manager as an optional, "CBCentralManager?". Thanks in advance!
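A minimal sketch of the pattern behind those samples: the manager wants its delegate at creation time, and self only exists after super.init(), so the property cannot be a non-optional let initialized inline. The implicitly unwrapped declaration records "this is nil only for an instant during init":

```swift
import CoreBluetooth

final class BluetoothController: NSObject, CBCentralManagerDelegate {
    // Implicitly unwrapped: nil only between super.init() and the next
    // line of init, never at any point a caller can observe.
    var centralManager: CBCentralManager!

    override init() {
        super.init()
        centralManager = CBCentralManager(delegate: self, queue: nil)
    }

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        print("Central state:", central.state.rawValue)
    }
}
```

Declaring it as CBCentralManager? compiles just as well; the trade-off is optional-chaining at every use, and a bug that leaves it nil silently does nothing instead of trapping at the faulty line, which is presumably why the samples prefer the implicitly unwrapped form.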
Post not yet marked as solved
0 Replies
45 Views
Hello! Is it possible to turn on hand pass-through (hand cutout, not sure what the correct name is; real hands from the camera) in WebXR when in immersive-vr with the hand-tracking feature enabled? I only see my hands when I disable the hand-tracking feature in WebXR, but then I don't get the joint positions and orientations.
Post not yet marked as solved
0 Replies
56 Views
Hi everyone! We are wondering whether it's possible for two macOS apps to use Audio Engine's Voice Processing at the same time, since we have had issues trying to do so. Specifically, our app seems to cut off the input stream of the other, but only when it has Voice Processing enabled. We are developing a macOS app that records microphone input simultaneously with videoconference apps like Zoom. We are using Audio Engine's Voice Processing as in this sample: https://developer.apple.com/documentation/avfaudio/audio_engine/audio_units/using_voice_processing We have also noticed this behaviour in Safari when recording audio with the JavaScript Web Audio API, which also seems to use Voice Processing under the hood because of its echo cancellation. Any leads on this would be greatly appreciated! Thanks
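For anyone reproducing this, a minimal sketch of the capture side with voice processing enabled, along the lines of the linked sample (the open question being what happens when a second process does the same on the same device):

```swift
import AVFoundation

let engine = AVAudioEngine()
do {
    // Voice processing must be enabled before the engine starts.
    try engine.inputNode.setVoiceProcessingEnabled(true)
    let format = engine.inputNode.outputFormat(forBus: 0)
    engine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        // Consume microphone buffers here (e.g. write them to a file).
    }
    try engine.start()
} catch {
    print("Voice processing capture failed:", error)
}
```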
Post not yet marked as solved
1 Reply
52 Views
Hi everyone! We used to have an Intel Mac machine where we generated the Developer ID Installer & Application certs for the signing and notarization process. That process worked fine. Now we have moved from the Intel machine to an M1 Mac, where we want to do the same as before. I have tried two different approaches, but ended up with the same result:

1. I exported the cert with the private key from the Intel machine to the M1 machine, but when I try to sign, I get: Invalid signature. (Not sure what this error means in this case, as everything works on the Intel machine. I am guessing the cipher for creating either the private key or the signature differs between the architectures.)
2. I tried to generate new certs on the M1 machine, but I get the following error: You already have a current Developer ID installer certificate or a pending certificate request. I tried with the same account, and also with a different account. In both cases I got the same error.

I created a ticket with Apple, where they said to expect a reply within one to two business days, but no luck yet.
Post not yet marked as solved
0 Replies
43 Views
Per the Apple API documentation (https://developer.apple.com/documentation/tipkit/tipuipopoverviewcontroller/imagesize), imageSize is meant to control the size of the image within the tip, but it doesn't seem to be working. My code is as follows, taken from the Apple docs example:

```swift
tipObservationTask = tipObservationTask ?? Task { @MainActor [weak controller] in
    for await shouldDisplay in tip.shouldDisplayUpdates {
        if shouldDisplay {
            let popoverController = TipUIPopoverViewController(tip, sourceItem: sourceItem)
            popoverController.imageSize = CGSize(width: 10, height: 10)
            controller?.present(popoverController, animated: true)
            tipPopoverController = popoverController
        } else {
            if controller?.presentedViewController is TipUIPopoverViewController {
                controller?.dismiss(animated: true)
                tipPopoverController = nil
            }
        }
    }
}
```
Posted by L4D
Post not yet marked as solved
0 Replies
45 Views
Hello, I've got an "Other Swift Flags" setting configured for one of my schemes. It works when I build locally, but not when I build using Xcode Cloud. My Other Swift Flags is set up like this: My code looks something like this:

```swift
class Config {
    #if LIMITED
    static let configProperty = 1
    #endif
}
```

However, Xcode Cloud says Config has no member 'configProperty' when I build my 'Limited' scheme.
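A hedged guess at the usual cause: Other Swift Flags live in a build configuration, and Xcode Cloud only applies them if the workflow archives the same scheme and configuration you build locally. Assuming the flag is -DLIMITED (the screenshot isn't visible here), the equivalent in an xcconfig attached to the Limited scheme's configuration would be:

```
// Hypothetical xcconfig; SWIFT_ACTIVE_COMPILATION_CONDITIONS is the
// dedicated setting for #if conditions, equivalent to -DLIMITED in
// Other Swift Flags.
SWIFT_ACTIVE_COMPILATION_CONDITIONS = $(inherited) LIMITED
```

It's also worth double-checking that the Xcode Cloud workflow's archive action is actually set to the 'Limited' scheme, rather than a default scheme whose configuration never defines the condition.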
Post not yet marked as solved
0 Replies
47 Views
What documents would count for verifying my address for the EU DSA? Do I have to own the address, or just be reachable at it? My family shares a PO Box that I was hoping to use so I don't have to list my home address, or pay for my own box that would rarely receive any mail.
Post not yet marked as solved
3 Replies
63 Views
We have a random issue where, when ARKitSession.run() is called, monitorSessionEvents() receives .paused and it never transitions to .running. If we exit the Immersive Space and call ARKitSession.run() again, it works fine. Unfortunately, this is very difficult to handle in the flow of our app.
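For context, a hedged sketch of what a monitorSessionEvents() loop typically looks like on visionOS; if the providers land in .paused immediately after run(), the error payload delivered here is the first thing worth logging:

```swift
import ARKit

func monitorSessionEvents(_ session: ARKitSession) async {
    for await event in session.events {
        switch event {
        case .dataProviderStateChanged(let providers, let newState, let error):
            // A .paused state right after run() should surface its cause here.
            print("providers:", providers, "state:", newState,
                  "error:", String(describing: error))
        case .authorizationChanged(let type, let status):
            print("authorization:", type, status)
        @unknown default:
            break
        }
    }
}
```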
Post not yet marked as solved
1 Reply
49 Views
Hello, I am trying to enumerate all the ways on macOS to launch an application when a user opens a session. Please note I am not looking for a way that requires root or sudo privileges. I have found these so far:

- ~/Library/LaunchAgents/
- Login Items (in macOS System Settings)

But are there others? Thanks
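Besides those two, one more user-level mechanism, shown as a sketch (macOS 13 and later, no elevated privileges): an app can register itself programmatically with SMAppService, which populates the same Login Items list in System Settings:

```swift
import ServiceManagement

// Registers the calling app as a login item for the current user.
do {
    try SMAppService.mainApp.register()
    print("Login item status:", SMAppService.mainApp.status.rawValue)
} catch {
    print("Registration failed:", error)
}
```

SMAppService is the modern replacement for the older SMLoginItemSetEnabled helper approach, so it ends up in the same bucket as the Login Items entry above rather than being a separate launch path.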
Post not yet marked as solved
1 Reply
50 Views
I had the DuckDuckGo browser installed and working fine on my MacBook Pro for years. Currently DDG Version 1.85.0, MBP M2 Max, Sonoma 14.4.1. Up until two days ago everything worked fine. Then, while using DDG to access a Copyright.gov site, a security alert popped up. Because I had followed a link from an email from the copyright.gov office, I suspected it might be suspicious, so I hit escape until it canceled the load and exited DDG. I used the browser successfully the rest of the day with no issues. The following day DDG would not open any websites I tried except two (google and duotrope.com). The next day, no website would open. The error message I received was "DuckDuckGo Can't Load This Page" and "The network connection was lost". There is no problem with my network: all email, other browsers, and streaming services work fine. But not DDG. I have reinstalled DDG several times, deleting any remaining DDG files I could find in between. I even tried it in safe mode, and I rebooted the WiFi router. Still the same error. I've checked network settings, keychain permissions, and antivirus permissions (I use Intego). I cannot seem to figure out what could be causing the issue. I have another MacBook Pro with DDG installed and it works fine. I tried copying that installation to this MBP and I get the same error. Hell, at this point I'd light incense candles and sacrifice a chicken if it would work. Does anyone have any idea what could be causing my issue?
Post not yet marked as solved
0 Replies
48 Views
I am using AVFoundation to capture a photo. This was all working fine, until I realized all the photos were saving to the photo library in portrait orientation. I wanted them to save in the orientation the device was in when the camera took the picture, much as the built-in camera app does on iOS. So I added this code:

```swift
if let videoConnection = photoOutput.connection(with: .video),
   videoConnection.isVideoOrientationSupported {
    // From() is just a helper to get video orientations from the device orientation.
    videoConnection.videoOrientation = .from(UIDevice.current.orientation)
    print("Photo orientation set to \(videoConnection.videoOrientation).")
}
```

With this addition, the first photo taken after a device rotation logs this error in the debugger:

<<<< FigCaptureSessionRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSessionRemote.m:866) - (err=-12784)

Subsequent photos do not repeat the error. Once you rotate the device again, same behavior. Photos taken after the app loads, but before any rotation, do not produce this error. I have tried many things, no dice. If I comment this code out it works without error, but of course the photos are all saved in portrait orientation again.
Post not yet marked as solved
0 Replies
49 Views
I don't know where the area for beta feedback for the Vision Pro is, so I'm submitting it here. Using beta 1.2 on the Vision Pro, there are a couple of issues I've seen. Windows that were previously open now seem to close automatically, even after just an hour or two without restarting or shutting down. I also keep getting the message about either "being too far from display" or "being too close to the display", depending on which light seal cushion I have on (either 25W or 25W+). Let me know if there is another area where I should be submitting feedback for beta issues.
Post not yet marked as solved
0 Replies
55 Views
Any ideas on this would be greatly appreciated. Is it a process error or a setup error, or should I be raising a ticket for this? I continue to get two error messages when attempting to push/pull to a remote repository. Example:

1. Create a new project on the desktop using Xcode, File > New Project (tick 'create Git repository on my Mac').
2. Change the 'ContentView' text to "Hello world!2". Integrate > Commit, stage all, amend, commit.
3. Create a new remote repository (I have checked GitHub and a new repository has been added).
4. Change the 'ContentView' text to "Hello world!3". Integrate > Commit, stage all, amend, commit. All looks fine so far.
5. Integrate > Push to origin/main. Error message: "The local repository is out of date, make sure all changes have been pulled from the remote repository and try again". There is now 1 up and 1 down arrow next to the branch.
6. Integrate > Pull. Error message: "An unknown error has occurred. No merge base found".

I have tried this several times with the same response, using an iMac with Xcode 15.3; I also tried Xcode 15.2.
